The Center for Internet and Society at Stanford Law School is a leader in the study of the law and policy around the Internet and other emerging technologies.
Before the novel coronavirus arrived on its shores, the United States had spent decades becoming a heavily digitized society. Now, the pandemic is deepening that dependence on digital technology, converting millions of in-person interactions into online communications. That dependence means good cybersecurity, including strong encryption, has become more crucial than ever.
There is ample evidence to suggest that digital technologies are being designed and deployed not only to surveil and nudge us toward certain consumer preferences, but to train us to act like predictable machines. In the absence of an established framework for assessing these effects, we need a new test of humanity lost.
Digital contact tracing apps have emerged in recent weeks as one potential tool in a suite of solutions that would allow countries around the world to respond to the COVID-19 pandemic and get people back to their daily lives. These apps raise a number of challenging privacy issues and have been the subject of extensive technical analysis and debate.
The unprecedented threat from the novel coronavirus has confined many Americans to their homes, distancing them from one another at great cost to local economies and personal well-being. Meanwhile, the pressure grows on American institutions to do something—anything—about the pandemic.
ABSTRACT (from "The Ethical Use of Personal Data to Build Artificial Intelligence Technologies: A Case Study on Remote Biometric Identity Verification"): Artificial Intelligence (AI) technologies have the capacity to do a great deal of good in the world, but whether they do so depends not only on how we use those technologies but also on how we build them in the first place. The unfortunate truth is that personal data has become the bricks and mortar used to build many AI technologies, and more must be done to protect and safeguard the humans whose personal data is being used.