Amazon Needs to Stop Providing Facial Recognition Tech for the Government

Publication Type: Other Writing
Publication Date: June 21, 2018

Imagine a technology that is potently, uniquely dangerous — something so inherently toxic that it deserves to be completely rejected, banned, and stigmatized. Something so pernicious that regulation cannot adequately protect citizens from its effects.

That technology is already here. It is facial recognition technology, and its dangers are so great that it must be rejected entirely.

Society isn’t used to viewing facial recognition technology this way. Instead, we’ve been led to believe that advances in facial recognition technology will improve everything from law enforcement to the economy, education, cybersecurity, health care, and our personal lives. Unfortunately, we’ve been led astray.


Procedural Pessimism

After an outcry from employees and advocates, Google recently announced it will not renew a controversial project with the Pentagon called Project Maven. It also released a set of principles that will govern how it develops artificial intelligence. Some focus on widely shared ideals, like avoiding bias and incorporating privacy by design. Others are more dramatic, such as staying away from A.I. that can be weaponized and steering clear of surveillance technologies that are out of sync with internationally shared norms.

Admittedly, Google’s principles are vague. How the rules get applied will determine if they’re window dressing or the real deal. But if we take Google’s commitment at face value, it’s an important gesture. The company could have said that the proper way to get the government to use drones responsibly is to ensure that the right laws cover controversial situations like targeted drone strikes. After all, there’s nothing illegal about tech companies working on drone technology for the government.

Indeed, companies and policymakers often seek refuge in legal compliance procedures, embracing comforting half-measures like restrictions on certain kinds of uses of technology, requirements for consent to deploy technologies in certain contexts, and vague pinkie-swears in vendor contracts to not act illegally or harm others. For some problems raised by digital and surveillance technologies, this might be enough, and certainly it’s unwise to choke off the potential of technologies that might change our lives for the better. A litany of technologies, from the automobile to the database to the internet itself, has contributed immensely to human welfare. Such technologies are worth preserving with rules that mitigate harm but accept reasonable levels of risk.

Facial recognition systems are not among these technologies. There is no way to let their benefits flow while adequately curbing their harms. That’s because the most-touted benefits of facial recognition would require implementing oppressive, ubiquitous surveillance systems and the kind of loose and dangerous data practices that civil rights and privacy rules aim to prevent. Consent rules, procedural requirements, and boilerplate contracts are no match for that kind of formidable infrastructure and irresistible incentives for exploitation.

Read the full piece at Medium.