The dangers of tech-driven solutions to COVID-19

Publication Type: Academic Writing
Publication Date: June 17, 2020

Imagine a world in which governments and tech firms collaborate seamlessly and benevolently to fight the spread of COVID-19. Public-health officials use automated proximity-detection systems to support contact tracing, and they carefully protect people’s personal data to build and maintain public trust. Social media platforms facilitate the widespread release of government public service announcements containing clear information about the virus, the disease, and recommended mitigation strategies, and public officials reinforce that information with appropriate responses.

Now consider the world we have, in which governments and firms are responding to the pandemic in a less coordinated, more self-interested fashion. Although few sensible people have anything good to say about the federal government’s response, reactions to pandemic-management tools designed by tech firms have been more mixed, with many concluding that such tools can minimize the privacy and human rights risks posed by tight coordination between governments and tech firms. Those risks are considerable: contact tracing done wrong threatens privacy and invites mission creep into adjacent fields, including policing. Government actors might (and do) distort and corrupt public-health messaging to serve their own interests. Automated policing and content control raise the prospect of a slide into authoritarianism.

Recent events around the world and in the United States demonstrate that the threat of a slide into authoritarianism is real. But we think it is also clear that entrenched habits of deferring to private-sector “solutions” to collective problems have undermined our capacity for effective pandemic response. What’s more, failures to hold tech firms accountable for their uses of personal information have actually made us more vulnerable to prolonged, uncontainable outbreaks.

We are not the first to sound alarm bells about the role of platforms in facilitating the public-health response to COVID-19. But most critics have focused narrowly on classic privacy concerns about data leakage and mission creep, especially the risk of improper government access to and use of sensitive data. Apple and Google released an application programming interface (API) for proximity-tracing and exposure-notification apps that was tailored to address those criticisms by keeping exposure matching on users’ own devices rather than in a centralized database. But that approach fails to address more fundamental obstacles to creating a safe and sustainable system of public-health surveillance, and it creates new obstacles of its own.
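For readers unfamiliar with how such a decentralized design works in principle, the sketch below is a deliberately simplified illustration of the general idea, not the actual Apple/Google implementation: phones broadcast short-lived pseudonymous identifiers, log the identifiers they hear, and later check locally against keys voluntarily published by diagnosed users. The function names, the HMAC-based key derivation, and the interval count are our own simplifications; the real protocol specifies its own key schedule and Bluetooth payload format.

# Simplified, illustrative sketch of decentralized exposure notification.
# Not the Apple/Google implementation: HMAC-SHA256 stands in for the
# protocol's key schedule, and Bluetooth broadcasting is omitted.

import hashlib
import hmac
import os


def new_daily_key() -> bytes:
    """A device-local secret generated each day; it never leaves the phone
    unless the user chooses to report a positive diagnosis."""
    return os.urandom(16)


def rolling_identifier(daily_key: bytes, interval: int) -> bytes:
    """Derive the short-lived pseudonymous identifier broadcast during one
    time window (e.g., a 10-minute interval)."""
    msg = b"rolling-id" + interval.to_bytes(4, "big")
    return hmac.new(daily_key, msg, hashlib.sha256).digest()[:16]


def identifiers_for_day(daily_key: bytes, intervals_per_day: int = 144) -> set[bytes]:
    """All identifiers a given daily key would have produced that day."""
    return {rolling_identifier(daily_key, i) for i in range(intervals_per_day)}


def check_exposure(observed: set[bytes], reported_daily_keys: list[bytes]) -> bool:
    """Runs on the device: compare locally recorded identifiers against keys
    voluntarily published by diagnosed users. No contact graph or location
    data is uploaded anywhere."""
    return any(observed & identifiers_for_day(key) for key in reported_daily_keys)


if __name__ == "__main__":
    alice_key = new_daily_key()
    # Bob's phone overhears a few of Alice's broadcasts during the day.
    bobs_log = {rolling_identifier(alice_key, i) for i in (10, 11, 12)}
    # Alice later reports a diagnosis and her daily key is published.
    print(check_exposure(bobs_log, [alice_key]))  # True

The point of this design, as its proponents describe it, is that raw location data and the social graph never leave the phone; only users who test positive choose to publish their keys. Our argument is that this answers the data-leakage critique while leaving the deeper structural problems untouched.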

Enshrining platforms and technology-driven “solutions” at the center of our pandemic response cedes to those firms the authority to define the values at stake and deepens preexisting patterns of inequality in society. It also ignores platforms’ role in fostering and profiting from the disinformation that hobbles collective efforts to safeguard the public’s health. Effective, equitable pandemic response demands deeper, more structural reforms regulating the platforms themselves.

Read the full piece at Brookings.