Last week, the world got a preview of how Google and Apple’s contact tracing project might look and function. Some privacy and security experts have expressed cautious optimism that the effort could be a useful tool to aid public health contact tracers while protecting privacy.
The project modifies the iOS and Android systems to allow government health agencies to build apps that use a mobile phone’s Bluetooth communication capabilities. These apps would make it possible for a person who tests positive for the coronavirus to send out an “exposure” notification to the phones of other app users to alert them that their phones had been in the vicinity of the infected person’s phone during a given period. People getting this information could decide to self-isolate or get tested. The app would not reveal anyone’s identity.
To protect privacy, the system relies solely on Bluetooth and collects no location data; it hides users’ identities, requires permission both to collect proximity data and to upload data from the phones of people who test positive for COVID-19, and keeps all data on a user’s phone unless the user chooses to notify others. Additionally, the companies will require users to enter a unique code provided by health authorities to declare themselves infected.
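The flow described above, in which phones broadcast rotating anonymous identifiers, keep everything local, and publish keys only after an opted-in positive test, can be sketched roughly as follows. This is a simplified illustration of the general approach, not the actual Google/Apple API; the key size, rotation count, and every function name here are assumptions for the sake of the example.

```python
import hashlib
import os


def daily_key() -> bytes:
    """Generate a random per-device daily key (hypothetical 16-byte size)."""
    return os.urandom(16)


def rolling_ids(key: bytes, intervals: int = 144) -> list:
    """Derive short-lived broadcast identifiers from the daily key.

    Real systems rotate the Bluetooth identifier every several minutes;
    144 intervals is an assumed stand-in for one day's worth.
    """
    return [
        hashlib.sha256(key + i.to_bytes(2, "big")).digest()[:16]
        for i in range(intervals)
    ]


def check_exposure(observed_ids: set, published_keys: list) -> bool:
    """Run locally on each phone: re-derive identifiers from the keys
    published by users who reported a positive test, and compare them
    against the identifiers this phone overheard via Bluetooth.
    No identity or location ever leaves the device."""
    for key in published_keys:
        if observed_ids & set(rolling_ids(key)):
            return True
    return False
```

Because matching happens on each user’s own phone, the server only ever sees the keys of people who chose to report a positive test, never who was near whom.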
But even the most privacy-protective contact tracing apps have weak points. As many have pointed out, anonymous cellphone-based tracing can never be a substitute for the detailed work that trained human contact tracers have to do. Even putting critical questions of effectiveness aside, there are at least three concerns to keep in mind about relying on technology to mitigate the COVID-19 crisis.
Read the full piece at the Los Angeles Times.