ABSTRACT: Artificial Intelligence (AI) technologies have the capacity to do a great deal of good in the world, but whether they do so depends not only on how we use them but also on how we build them in the first place. The unfortunate truth is that personal data has become the bricks and mortar used to build many AI technologies, and more must be done to protect and safeguard the humans whose personal data is being used. Through a case study on AI-powered remote biometric identity verification, this paper explores the technical requirements of building AI technologies with high volumes of personal data and the implications of those requirements for our understanding of existing data protection frameworks. Ultimately, a path forward is proposed for ethically using personal data to build AI technologies.