Stanford CIS

People Can Be So Fake

By Ryan Calo

I've blogged before about the impact of anthropomorphic interfaces and devices, and I've recently written an article on the subject. In it I point out that we're using voice-driven and other human-like interfaces more and more.  They grab our attention and free up our hands for other tasks.  And they can help us accept machines---such as personal or service robots---for a whole new set of tasks.

Psychologists and communications scholars will tell you, however, that our brains are hardwired to treat these "fake" people as though they were real, including with respect to the feeling of being observed and evaluated.  That means that we react to such technology, behaviorally and physiologically, as though a person were really present.

This could be bad for privacy.  Privacy scholars will tell you that it's not good for us to always feel like we're surrounded by others.  We need "moments offstage," to use Alan Westin's famous formulation.  But it could also be good for privacy, particularly on the Internet.  Using avatars instead of privacy policies that no one reads or understands could help shore up the failing regime of online notice.

You can view the article here.  As of today, it's looking for a good home.

Published in: Blog, Privacy, Notice by Design