Social networks have gotten a lot of play in recent years. What about social devices? I've been thinking about whether/how the nature of computer interfaces is changing—specifically, becoming less passive and more “social.”
My conversations with academics in Stanford's Department of Communication, and the research they've guided me toward, lead me to believe that we are once again at the edge of a shift in the way we communicate. For a variety of reasons, PCs and other computers in cars, mobile devices, etc., are making increased use of voice-driven, natural language interfaces or avatars, moving computing away from the traditional mode of passive information processing toward a more social, "person to person" interaction.
Some quick examples. In a recent interview at Le Web, Google's VP of Search said that Google is exploring a more conversational interface that would let users ask Google questions out loud, as though conversing with a person. Although it has met with (comic) resistance in the past, a trail of Microsoft patents going back ten years shows how serious the company is about developing a social interface, complete with voice, expressions, and gestures. As much as twenty-five percent of Microsoft's research efforts reportedly involve artificial intelligence. Even the U.S. government has gotten into the game: the U.S. Army's virtual recruiter, SGT Star, responds to questions out loud, changes moods, makes jokes, etc. According to developer statistics, SGT Star has responded to over two million questions since his debut in 2006.
Meanwhile, as psychology and communications scholars such as Clifford Nass and Byron Reeves have exhaustively demonstrated (for instance, in The Media Equation), people respond to social machines as though they were truly human. In games of cooperation, we make and keep promises (only) to computers that present as social agents. We donate more in charity experiments when faced with a picture of a human-looking robot. We refuse to take advice from robot caregivers (like Nursebot Pearl at Carnegie Mellon) unless they present as sufficiently human. Researchers explain this largely subconscious phenomenon in one of two ways: by citing the fact that we evolved at a time when human-looking things were in fact human, or by pointing to the related insight that humans are over-attuned to other humans so as to capitalize on our greatest evolutionary advantages of language and cooperation.
The upshot is twofold: artificial agents will increasingly mediate our communications activities, and this mediation will affect how, and likely what, we communicate. BJ Fogg and Ian Kerr have independently examined the ethical ramifications of using computer agents to persuade. I'm writing about whether privacy harms may flow from the introduction of social agents into historically private spaces and information transactions (such as search). Others will no doubt find new angles. Keep an eye out for social machines as an emerging communications issue.