How Human Do We Want Our Robots to Be? A Future Tense Event Recap

The central issue may come down to what Christine Rosen, senior editor of the New Atlantis, called "the Stepford Wife problem," which she described as the likelihood that we'll end up forming emotional attachments to our robots. But Woodrow Hartzog, a law professor at Samford University and the owner of a Roomba nicknamed Rocko, argued that there's nothing wrong with developing an emotional attachment to a robot. Still, issues do arise, he said, when we trick ourselves into believing that those nonhuman entities can reciprocate our affection. In other words, we should worry less about killer robots than about deceptive ones, whether their deceptions arise by accident or by design.

This concern came up in a different way when Newman asked the panelists whether robots should be allowed to lie to their human owners. Hartzog insisted that such questions force us to remember that robots are essentially tools. As such, whether a robot should lie depends on its basic purpose. Patric Verrone, writer and producer of Futurama, noted that it's sometimes frustrating when a spell checker repeatedly informs us of our errors, but at its core that's a spell checker's job—we wouldn't want it to deceive us, even if doing so could make us happier. In other circumstances, though, it might be different. With a caretaking robot in a hospice setting, for instance, Hartzog suggested, "Brutal honesty could be horrible, right?"