Does AI Ethics Need to be More Inclusive?

By Patrick Lin

Last week, MIT Media Lab researchers published results from their global survey on autonomous driving ethics. The survey, dubbed the “Moral Machine experiment”, generated the largest dataset yet on public attitudes toward artificial intelligence ethics, asking questions such as whether it’s better to crash into one person to save five others, or to let the five die.

But some observers have suggested that it’s not large or diverse enough. Despite gathering 40 million decisions, made by more than two million people in ten languages across 233 countries and territories worldwide, it’s true that the survey leaves out “voices in the developing world”, such as Yemen, Ethiopia, Namibia, Tajikistan, Suriname, Guyana, Greenland, and others.

How important is it to include those missing perspectives? The worry is that any decisions about how autonomous vehicles (AVs) ought to be designed, if influenced by the MIT survey, won’t be fully informed without those unheard voices.

Imagine if it were decided that, in an unavoidable crash, an AV should prioritize the lives of children over those of older people, given “the strong preference for sparing children” in the survey results. Some cultures, though, might place greater value on their elders than others do. For those cultures, that programming decision might not sit right, clashing with their basic values and leading to low adoption rates of the technology, or worse.

Meanwhile, there’s growing awareness that we need more inclusive and diverse discussions about AI for the global good, not just for Western interests. Such efforts include some affiliated with the United Nations and other international bodies.

Read the full piece at Forbes.