Making Smart Machines Fair

"A first step for the professors is to measure the cultural bias in the standard data sets that many researchers rely on to train their systems. From there, they will move to the question of how to build data sets and algorithms without that bias. “We can ask how to mitigate bias; we can ask how to have human oversight over these systems,” says Narayanan. “Does a visual corpus even represent the world? Can you create a more representative corpus?”"