Stanford CIS

Why it's dangerous to outsource our critical thinking to computers

By Brett Frischmann

The lack of transparency around the processes of Google’s search engine has been a preoccupation among scholars since the company began. Long before Google expanded into self-driving cars, smartphones and ubiquitous email, the company was being asked to explain the principles and ideologies that determine how it presents information to us. And now, 10 years later, the impact of reckless, subjective and inflammatory misinformation served up on the web is being felt like never before in the digital era.

Google responded to negative coverage this week by reluctantly acknowledging and then removing offensive autosuggest results for certain searches. Type “jews are” into Google, for example, and until now the site would autofill “jews are evil” before recommending links to several rightwing antisemitic hate sites.

That follows the misinformation debacle of the US general election. When Facebook CEO Mark Zuckerberg addressed the issue, he admitted that structural issues lie at the heart of the problem: the site financially rewards the kind of sensationalism and fake news likely to spread rapidly through the social network, regardless of its veracity or its impact. The site does not identify bad reporting or even distinguish fake news from satire.

Facebook is now trying to solve a problem it helped create. Yet instead of using its vast resources to promote media literacy, or to encourage users to think critically and identify potential problems with what they read and share, Facebook is turning to algorithmic solutions that rate the trustworthiness of content.

This approach could have detrimental, long-term social consequences. The scale and power with which Facebook operates mean the site would effectively be training users to outsource their judgment to a computerised alternative. And it leaves even less room to cultivate the kind of 21st-century digital skills – such as reflective judgment about how technology shapes our beliefs and relationships – that are now so perilously lacking.

Read the full piece at The Guardian.