Is Technology Neutral? Part II

In a well-thought-out September 1 blog post, Sanjana asked "Is technology neutral?" partially in response to my own post on the subject from a few weeks ago.

I think the disconnect between our perspectives stems from confusion over the object we are discussing. Technology can be observed in the abstract, or in its concrete applications. Technology in the abstract is neutral, I still maintain, but specific applications of technology may not be. The use of a computer to process data is an abstract concept, and neutral as to its application. A database system to track supposed terrorists, however, is not neutral. The optimal shape of an airplane wing to generate lift is neutral; a stealth bomber is not neutral. Along those lines, HTTP and TCP/IP are neutral, but a website for the Aryan Nations (or Greenpeace) is not neutral.

Sanjana’s quote from Is Technology Neutral? – A Funny Question makes this point rather well: “In other words, it’s the final use of ICT in a specific setting, and ontological perspective that will ultimately decide if technology is neutral or not; it does not make sense to address this question when technology is detached from the context it is meant to be a part of.”

I think that technology adopts the biases of the individual or individuals who use it to achieve a particular end. If a programmer wants to build an online dispute resolution process that disadvantages everyone from a European country, they can do so using technology. That online dispute resolution process, then, is no longer neutral. But the technology that was used to make the process is, in essence, neutral. For example, the process could just as easily have been made to advantage everyone from European countries. That agnosticism as to who benefits demonstrates the fundamental neutrality (or more accurately, impartiality, as I discuss below) of the underlying technology.

Sanjana says: "Those who believe that technology is neutral argue that 'guns don't kill people, people do', or that a knife can be used to 'cook, kill, or cure.'" I disagree with that framing: I do believe technology is neutral, but I think the "guns don't kill people" argument is a facile one. The discovery, around 800 A.D., that saltpeter, sulfur, and charcoal can be combined into gunpowder had no moral agenda. Four hundred years later, however, when people used that discovery to revolutionize warfare, the applied technology became perhaps the definitive modern instrument of aggression.

As Sanjana notes, much of this problem comes from the word "neutral," which has long been controversial in the dispute resolution field. Neutrality is an unachievable ideal, synonymous with absolute purity from bias of any kind, and of course almost nothing attains that standard. Using the term "impartiality" instead of "neutrality" is much more accurate, in my opinion, as that standard (not having clear bias toward one side or the other) is usually more realistic and achievable.

Fundamentally I agree with Sanjana when he writes, "…technology only plays a supporting role to (socio-political) processes that are engineered by humans." Along these lines, Suzanne Mikawa's point focuses on the social, political, and cultural implications of "leaving technologies behind." This clearly concerns applied technology rather than technology in the abstract. The ability or inability of a culture or society to handle a new applied technology says more about the society than about the underlying technology itself. If we decode the genome, that knowledge in and of itself has no agenda; but if individuals start to use that information to, for instance, engage in eugenics or enable humans to live to 150, then the application of that technology is non-neutral. Some individuals may have an agenda when they introduce particular applied technologies into new areas (the internet into rural Africa, AK-47s into Angola, or arsenic-based gold mining into Brazilian rainforests), but the underlying technologies themselves are neutral as to the application. Some applications of technology, like handguns, definitely affect human behavior, often for the worse. But the effect is a result of humans, who are the variable in the equation, not of the underlying technology itself.

In essence, these debates are purely academic. I sense that there is probably top-line agreement on the substance, and that our disagreements are matters of terminology and imprecision in our definitions. The more important social challenge, though, is to use our influence within society to focus the power of technology on the nobler goals of humanity: coexistence, social justice, understanding, and peaceful conflict transformation. There will always be actors who will do all they can to use the tools of technology to work against those goals, but I believe that over the long term all of us working together will be able to insist that the power of technology be devoted instead to realizing them.