"Neural Lace," Extended Cognition, and Privacy

Imagine a world, not as distant as we might like to think, where our individual thought processes are aided and improved by technologies external to the biologically bequeathed neural matter that sits within our skulls and throughout our nervous systems. Further, these technologies are designed and optimized to perform these functions so seamlessly that they become automatic, even invisible, to their users. And rather than acting as simple one-way conduits or repositories, they actively drive their users' thinking, creating a two-way, symbiotic interaction between human and device. This interactive link between user and external object becomes so critical to the user's overall reasoning ability that removing the object directly diminishes the user's cognitive abilities.
 
Let us also assume that, in this world, we have encountered the same questions about access to private data that we do in our world. Specifically, how do we define a "reasonable expectation of privacy" as it is currently understood under the Fourth Amendment to the U.S. Constitution, which regulates the government's ability to search or seize citizens' information? And how should we regulate---if at all---the "data capitalism" currently exercised by corporate giants like Google and Facebook (along with countless other entities)? 
 
Recently, Elon Musk announced the launch of Neuralink, a new company whose goal is to create direct links between human brains and computers. The "neural lace" technologies being researched by this company will "allow people to communicate directly with machines without going through a physical interface," giving humans the ability "to achieve higher levels of cognitive function."
 
The world I have described, and the one Musk's company is working to bring into reality, borrows from Andy Clark and David Chalmers's work on the nature of mind and cognition, specifically their question: "where does the mind stop and the rest of the world begin?" Their theory of active externalism raises a number of interesting questions about our relationship to our data, the choices we make, consciously and unconsciously, about the use of these data, and our rather confused and inchoate ideas about individual data privacy.
 
To illustrate just one aspect of this, we can look to the current debate over strong encryption on user devices, specifically the Apple iPhone. Briefly, when the FBI and other law enforcement agencies wish to examine the contents of a suspect's iPhone (let us assume for these purposes that the FBI has obtained a warrant for this information), they sometimes find themselves stymied by the strong encryption Apple has built into later versions of its hardware and software. With earlier versions of these devices, Apple was able to assist law enforcement by unlocking un- or lightly encrypted phones. In the later versions, however, Apple has taken itself out of this loop, creating encryption mechanisms that have no back door or master key. As a result, recent (lawful) requests by law enforcement agencies for data from these newer devices have been rejected.
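To make the "no back door" point concrete, here is a minimal sketch of passcode-derived encryption in Python, using the third-party cryptography package. This is not Apple's actual design (it omits the Secure Enclave, hardware-entangled keys, escalating retry delays, and much else), and the function names and sample passcode are purely illustrative. What it demonstrates is architectural: when the decryption key is derived only from a secret the user supplies, the vendor holds no master key it could surrender, even under court order.

```python
import os
import hashlib
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def derive_key(passcode: str, salt: bytes) -> bytes:
    # Stretch the passcode into a 256-bit AES key. The device stores
    # only the salt; the key exists only while the passcode is supplied.
    return hashlib.pbkdf2_hmac("sha256", passcode.encode(), salt, 1_000_000)

def lock(passcode: str, plaintext: bytes) -> tuple[bytes, bytes, bytes]:
    salt, nonce = os.urandom(16), os.urandom(12)
    ciphertext = AESGCM(derive_key(passcode, salt)).encrypt(nonce, plaintext, None)
    return salt, nonce, ciphertext

def unlock(passcode: str, salt: bytes, nonce: bytes, ciphertext: bytes) -> bytes:
    # A wrong passcode raises InvalidTag; there is no secondary key,
    # held by the vendor or anyone else, that could succeed instead.
    return AESGCM(derive_key(passcode, salt)).decrypt(nonce, ciphertext, None)

salt, nonce, ct = lock("123456", b"contents of the device")
assert unlock("123456", salt, nonce, ct) == b"contents of the device"
```

In a scheme of this shape, compelling the vendor to "provide a solution" means compelling it to weaken the scheme itself, which is precisely the contour of the Apple dispute.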
 
U.S. magistrate judges have relied upon laws such as the All Writs Act to try to compel Apple to provide a solution to this problem. Apple, and other similarly situated technology companies, have protested these orders on a number of grounds, both legal and technical. Senior law enforcement officials have responded by stating that no door, real or virtual, should be impervious to law enforcement keys.
 
The implications of this philosophy of all-seeing law enforcement would become quite serious if applied to our imaginary world of extended cognition. For example, what happens to this equation if our symbiotic cognitive relationships with these objects become so seamless that we no longer have any conscious control over the flow of information to, and its storage by, these devices? Further, what if the information we send these devices could be used to reconstruct our inner thought processes? How, then, do we consider the question of government access to individual data? Is there anything that would, or should, be off limits, even to a warrant or court order?
 
This is but one example of the privacy questions that I believe would need to be reexamined in our world of extended cognition. Other questions might concern legislative and judicial interpretations of extended cognition theory: if extended cognition can be viewed as a spectrum, where might lines be drawn as the objects become further attenuated? Would U.S. jurisprudence such as the third-party doctrine apply? I am in the very early stages of framing this extended project and am testing the waters as to its usefulness and viability. Comments and suggestions are welcome.
 
