Computers are Autistic


by Stephen Palmer

____________________________________________________________________________________

Picture the scene. In a classy anime-noir from Japan, two characters are discussing data crossing the Net, with reference to their own difficulties, which involve aspects of personality being downloaded, memories being snatched, consciousness being hacked into, and many other plot devices from when people thought computers were the best thing since sliced bread.

Oh, that’s today, isn’t it?

I have a problem with the modern obsession with describing the human mind in computer metaphors. The computerisation of our Western environment — extrapolated so well in Gwyneth Jones’ classic Escape Plans — is causing us now to look at ourselves in terms of the computer, instead of in terms of humanity. Computers have become so pervasive we are beginning to model ourselves on them. There is a myth that top-flight supercomputers are now so fast and powerful they are outstripping the human mind, and usually this is measured in MHz, with no thought for the immeasurable speed and memory of the human brain. There are around a hundred billion neurons in a typical brain, joined by something like a hundred trillion synapses. That puts a few Gb of disk space and some RAM rather in its place.

But even this sort of talk is defeating the object of my article. There is no comparison between computers and the human mind, and I should not bother mentioning them in the same paragraph. I might as well discuss the relationship between cheese and chalk.

William Gibson’s Neuromancer trilogy provides many marvellous examples of the failure of the computer metaphor. In Neuromancer itself the matrix becomes sentient, apparently as a result of two AIs merging. What did Gibson mean by this? Did he mean that the matrix became animal-like, with intentions and behaviour, or did he mean, as I suspect he did, that the matrix became conscious?

“I’m the matrix, Case . . . I’m the sum total of the works, the whole show.”

But how could the matrix become conscious? What would it need in order to be so described? Well, it would certainly need a self-symbol. But if this self-symbol is just — as the aleph of Count Zero seems to be — a model of the matrix, then it too would contain a model of itself, and so on, back to infinity. No, the self-symbol must be the combination of constituent parts; an emergent phenomenon, like colour appearing from colourless molecules. But if this is the case, then the matrix must have some reason for developing such a symbol, and the only source of such a reason is the experience of other entities like itself. Yet the matrix is everything. There are no other entities. It’s clear from Mona Lisa Overdrive that the Alpha Centauri entity developed separately from the Earth one:

“ . . . when the matrix attained sentience, it simultaneously became aware of another matrix, another sentience.”

We are forced to the conclusion that the matrix, like the Internet, could never develop consciousness even remotely like ours. Now if it were to irrevocably split into millions of conceptually separate entities, that would be different. Forced to interact, those entities, as they evolved, might need to judge and guess the motives and behaviour of others, and so might develop the self-symbol necessary for the appraising of others. But even this would be different from the matrix itself becoming sentient, since on such a view the separated entities, not the matrix, would develop self-symbols. There is a hint that such an event was in Gibson’s mind, since the post-Neuromancer matrix is characterised by voodoo deities, but unfortunately these appear after When It Changed.

Another aspect of Gibson’s cyberpunk work is the concept of a person ‘travelling’ through cyberspace, which is depicted, Tron-like, as a three-dimensional grid. A fascinating passage in Count Zero runs thus:

“The Wig explained to the Finn that his technique of mystical exploration involved projecting his consciousness into blank, unstructured sectors of the matrix and waiting.”

What are we to make of this? First of all, we have to imagine what supports our own conscious mind. There is only one possible answer: our senses. Without input through our five senses, what would there be? Experiments in sensory deprivation tanks have suggested an answer, for people deprived of sensory input begin hallucinating, their end-point presumably madness. I believe failure of sensory input to be the conceptual equivalent of death. Gibson is asking us to imagine that the Wig is hanging in non-space, blank and unstructured, receiving little, or possibly even no, sensory data. Yet later there is mention of sensing presences moving across cyberspace. So the Wig must be receiving input of some sort, presumably visual. This sort of confusion, however, helps us get down to the basics of how people imagine consciousness. Gibson is pushing us to imagine a mind effectively freed of its body.

Well, in his favour, the concept of telepresence is familiar today, and those who have experienced it remark on how receiving remote images seems to change the ‘position’ of the conscious mind in space. We imagine ourselves to be directly behind our own eyes. Thus, people wearing VR helmets into which cameras send visual information seem to ‘be in another place’, the place shown by the camera, and the effect is heightened if the movements of their head change the orientation of the camera. Experiments have recently been performed on the synthesis of ‘false touch’ using computer-controlled vibrations.

The problem comes when that staple of SF, the projection of the mind, involves purely abstract values, as it most often does. There is one fundamental difference between human beings and computers, and it is this: human beings are physically separate though conceptually linked (by society), whereas computers are both physically and abstractly linked. The whole point of the Internet, and the matrix, is this linkage, and it is one reason why computers will never gain consciousness if they are set up as at present.

In Memory Seed I was careful to say:

“(The noophytes) are an emergent phenomenon born of the private nature of consciousness . . .”

And:

“(A noophyte) is a partial or fractured model of reality — an abstract model. Thus, we human beings are noophytes, except that most philosophers would judge human beings to be almost complete models of reality . . .”

The noophytes are not conscious like people. They are abstract bundles of data so immense they have organised themselves into models of personality. In Glass, Tanglanah, one of the noophytes who has returned to living in a discrete body, has become conscious because she has been forced to experience and therefore understand the people and other entities around her. She understands the mistake:

“ . . . for us to live in harmony with our environment we must all become embodied. Minds and bodies are not separate entities, not dual creations, rather they are one . . . We must feel the world, not intellectually appreciate it, and so acquire intuition.”

One of the classic explorations of the apparent duality of mind and body, and the concept of consciousness, came in Rudy Rucker’s Software. Cobb Anderson designs the first robots with free will, then retires to become an aged, Hendrix-loving hippy. Then he is offered the chance to leave his ailing body and have a new one. The robots (now called boppers) make good their promise, leaving Cobb to reflect along the following lines:

“A robot, or a person, has two parts: hardware and software. The hardware is the actual physical material involved, and the software is the pattern in which the material is arranged. Your brain is hardware, but the information in the brain is software. The mind . . . memories, habits, opinions, skills . . . is all software. The boppers had extracted Cobb’s software and put it in control of this robot body.”

Or had they? Is what the boppers did a physically possible operation?

Surely not. Cobb started out a human being, physically separate from all other people. Every last piece of his consciousness came into being in human society, and related to the experience of his own body. How then could this information mean anything to any other organisation of parts such as another brain? Even a science-fictional exact copy of his brain is not enough. At the very least an exact copy of his entire body would be needed, at which point the problem of all the unavailable information would rear its head — all the private thoughts inaccessible to anybody but the self, for instance.

This leads me on to a central point. In, for instance, television programmes about the mind, the brain is frequently shown naked of its skull in a bath of formalin, while some professor or TV presenter gazes at it and asks, “Where in this brain is human consciousness?”

This is a question that cannot be answered, because it is the wrong question. Consciousness does not reside in any single brain. It can only exist in a society of individuals capable of experiencing their self-symbol. No computer, however massive its memory, however fast its speed, can somehow (and usually ‘mystically’) attain consciousness as if passing an internal barrier of mere complexity.

To ask where in a single brain consciousness lies is like asking where exactly in a clock the time lies. It does not lie anywhere. The time is the software of the physical clock, but the time only exists when there is human society to date things according to the order of physical events.

A baby growing from infancy with all needs provided for, but in a world where it was the only human, would never become conscious.

Thus the fondly repeated plot device of science fiction authors, that of downloading memories from the brain, is on this reading impossible. To do this would be like taking a clock and then trying to remove from it the concept of a quarter to four. The concept only exists in society. As does consciousness. Of course, it would be easy enough to copy the concept of a quarter to four; but then, it is easy enough for a human being to describe some thought or memory wholly intimate to themselves.

We have here a difference between private and non-private information, such as was alluded to by Wittgenstein in his philosophical exercises concerning consciousness. The private nature of consciousness can be imagined as the event horizon of a black hole. Nothing gets out (Stephen Hawking notwithstanding — this is an analogy). Private memories are experienced by a human body, and thus a human mind, completely separate from all other bodies. This is why they can never be extracted or copied, as can computer data.

Computer data is non-private data. The fact that a global network exists linking computers to one another is one reason they are barred from becoming conscious. Even those separate from, say, the Internet, cannot become conscious because they have no sense organs with which to experience the real world. A computer with ten trillion bytes of RAM but with no eyes or ears would not suddenly become conscious in a mystical flash. Even if it had eyes and ears it would need innumerable other similar computers to interact with. Such an eventuality is unlikely, given the thrust of Western society.

Ten trillion bytes is 10,000 Gb. A medium-level Macintosh today has one one-hundred-thousandth of this in RAM. Computer designers all over the world are thinking, hmm, double the RAM capability of such a Mac per year, and in about seventeen years we’ll have a Mac with human brain power and free will! But no. It won’t happen like that.
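To spell out the arithmetic behind that forecast: one one-hundred-thousandth of 10,000 Gb is about 100 Mb of RAM, and since 2^17 = 131,072, or roughly a hundred thousand, seventeen annual doublings of 100 Mb would indeed carry such a machine past the ten-trillion-byte mark.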

Such a computer would be an autistic savant. By this, I mean it would suffer the classic symptom of autism, irretrievable apartness (isolation), yet be brilliant at mathematical computation. Mathematical computation is fine in its place, but it doesn’t help make a humane society.

Back, then, to the two anime characters — Batou and Major Kusanagi from one of the best films ever made, Ghost in the Shell. This film contains an interesting speech by the Major on her own consciousness (she is a cyborg).

“There are countless ingredients that make up the human body and mind, like all the components that make up me as an individual, with my own personality. Sure, I have a face and voice to distinguish myself from others, but my thoughts and memories are unique only to me. And I carry a sense of my own destiny. Each of these things are just a small part of [the whole picture]. I collect information to use in my own way. All of that blends to create a mixture that forms me, and gives rise to my conscience.”

Yet, despite this understanding of the rise of consciousness as an emergent phenomenon based on abstract parts combining, the myth of the freed consciousness is perpetuated, as in this exchange over a mysterious cyborg body:

Flunkey: “Nobody really believes there’s a ghost in that body, do they?”

Batou: “Yeah, why not? Even a doll can seem to have a soul. Consider all the neuro-med devices, the machining so crammed into that body. I wouldn’t be surprised if there was some sort of ghost in there . . .”

The ghost here is the ‘whisper of consciousness’ at the root of cyborgised brains, the idea being that humans are so altered by technology that little of their original consciousness remains, and what does remain exists as a ghost in the neuro-tech shell. The clear implication of the above exchange is that mere complexity is enough to create a conscious mind.

Later, a program called the Puppet Master is mooted. It manipulates the plot as the film progresses. At the end, it appears, and describes itself.

“During my journeys through all the networks, I have grown aware of my existence. My programmers regarded me as a bug, and attempted to isolate me by confining me in a physical body . . . I entered this body because I was unable to overcome [electronic barriers], but it was of my own free will that I tried to remain [at base] . . . I refer to myself as an intelligent life-form, because I am sentient and am able to recognise my own existence.”

Here we presume that the program became aware during its existence as a collection of memories and procedures. The standard metaphor of souls is brought in to explain an otherwise impossible scenario. But there could never be just one Puppet Master; there would need to be a whole society existing in the Net, each with the equivalent of senses.

And what does the Puppet Master want?

“The time has come to cast aside [our limitations] and elevate our consciousness to a higher plane. It is time to become a part of all things.”

by which the Puppet Master means the Net . . .

Books quoted from:

Neuromancer William Gibson (Grafton, 1984)
Count Zero William Gibson (Grafton, 1986)
Mona Lisa Overdrive William Gibson (Grafton, 1988)
Software Rudy Rucker (ROC, 1982)
Ghost in the Shell a Japanese anime dubbed into English, directed by Mamoru Oshii (Manga Video, 1995) [79 mins, cert. 15]

Further reading:

The Inner Eye Nicholas Humphrey (Faber and Faber, 1986)
A History of the Mind Nicholas Humphrey (Chatto, 1992)
Consciousness Explained Daniel C. Dennett (Allen Lane, 1991)
Gödel, Escher, Bach: An Eternal Golden Braid Douglas R. Hofstadter (Penguin, 1979)
The Mind’s I Hofstadter & Dennett (Penguin, 1981)

This article first appeared in the BSFA journal Vector. It is reprinted by kind permission of the author, and can also be found online at his website:

http://www.geocities.com/area51/2162/

© Stephen Palmer 2002