(December 1997) [Printed in "Reality Module No.1."]
PROPHETS OF THE SILICON GOD
"An incredible consequence of the laws of physics as we now understand them is that one day in the far future superintelligent computers will bring us back into existence and we'll never die again" - Frank Tipler
Frank Tipler is one of many people who believe that one day it'll be possible to upload the human mind and consciousness into some super high-tech computer system.
It is perhaps the ultimate hardcore technophile dream - integration into the great AI - to be immortal with godlike powers in a totally artificial and totally reconfigurable environment. Its proponents call it "uploading."
But - is it credible?
I'm not sure how old the idea of uploading is. "The Encyclopedia of Science Fiction" (1991 ed.) has no entry on uploading. The earliest example I can think of is in the movie "TRON" (1982) - where the protagonist is converted into data and becomes an analogue of himself in the electronic world of the mainframe computer. Uploading is also an element in the archetypal cyberpunk novel "Neuromancer" (1984) by William Gibson.
Uploading is very different from Virtual Reality! VR is just a very clever and sophisticated computer interface. In VR your body and mind remain here - and your special headgear, gloves or bodysuit enable you to interact with the computer interface in a very sensation-rich way. (Your mind is concentrating on a cyberspace reality but you are no more inside that reality than you are on a raceway when you are fiddling with a joystick in some computer racing-car game. It can just seem that way.)
There are far too many novels (and films) which confuse a high quality VR experience with an actual experience in some other world. I can go swimming in a VR lagoon but I won't get physically wet. I could get blown to bits by a blaster on some VR alien moon, but though shaken I remain totally unharmed in the outside world.
VR is a high-quality computer-graphics illusion which resembles a real place - but it is not a real place.
(A good novel I read recently which doesn't confuse VR with uploading is Neal Stephenson's "Snow Crash" (1992).)
When you are "uploaded" it doesn't just seem like you're in a computer-generated space, you actually are in a computer-generated space!
One of the concepts I found most fascinating in Greg Egan's novel "Quarantine" (1992) was its descriptions of neural modifications (MODs) - nanotech brain implants. (A concept also included in a slightly different form in "Johnny Mnemonic.") This nanotech circuitry can provide mundane features like video telecommunications and computer-interfacing, or allow you precise control over your own emotional state.
I have no trouble with this concept - though I remain uncertain of how it can be implemented.
There can be no "generic" neural implants (except maybe for simple features like rapid calculation) - because everyone's brain is structured differently. People will have to "train" their implants much like people with prosthetic limbs today have to train the electronics to respond correctly to nerve-impulse triggers.
The bionic ear is an early example of augmented wetware.
I have no trouble believing in the concepts of this technology. The real question is - how far can it really go?
It is easy to imagine our electronic gizmos miniaturised and operated without the need for fingers. (After all - computers can already be controlled through "piped-in" brainwaves.) You can have a radio, TV, videophone and PC built into your skull. (I'll say nothing about whether such technology is desirable or whether it will be socially sanctioned - only that it is plausible.)
But - and it is a HUGE but: How many genuine human brain functions can be mapped onto silicon[1] and made use of there? Can memories be stored, can images in the mind or from dreams be stored, can reasoning and higher brain functions be stored and made use of, can consciousness be stored in a computer?
(You can imagine somebody replacing chunks of their grey matter with computer analogues - until such time as their wetware is all silicon or glass or plastic or metal.)
My intuition is that it is not possible. The brain is holistic with respect to its higher functions, and you run up against the nature of AI, modelling, and the mystery of consciousness.
[1] "Silicon" is used here generically to refer to any material which might be used to manufacture computers in the future: silicon, gallium arsenide, glass and micro-lasers, plastic and biopolymers, synthetic protein gels, nanobot assemblies, or whatever.
One of my favourite thought experiments is John Searle's Chinese Room. (You are probably familiar with it but I'll provide a description anyway.)
John imagines himself to be alone in a sealed room. There are two narrow slots at opposite ends of the room. There is a table, a book, and a stack of cards with squiggles on them. (The squiggles are Chinese characters. John knows nothing of the Chinese language.)
At intervals cards with squiggles on them are passed through one of the slots. John collects the cards and takes them across to the table and arranges them. He looks for the squiggles in the book.
The book is written in English and contains rules linking groups of characters. It tells him, for example, that when he receives cards through the slot with "squiggle", "squiggle", and "zip" printed on them - he has to select from his pile the cards with "blot", "zipple" and "thog" on them and push them out through the other slot.
Here's the rub - to someone outside of the room who understands Chinese - there's a meaningful relationship between what goes in one slot and what comes out of the other. But John inside of the room does not know the meaning of any of the Chinese characters. He is mechanically following rules and procedures outlined in his book, his actions are to him without meaning.
John is behaving like a computer program!
The input is meaningless to the machine, but is manipulated according to a predefined set of procedures (the program - or the rule book) to produce an output. Programs can be fiendishly complicated, with millions of lines of code and elaborately nested subroutines, but they are still mechanical procedures. The computer, any computer, lacks awareness - consciousness if you will.
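The room can be sketched as an actual program. Here is a minimal Python toy using the (invented) rule from the description above - the rule book is just a lookup table, and the "room" never knows what any symbol means:

```python
# The rule book: a lookup table mapping incoming card groups to outgoing
# card groups. These particular "rules" are placeholders from the essay's
# example, not any real translation system.
RULE_BOOK = {
    ("squiggle", "squiggle", "zip"): ("blot", "zipple", "thog"),
}

def chinese_room(cards):
    """Mechanically apply the rule book to the incoming cards.

    The function has no notion of what any symbol means - it only
    matches a pattern and emits the prescribed response, exactly as
    John does with his book of rules."""
    return RULE_BOOK[tuple(cards)]

print(chinese_room(["squiggle", "squiggle", "zip"]))
# -> ('blot', 'zipple', 'thog')
# Meaningful to the Chinese speaker outside the slot; meaningless in here.
```

However elaborate the table grows, nothing changes in kind: the program is still a mechanical mapping from input symbols to output symbols.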
I've read several books about artificial intelligence - and the overall impression I have gained is that it's more "artificial" than intelligent. It is a simulation of intelligence, and does produce on occasion surprises, but it is not the real thing.
[I'll step aside a bit and mention neural networks and expert systems. Neural networks (which can be simulated on conventional hardware) can be trained to be very good at recognising patterns (faces for example) and you can build up quite sophisticated semantic networks within them. But - and this is important - all the training, all the conscious decision-making comes from outside - from people. The neural network will work with the datasets we give it - but we have to provide the datasets; the neural network, lacking consciousness, cannot decide what it wants to do!]
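The point about training coming from outside can be made concrete with a minimal perceptron sketch (a hypothetical toy, not any particular system). The network "learns" logical AND - but notice that the dataset, the labels, the learning rate and the training schedule are all decisions made by us, from outside:

```python
# Toy perceptron learning logical AND. Every choice below - what to learn,
# from what data, for how long - is supplied by the programmer.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]   # our dataset, our labels
w, b, lr = [0.0, 0.0], 0.0, 0.1                               # our starting point, our learning rate

for _ in range(20):                      # our training schedule
    for (x1, x2), target in data:
        out = 1 if w[0]*x1 + w[1]*x2 + b > 0 else 0
        err = target - out               # mechanical error correction
        w[0] += lr * err * x1
        w[1] += lr * err * x2
        b += lr * err

# The trained network now reproduces the pattern it was given -
# it never chose to learn AND rather than, say, astrology.
print([1 if w[0]*x1 + w[1]*x2 + b > 0 else 0 for (x1, x2), _ in data])
# -> [0, 0, 0, 1]
```

The weights end up encoding the pattern, but at no point does the network want, decide, or understand anything.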
[Expert systems are excellent data-filters in highly specific areas. They can be brilliant at diagnosing blood diseases for example. But their processing is still mechanical - following rules, setting up decision tables, weighing probabilities. They cannot make decisions about unknown diseases, or decide that they would rather take up astrology!]
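In the same spirit, here is a toy rule-based diagnoser (the rules, condition names and diagnoses are invented for illustration). It shows how mechanical the processing is - and that outside its rules it simply has nothing to say:

```python
# Hand-written rules mapping sets of findings to a diagnosis.
# These are illustrative placeholders, not medical knowledge.
RULES = [
    ({"fever", "low_red_cell_count"}, "anaemia-like condition"),
    ({"fever", "high_white_cell_count"}, "infection-like condition"),
]

def diagnose(symptoms):
    """Return the first diagnosis whose conditions are all present, else None.

    Purely mechanical rule-following: faced with an unknown disease,
    the system cannot improvise - and it certainly cannot decide it
    would rather take up astrology."""
    present = set(symptoms)
    for conditions, diagnosis in RULES:
        if conditions <= present:        # all required findings present?
            return diagnosis
    return None                          # outside its rules: silence

print(diagnose(["fever", "high_white_cell_count"]))  # -> infection-like condition
print(diagnose(["dizziness"]))                       # -> None
```

A real expert system is vastly larger, but the architecture is the same: a finite rule base consulted mechanically.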
Doubtlessly AI systems will become more and more sophisticated and will encompass more and more data in their neural nets. In time they may even appear to be genuinely intelligent.
But it will only be a damn-good simulation - not the real thing. Deep down they will still be mechanically-processing an empty program.
A new term may need to be invented for this pseudo-intelligence. Something like High Level Machine-Intelligence.
"From the point of view of science a perfect copy is no longer a copy it is the original" - Frank Tipler
This is important. How good can a simulation be before it becomes indistinguishable from the real thing? Can it ever become the real thing?
The short answer is "no!" A simulation will always be a computer model. The equations will be refined, more and more real world factors will be integrated into the simulation, but it will always be an imaginary artefact. Never confuse the model with the real thing. The information about something is not the same as the something.
(Of course real things can be constructed by unorthodox technological means. A genetically-engineered blue rose is a real rose. A wall composed of airborne nanobots is a real wall.)
Greg Egan's "Permutation City" (1994) has been described by Damien Broderick as a "brilliant upload novel."
In this book Greg's cyberspace citizens are computer models of people derived from extreme-high-resolution medical scanning techniques. The computer models the internal organs including the brain's neurons - and then initiates the mathematical modelling of biochemical processes and the firing of neurons.
(Greg Egan avoids the absurd action of having a real human-being "magically" converted to a digital form. His "copies" are computer simulations of people and quite separate from their genuine human analogues. For this we can be grateful.)
Frank Tipler's quote about a perfect copy being the original is only true (in a sense) for simple objects like electrons - where each is interchangeable (assuming equivalent spins) with any other. People are a different matter. This brings me to the last main section.
So we'll say we've got this high-resolution model of a human brain's neural network stored in some next-century extreme-high-tech computer system. (If you can model small neural networks now - surely you'll be able to model great big humungous ones with the vast data storage and processing power that'll come our way.)
Now - how do we make this brain think?
There's a tacit assumption among many scientists that if a system is of sufficient complexity, then highly-organised behaviour will somehow spontaneously come into being. It is analogous to the assumption that a soup of organic molecules will somehow over time assemble into organised macromolecules, then self-assembling organised macromolecules, then primitive protocells, then prokaryotic cells - and ultimately to become you, me and the cat and the dog.
Having studied biochemistry I can tell you that even the simplest of cells are highly-complex things. The evolutionary jump from self-assembling macromolecules to them is huge! The message is that there are factors at work here which we haven't yet fathomed. The origin of life is still a great enigma. (Viruses are a problem - they're dead simple biochemically, but they can do sod-all without a complex cell with protein-engineering machinery to commandeer.)
Now this is important. There is a tacit belief among many computer scientists that an artificial intelligence of sufficient complexity will somehow develop or cross-over into genuine intelligence. There is also a tacit belief that a sufficiently-complex computer system or network could somehow develop awareness, become conscious. Many SF stories and even some books on the future of information technology are based on these assumptions.
These assumptions are, in my opinion, very likely to be false. I don't believe that intelligence and consciousness arise spontaneously through complexity in information-processing systems. (Of course the only way to prove me wrong is to build a system of great complexity which somehow does develop active intelligence and awareness. But the whole thing is a matter of faith - a belief in the "magical properties" of complex systems. If a scientist finds that the new complex computer system isn't really intelligent - they'll just assume that next year's even-more-complex model will have an even better chance of crossing the magic threshold.)
I believe the assumption of spontaneous consciousness through increased complexity is false because genuine intelligence and conscious awareness are different in kind from what has gone before. There is no clear spectrum of conditions between consciousness and the state of being a mere automaton - between machine and being. (But we can ask "how conscious are insects?" Do they have moments of awareness in a life of responding to stimuli and instincts through preprogrammed behaviour patterns?)
What is the difference between an elegant machine intelligence and a genuine being with an active intelligence and full consciousness? Two things - directed purpose and awareness of self! An intelligent being doesn't just solve problems - it solves problems for a reason! A conscious being is aware that they exist and behaves in a deliberate manner.
Back to Greg Egan's computer simulated human brain in a simulated human body. The initial copy resembles a newly dead corpse - inert. The computer can simulate biochemical reactions to simulate a living body - but a living brain is more complex. Random firing of synapses will not do. How would you "animate" the simulated brain? You can model processing of sensory input and biochemical feedback - this will give you a simulated human vegetable. (Not what we are after. We want an actively-intelligent human simulation.) How do you simulate the active processes of memory, the operation of higher reasoning, emotional responses? How do you model a personality? How do you cope with daydreams or fluttering thoughts? And even if our knowledge of brain function allows us to simulate these things it still won't be enough!
We come to the modelling limitation - no matter how good the simulation is it won't be the real thing. You can have a model of your body with a simulated brain functioning away - but it will not be you!
(Even an identical clone of you won't be you, and a computer simulation is less real than a clone.)
People resemble people much more closely than computer simulations will ever resemble people - and you can't channel your consciousness, your active probing intelligence into somebody else now - can you?
(I can ask "where does your conscious awareness come from?" If you think about it even for a little while - it takes on an eerie, unsettling quality.)
[If you believe in a soul you can believe in possession - one soul stealing a body/brain from another. And souls would have a much greater affinity for flesh and blood than for lifeless silicon circuitry.]
I can only conclude that the transfer of human consciousness into cyberspace is a fantasy. We will not be waking up in billions of years' time to find that we have become data-beings in some vast ever-changing sentient computer's dreaming.
When I copied out Frank Tipler's quotation at the beginning of this essay I couldn't fail to notice the religious overtones.
The hardcore technophiles have imagined a silicon god to replace the old gods with human forms.
"Uploading" isn't science - theoretical or otherwise. It is a new form of religion. It lacks moral structures and makes a god out of a mere machine. Because of this it is dangerous - it gazes on an imaginary technological utopia and has nothing of consequence to say about what we really are.
Science and technology aren't the keys for achieving human happiness. People are! (But we can use science and technology to help us if we want to!)
SELECTED BIBLIOGRAPHY (Books that helped me understand the subject matter):
BODEN, MARGARET A. "The Creative Mind : Myths & Mechanisms."
BRODERICK, DAMIEN. "The Spike : Accelerating Into The Unimaginable Future."
JONES, M. "BASIC Artificial Intelligence."
PATEMAN, TREVOR. "What Is Philosophy?"
RHEINGOLD, HOWARD. "Virtual Reality."
ROSZAK, THEODORE. "The Cult of Information."
WILLIAMS, NOEL. "The Intelligent Micro : Artificial Intelligence For
- and of course the novels mentioned in the text!
Copyright © 1997 by Michael F. Green. All rights reserved.
Last Updated: 11 September 2004