Listen to Me

The Kurzweil Applied Intelligence Alumni Newsletter




New York Times Review of Ray's New Book
January 3, 1999
Hello, HAL
Three books examine the future of artificial intelligence and find the human brain is in trouble.

By COLIN McGINN

Has the invasion already begun? Are the aliens already right under our noses? Are machines, the products of human engineering intelligence, poised to take over the world -- or is this an irrational fear, the latest spasm of the Luddite spirit? Finally, is the whole idea just a clever marketing ploy for the investment-hungry artificial intelligence industry? Here we have three books, all written by experts in computer intelligence, that aim to persuade us that the Age of Machines is nigh. We are to be eclipsed by our own technology, ceding our outdated flesh, blood and neural tissue to integrated circuits and their mechanistic progeny. The future belongs to the robots.

The roots of this dystopian vision (or utopian, depending on your view) go back to a prediction made in the mid-1960s by a former chairman of Intel, Gordon Moore, that the size of each transistor on an integrated circuit would be reduced by 50 percent every 24 months. This prediction, now grandly known as Moore's law, implies the exponentially expanding power of circuit-based computation over time. A rough corollary is that you will get double the computational power for the same price at two-year intervals. Thus computers today can perform millions of times more computations per second than equivalently priced computers of only a few decades ago. It is further predicted that new computer technologies will take over where integrated circuits leave off and continue the inexorable march toward exponentially increasing computational power. The computational capacity of the human brain is only a few decades away from being duplicated on an affordable computing machine. Brains are about to be outpaced by one of their products. They are already being outdone in certain areas: speed of calculation, data storage, theorem-proving, chess.
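The doubling claim is easy to sanity-check. Below is a minimal Python sketch, under the review's own assumption that compute per dollar doubles every 24 months; the function name and the sampled horizons are illustrative, not anything from the books.

```python
# Back-of-the-envelope growth under a 24-month doubling period:
# compute-per-dollar after N years = 2 ** (N / 2).

def compute_multiplier(years: float, doubling_years: float = 2.0) -> float:
    """Factor by which compute per dollar grows over `years`."""
    return 2 ** (years / doubling_years)

for years in (10, 20, 30, 40):
    print(f"{years} years: ~{compute_multiplier(years):,.0f}x compute per dollar")

# 20 years of doubling gives ~1,024x; reaching "millions of times" takes
# roughly 40 years of sustained doubling (2 ** 20 = 1,048,576).
```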

All three of these books provide a vivid window on the state of the art in artificial intelligence research, and offer provocative speculations on where we might be heading as the information age advances. Of the three, ''The Age of Spiritual Machines,'' by Ray Kurzweil, is the best: it is more detailed, thoughtful, clearly explained and attractively written than ''Robot: Mere Machine to Transcendent Mind,'' by Hans Moravec, and ''When Things Start to Think,'' by Neil Gershenfeld -- though all three are creditable efforts at popularization.

Since the books cover much the same ground, with some difference of emphasis, Kurzweil's gives you the most bits for your buck. Gershenfeld's breezily chatty book sometimes reads too much like an advertisement for the Media Lab at M.I.T., where he directs a research group. There is much discussion (and not a little hype) of his many achievements in harnessing computer technology to more physical concerns: electronic books, smart shoes, wearable computers, technologically enhanced cellos.

Moravec's book is more intellectually adventurous, and freer with confident futuristic speculation. He envisages autonomous robot-run industries that we tax to siphon off their wealth, and the gradual replacement of organic humans with mechanical descendants -- our ''mind children.'' His vision is of a world in which machines are the next evolutionary step, with organic tissue but a blink of the eye in cosmic history. Once intelligence is created by natural selection, it will be only a matter of time (a very short one by cosmic standards) before the products of intelligence outshine their creators, finally displacing them altogether. This is good knockabout stuff, a heady and unnerving glimpse into a possible future. Where Moravec is weak is in his attempts at philosophical discussion of machine consciousness and the nature of mind. He writes bizarre, confused, incomprehensible things about consciousness as an abstraction, like number, and as a mere ''interpretation'' of brain activity. He also loses his grip on the distinction between virtual and real reality as his speculations spiral majestically into incoherence.

Kurzweil is more philosophically sensitive, and hence cautious, in his claims for computer consciousness; he develops the same kinds of speculations as Moravec, but with more of an emphasis on the meaning of such innovations for human life. He has an engaging discussion of the future of virtual sex once the technology includes realistic haptic simulations (what other bodies feel like to touch); here he envisages the eventual triumph of the virtual over the real. His book ranges widely over such juicy topics as entropy, chaos, the big bang, quantum theory, DNA computers, quantum computers, Gödel's theorem, neural nets, genetic algorithms, nanoengineering, the Turing test, brain scanning, the slowness of neurons, chess playing programs, the Internet -- the whole world of information technology past, present and future. This is a book for computer enthusiasts, science fiction writers in search of cutting-edge themes and anyone who wonders where human technology is going next.

But the question must be asked: How seriously are we to take all this breathless compuhype? Will the 21st century really see machines acquire mentality?

There is naturally a lot of talk in these books about the possibility of machines duplicating the operations of the human mind. But it is vital to distinguish two questions, which are often run together by our authors: Can machines duplicate the external intelligent behavior of humans? And can machines duplicate the inner subjective experience of people? Call these the questions of outside and inside duplication. What is known as the Turing test says in effect that if a machine can mimic the outside of a human then it has thereby replicated the inside: if it behaves like a human with a mind, it has a mind. All three authors are partial to the Turing test, thus equating the simulation of external manifestations of mind with the reality of mind itself. However, the Turing test is seriously flawed as a criterion of mentality.
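Stated as a procedure, the test is simple, and a toy harness makes the point vivid. The Python sketch below is entirely hypothetical -- canned responders stand in for the human and the machine -- but it shows that the judge sees nothing except transcripts, so outward behavior is all the test can ever measure.

```python
import random

def human_reply(prompt: str) -> str:
    # Stand-in for a person typing at a terminal.
    return f"Hmm, I'd have to think about '{prompt}' for a while."

def machine_reply(prompt: str) -> str:
    # Stand-in for a program engineered to mimic the human's outward replies.
    return f"Hmm, I'd have to think about '{prompt}' for a while."

def imitation_game(prompts, judge) -> bool:
    """One round: the judge gets two anonymous transcripts, A and B,
    and returns its guess for which one is the machine."""
    players = [("human", human_reply), ("machine", machine_reply)]
    random.shuffle(players)
    transcripts = {label: [reply(p) for p in prompts]
                   for label, (_, reply) in zip("AB", players)}
    machine_label = "A" if players[0][0] == "machine" else "B"
    return judge(transcripts) == machine_label

def coin_flip_judge(transcripts) -> str:
    # Outwardly identical behavior leaves the judge nothing to go on.
    return random.choice("AB")

caught = sum(imitation_game(["Do you ever feel pain?"], coin_flip_judge)
             for _ in range(1000))
print(f"machine identified in {caught}/1000 rounds (about chance)")
# By the Turing criterion, fooling the judge credits the machine with a
# mind -- which is precisely the inference questioned below.
```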

First, it is just an application of the doctrine of behaviorism, the view that minds reduce to bodily motions; and behaviorism has long since been abandoned, even by psychologists. Behavior is just the evidence for mind in others, not its very nature. This is why you can act as if you are in pain and not really be in pain -- you are just pretending.

Second, there is the kind of problem highlighted by the philosopher John Searle in his ''Chinese Room'' argument: computer programs work merely by the manipulation of symbols, without any reference to what those symbols might mean, so that it would be possible for a human to follow such a program for a language he has no understanding of. The computer is like a person manipulating sentences of Chinese according to formal rules while having no understanding of the Chinese language. It follows that mimicking the externals of human understanding by means of a symbol-crunching computer program is not devising a machine that itself understands. None of our authors so much as considers this well-known and actually quite devastating argument.
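Searle's scenario can itself be rendered in a few lines of code, which makes the point hard to miss. The rule book below is a made-up toy, but it is a faithful miniature of what he describes: the program matches input symbols to output symbols by their shapes alone, nothing in it represents what any sentence means, and yet from the outside the replies look fluent.

```python
# A hypothetical "rule book": Chinese in, Chinese out, matched purely by the
# shapes of the symbols. No meanings are represented anywhere.
RULE_BOOK = {
    "你好吗？": "我很好，谢谢。",          # "How are you?" -> "I'm fine, thanks."
    "今天天气怎么样？": "今天天气很好。",  # "How's the weather?" -> "It's lovely."
}

def chinese_room(symbols: str) -> str:
    """Follow the rules: find the input's shape in the book, copy out the
    listed reply. A clerk with no Chinese (or a CPU) could execute this."""
    return RULE_BOOK.get(symbols, "请再说一遍。")  # "Please say that again."

print(chinese_room("你好吗？"))  # fluent-looking output, zero understanding
```

A scaled-up version with a vastly larger rule book changes the performance, not the principle: symbol manipulation, however elaborate, is not understanding.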

Third, to know whether we can construct a machine that is conscious we need to know what makes us conscious, for only then can we determine whether the actual basis of consciousness can occur in an inorganic system. But we simply don't know what makes organic brains conscious; we don't know what properties of neurons are responsible for the emergence of subjectivity. We would need to solve the age-old mind-body problem before we could sensibly raise the question of minds in machines. My hunch is that it is something about specifically organic tissue that is responsible for consciousness, since this seems to be the way nature has chosen to engineer consciousness; but that can only be a guess in view of our deep ignorance of the roots of consciousness in the brain. In any case, lacking insight into the basis of consciousness, it is futile to ask whether a machine could have what it takes to generate consciousness.

Passing the Turing test is therefore no proof of machine consciousness: outside duplication does not guarantee inside duplication. This bears strongly on a practical suggestion of Kurzweil's -- that during the course of the 21st century we might decide to ''upload'' ourselves into a suitable computing machine as a way of extending our lives and acquiring a more robust physical constitution. Let us suppose that the machine you choose to upload into passes the Turing test; it had better, or else you would not wish to inhabit it. The problem is that it might do so without containing the potential for any form of consciousness, so that uploading your mind into it amounts to letting your mind evaporate into thin air. You will pass from sentient being to insentient robot.

That is a lot to risk on the validity of the Turing test! And it is no good hoping that the robots themselves will tell you whether they are conscious, since they will say they are -- whether or not they are. If people become convinced of the validity of the Turing test on mistaken philosophical grounds, then we might find ourselves in the position of unknowingly extinguishing our consciousness by uploading into machines that are inherently incapable of feeling anything. If Kurzweil is right when he says that machines that mimic the externals of human performance will become available sometime during the next century, then I suggest that the human race ponder the merits of the Turing test very carefully before taking any drastic steps. I for one would prefer sentient mortality to insentient immortality, or, more accurately, to the end of my self and the creation of an unconscious machine that merely behaves like me.

Kurzweil, Moravec and Gershenfeld take it as a given that the mind is essentially a computer. The question then is just how powerful a computer the mind is and whether a machine could duplicate this power. But the authors do not think hard enough about their basic assumption. It is true that human minds manipulate symbols and engage in mental computations, as when doing arithmetic. But it does not follow from this that computing is the essence of mind; maybe computing is just one aspect of the nature of mind. And isn't this already obvious from the fact that many nonmental systems engage in computations? Silicon chips are not conscious, nor are the components of any future molecular or quantum computer. The fact is that minds are just one kind of computational system among many, not all of which have any trace of mentality in them. So computation cannot be definitive of mind.

One aspect of mind wholly omitted by the computational conception is the phenomenological features of experience -- the specific way a rose smells, for instance. This is something over and above any rose-related computations a machine might perform. A DNA computer has biochemical as well as computational properties; a conscious mind has phenomenological as well as computational properties. These phenomenological properties have a stronger claim to being distinctive of the mind than mere computational ones. There is thus no reductive explanation of the mental in terms of the computational; we cannot regard consciousness as nothing but a volley of physically implemented symbol manipulations. And this means that there is no reason at all to believe that building ever larger and faster computers will take us one jot closer to building a genuinely mental machine. The fallacy here is analogous to reasoning that if a human body is a device for taking you from A to B, and a car also does this, then a human body is the same thing as a car. Minds compute and so do silicon chips, but that is no reason to suppose that minds are nothing more than what they have in common with silicon chips (any more than silicon chips are nothing more than what they have in common with minds).

If our three authors are wobbly on the philosophy of mind and artificial intelligence, they are strong on computer technology itself; and here is where their books are particularly interesting. The reader can simply detach all the dubious speculations about machine consciousness and focus on the authors' predictions about the future of computer and robot technology, its potential benefits and hazards. Consider two examples of the kind of technology that might well be just over the horizon: the foglets and the nanobots. Foglets are tiny, cell-sized robots, each more computationally powerful than the human brain, that are equipped with minute gripping arms that enable them to join together into diverse physical structures. At ease the foglets are just a loose swarm of suspended particles in the air, but when you press a button they execute a program for forming themselves into an object of your choosing. We may come to live in foglet houses whose rooms are formed from the same foggy swarm. We may come to have foglet friends and take foglet vacations. Our entire physical environment may come to consist of a 3-D mosaic of cooperating microscopic computers. This would be virtual reality made concrete.
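None of this technology exists, but the control idea behind ''press a button and the fog takes shape'' is easy to caricature. The Python toy below rests entirely on invented assumptions (a 2-D world, one foglet per target point, idealized motion with no physics): assign each particle a target and iterate, and the swarm condenses into the requested shape.

```python
import random

def form_shape(foglets, targets, step=0.1, iterations=200):
    """Each foglet repeatedly moves a fraction of the way to its target."""
    for _ in range(iterations):
        foglets = [(x + step * (tx - x), y + step * (ty - y))
                   for (x, y), (tx, ty) in zip(foglets, targets)]
    return foglets

# A dispersed "fog" of four particles condensing into the corners of a square.
fog = [(random.uniform(0, 10), random.uniform(0, 10)) for _ in range(4)]
square = [(0.0, 0.0), (0.0, 1.0), (1.0, 0.0), (1.0, 1.0)]
for x, y in form_shape(fog, square):
    print(f"foglet settled at ({x:.3f}, {y:.3f})")
```

The real engineering problems (power, communication, coordination among trillions of units) are exactly what this sketch waves away.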

Nanobots are devices for nanoengineering, the manipulation of matter on the atomic scale. They are also high-power microcomputers, equipped with manipulative skills and an urge to perpetuate their kind. They can make copies of themselves by following a program for nano-scale operations on chunks of surrounding matter. Imagine you start with 10 of them and that they can each make a copy of themselves in five seconds (they can do many millions of computations a second and their little mechanical limbs move, insectlike, with great rapidity). That means they double their numbers every five seconds, and an exponential nanobot population explosion is set to break out. These little blighters could consume the entire planet in a matter of weeks, including all the organic material on it! Nor would they be picked off by natural predators, being quite indigestible. In a very short time the nanobots will have razed everything in sight.
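The arithmetic here is bare exponential growth, and it can be checked in a few lines. This Python sketch (the function name and the sampled times are illustrative) computes the idealized population 10 x 2^(t/5), with no limits on matter or energy:

```python
def nanobot_population(seconds: float, initial: int = 10,
                       doubling_time: float = 5.0) -> float:
    """Idealized, resource-unlimited doubling: N(t) = initial * 2**(t / T)."""
    return initial * 2 ** (seconds / doubling_time)

for label, t in [("1 minute", 60), ("5 minutes", 300), ("1 hour", 3600)]:
    print(f"after {label}: ~{nanobot_population(t):.3g} nanobots")

# After 1 minute: ~4.1e+04; after 5 minutes: ~1.15e+19; after 1 hour the
# idealized count already dwarfs the number of atoms on Earth (~10**50).
# In practice replication would be throttled by available matter and energy,
# which is what stretches the essay's timescale from minutes out to weeks.
```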

Self-replication is perhaps the biggest hazard presented by advanced computer technology. Even today computers are routinely used to design other computers; in the next century they may be making computers that challenge humans in all sorts of ways. Victor Frankenstein refused to give his monstrous creation a bride for fear of their reproductive potential. Maybe we should be thinking hard now about the replicative powers of intelligent machines. If the 20th century was the century of nuclear weapons, then the 21st might be the century of self-breeding aliens of our own devising.

Colin McGinn, a professor of philosophy at Rutgers University, is the author of ''Ethics, Evil and Fiction'' and ''The Mysterious Flame: Conscious Minds in a Material World,'' to be published this spring.

© Copyright 1998 The New York Times Company

