Turing also contributed to the foundation of mathematical logic. In 1931 the Viennese mathematician Kurt Gödel shocked the world of mathematics by proving that there are true statements in arithmetic that can never be proven within the axioms of arithmetic. (For example, the Goldbach conjecture of 1742 [that any even integer greater than two can be written as the sum of two prime numbers] is still unproven after over two and a half centuries, and may in fact be unprovable.) Gödel's revelation shattered the two-thousand-year-old dream, dating back to the Greeks, of proving all true statements in mathematics. Gödel showed that there will always be true statements in mathematics that are just beyond our reach. Mathematics, far from being the complete and perfect edifice dreamed of by the Greeks, was shown to be incomplete.
Turing added to this revolution by showing that it was impossible to know in general whether a Turing machine would take an infinite amount of time to perform certain mathematical operations. But if a computer takes an infinite amount of time to compute something, it means that whatever you're asking the computer to compute is not computable. Thus Turing proved that there were true statements in mathematics that are incomputable, that is, forever beyond the reach of computers, no matter how powerful.
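The logic behind this result can be sketched in a few lines of modern code. The fragment below is purely illustrative: the function halts() is a hypothetical oracle that cannot actually be written, and the contradiction produced by feeding paradox to itself is, in essence, Turing's proof that no such oracle can exist.

```python
# Sketch of Turing's diagonal argument. The halts() oracle is hypothetical;
# the whole point of the proof is that it cannot be implemented.

def halts(program, data):
    """Hypothetical: return True if program(data) eventually stops."""
    raise NotImplementedError("no general procedure can exist")

def paradox(program):
    # Do the opposite of whatever the oracle predicts about
    # running the program on its own source.
    if halts(program, program):
        while True:   # loop forever if the oracle says "halts"
            pass
    return            # halt at once if the oracle says "loops forever"

# paradox(paradox) halts if and only if it does not halt -- a contradiction,
# so no correct halts() can be written.
```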
During World War II, Turing's pioneering work on code breaking arguably saved the lives of thousands of Allied troops and influenced the outcome of the war. The Allies were unable to decode the secret Nazi code encrypted by a machine called the Enigma, so Turing and his colleagues were asked to build a machine that would break the Nazi code. Turing's machine was called the "bombe" and was ultimately successful. Over two hundred of his machines were in operation by the end of the war. As a result the Allies could read secret Nazi transmissions and hence fool the Nazis about the date and place of the final invasion of Germany. Historians have since debated precisely how pivotal Turing's work was in the planning of the invasion of Normandy, which finally led to Germany's defeat. (After the war, Turing's work was classified by the British government; as a result, his pivotal contributions were unknown to the public.)

Instead of being hailed as a war hero who helped turn the tide of World War II, Turing was hounded to death. One day his home was burglarized, and he called the police. Unfortunately, the police found evidence of his homosexuality and arrested him. Turing was then ordered by the court to be injected with sex hormones, which had a disastrous effect, causing him to grow breasts and causing him great mental anguish. He committed suicide in 1954 by eating an apple laced with cyanide. (According to one rumor, the logo of the Apple Corporation, an apple with a bite taken out of it, pays homage to Turing.)

Today, Turing is probably best known for his "Turing test." Tired of all the fruitless, endless philosophical discussion about whether machines can "think" and whether they have a "soul," he tried to introduce rigor and precision into discussions about artificial intelligence by devising a concrete test. Place a human and a machine in two sealed boxes, he suggested. You are allowed to address questions to each box. If you are unable to tell the difference between the responses of the human and the machine, then the machine has passed the "Turing test."
Simple computer programs have been written by scientists, such as ELIZA, that can mimic conversational speech and hence fool most unsuspecting people into believing they are speaking to a human. (Most human conversations, for example, use only a few hundred words and concentrate on a handful of topics.) But so far no computer program has been written that can fool people who are specifically trying to determine which box contains the human and which contains the machine. (Turing himself conjectured that by the year 2000, given the exponential growth of computer power, a machine could be built that would fool 30 percent of the judges in a five-minute test.)

A small army of philosophers and theologians has declared that it is impossible to create true robots that can think like us. John Searle, a philosopher at the University of California at Berkeley, proposed the "Chinese room test" to prove that AI is not possible. In essence, Searle argues that while robots may be able to pass certain forms of the Turing test, they can do so only because they blindly manipulate symbols without the slightest understanding of what they mean.
Imagine that you are sitting inside the box and you don't understand a word of Chinese. Assume you have a book that allows you to rapidly translate Chinese and manipulate its characters. If a person asks you a question in Chinese, you merely manipulate these strange-looking characters, without understanding what they mean, and give credible answers.
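In software terms, the Chinese room is a lookup table. The sketch below is an assumed illustration of the general technique behind ELIZA-style programs, keyword matching plus canned replies; it is not Weizenbaum's actual code, but it shows how credible answers can be produced with no understanding at all.

```python
# Keyword-matching chatbot in the spirit of ELIZA: it "answers" by
# scanning for trigger words and emitting a canned response.
import random

rules = {
    "mother":  ["Tell me more about your family."],
    "sad":     ["Why do you feel sad?"],
    "because": ["Is that the real reason?"],
}
defaults = ["Please go on.", "I see.", "How does that make you feel?"]

def reply(sentence):
    # Return the response for the first trigger word found, if any.
    for keyword, responses in rules.items():
        if keyword in sentence.lower():
            return random.choice(responses)
    return random.choice(defaults)

print(reply("I am sad about my job"))   # "Why do you feel sad?"
```

Like the occupant of the Chinese room, the program manipulates symbols it does not understand; the appearance of conversation is entirely in the eye of the beholder.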
The essence of his criticism boils down to the difference between syntax and semantics. Robots can master the syntax of a language (e.g., manipulating its grammar, its formal structure, etc.) but not its true semantics (e.g., what the words mean). Robots can manipulate words without understanding what they mean. (This is somewhat similar to talking on the phone to an automatic voice message machine, where you have to punch in "one," "two," etc., for each response. The voice at the other end is perfectly capable of digesting your numerical responses, but is totally lacking in any understanding.)

Physicist Roger Penrose of Oxford, too, believes that artificial intelligence is impossible; mechanical beings that can think and possess human consciousness are impossible according to the laws of the quantum theory. The human brain, he claims, is so far beyond any possible creation of the laboratory that creating humanlike robots is an experiment that is doomed to fail. (He argues that in the same way that Gödel's incompleteness theorem proved that arithmetic is incomplete, the Heisenberg uncertainty principle will prove that machines are incapable of human thought.)

Many physicists and engineers, however, believe that there is nothing in the laws of physics that would prevent the creation of a true robot. For example, Claude Shannon, often called the father of information theory, was once asked the question "Can machines think?" His reply was "Sure." When he was asked to clarify that comment, he said, "I think, don't I?" In other words, it was obvious to him that machines can think because humans are machines (albeit ones made of wetware rather than hardware).
Because we see robots depicted in the movies, we may think the development of sophisticated robots with artificial intelligence is just around the corner. The reality is much different. When you see a robot act like a human, usually there is a trick involved, that is, a man hidden in the shadows who talks through the robot via a microphone, like the Wizard in The Wizard of Oz. In fact, our most advanced robots, such as the robot rovers on the planet Mars, have the intelligence of an insect. At MIT's famed Artificial Intelligence Laboratory, experimental robots have difficulty duplicating feats that even cockroaches can perform, such as maneuvering around a room full of furniture, finding hiding places, and recognizing danger. No robot on Earth can understand a simple children's story that is read to it.
In the movie 2001: A Space Odyssey, it was incorrectly assumed that by 2001 we would have HAL, the super-robot that can pilot a spaceship to Jupiter, chat with crew members, repair problems, and act almost human.
THE TOP-DOWN APPROACH
At least two major problems have impeded scientists' efforts to create robots for decades: pattern recognition and common sense. Robots can see much better than we can, but they don't understand what they see. Robots can also hear much better than we can, but they don't understand what they hear.
To attack these twin problems, researchers have tried to use the "top-down approach" to artificial intelligence (sometimes called the "formalist" school or GOFAI, for "good old-fashioned AI"). Their goal, roughly speaking, has been to program all the rules of pattern recognition and common sense on a single CD. By inserting this CD into a computer, they believe, the computer would suddenly become self-aware and attain humanlike intelligence. In the 1950s and 1960s great progress was made in this direction, with the creation of robots that could play checkers and chess, do algebra, pick up blocks, and so forth. Progress was so spectacular that predictions were made that in a few years robots would surpass humans in intelligence.
At the Stanford Research Institute in 1969, for example, the robot SHAKEY created a media sensation. SHAKEY was a small PDP computer placed above a set of wheels with a camera on top. The camera was able to survey a room, and the computer would analyze and identify the objects in that room and try to navigate around them. SHAKEY was the first mechanical automaton that could navigate in the "real world," prompting journalists to speculate about when robots would leave humans in the dust.
But the shortcomings of such robots soon became obvious. The top-down approach to artificial intelligence resulted in huge, clumsy robots that took hours to navigate across a special room that contained only objects with straight lines, that is, squares and triangles. If you placed irregularly shaped furniture in the room the robot would be powerless to recognize it. (Ironically, a fruit fly, with a brain containing only about 250,000 neurons and a fraction of the computing power of these robots, can effortlessly navigate in three dimensions, executing dazzling loop-the-loop maneuvers, while these lumbering robots get lost in two dimensions.) The top-down approach soon hit a brick wall. Steve Grand, director of the Cyberlife Institute, says that approaches like this "had fifty years to prove themselves and haven't exactly lived up to their promise."
In the 1960s scientists did not fully appreciate the enormity of the work involved in programming robots to accomplish even simple tasks, such as identifying everyday objects like keys, shoes, and cups. As Rodney Brooks of MIT said, "Forty years ago the Artificial Intelligence Laboratory at MIT appointed an undergraduate to solve it over the summer. He failed, and I failed on the same problem in my 1981 Ph.D. thesis." In fact, AI researchers still cannot solve this problem.
For example, when we enter a room, we immediately recognize the floor, chairs, furniture, tables, and so forth. But when a robot scans a room it sees nothing but a vast collection of straight and curved lines, which it converts to pixels. It takes an enormous amount of computer time to make sense out of this jumble of lines. It might take us a fraction of a second to recognize a table, but a computer sees only a collection of circles, ovals, spirals, straight lines, curly lines, corners, and so forth. After an enormous amount of computing time, a robot might finally recognize the object as a table. But if you rotate the image, the computer has to start all over again. In other words, robots can see, and in fact they can see much better than humans, but they don't understand what they are seeing. Upon entering a room, a robot would see only a jumble of lines and curves, not chairs, tables, and lamps.
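To make the brittleness concrete, here is a toy sketch, an assumed illustration rather than any actual robot-vision code: the program compares the scene against a stored pixel template, and the moment the object is rotated the exact match fails and the search must start over.

```python
import numpy as np

# A stored 3x3 pixel "template" for an object (an L-shape).
template = np.array([[1, 0, 0],
                     [1, 0, 0],
                     [1, 1, 1]])

scene = template.copy()                    # object seen head-on
print(np.array_equal(scene, template))     # True: exact pixel match

rotated = np.rot90(scene)                  # same object, turned 90 degrees
print(np.array_equal(rotated, template))   # False: naive matching fails
```

Real vision systems are far more sophisticated, but the underlying problem, that raw pixels carry no notion of "the same object from another angle," is exactly the one described above.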
Our brain unconsciously recognizes objects by performing trillions upon trillions of calculations when we walk into a room, an activity that we are blissfully unaware of. The reason we are unaware of all that our brain is doing is evolution. If we were alone in the forest with a charging saber-toothed tiger, we would be paralyzed if we were aware of all the computations necessary to recognize the danger and escape. For the sake of survival, all we need to know is how to run. When we lived in the jungle, it simply was not necessary for us to be aware of all of the ins and outs of our brain's recognizing the ground, the sky, the trees, the rocks, and so forth.
In other words, the way our brain works can be compared to a huge iceberg. We are aware of only the tip of the iceberg, the conscious mind. But lurking below the surface, hidden from view, is a much larger object, the unconscious mind, which consumes vast amounts of the brain's "computer power" to understand simple things surrounding it, such as figuring out where you are, whom you are talking to, and what lies around you. All this is done automatically without our permission or knowledge.
This is the reason that robots cannot navigate across a room, read handwriting, drive trucks and cars, pick up garbage, and so forth. The U.S. military has spent hundreds of millions of dollars trying to develop mechanical soldiers and intelligent trucks, without success.
Scientists began to realize that playing chess or multiplying huge numbers required only a tiny, narrow sliver of human intelligence. When the IBM computer Deep Blue beat world chess champion Garry Kasparov in a six-game match in 1997, it was a victory of raw computer power, but the experiment told us nothing about intelligence or consciousness, although the game made plenty of headlines. As Douglas Hofstadter, a computer scientist at Indiana University, said, "My God, I used to think chess required thought. Now, I realize it doesn't. It doesn't mean Kasparov isn't a deep thinker, just that you can bypass deep thinking in playing chess, the way you can fly without flapping your wings."
(Developments in computers will also have an enormous impact on the future of the job market. Futurists sometimes speculate that the only people who will have jobs decades into the future will be highly skilled computer scientists and technicians. But actually workers such as sanitation men, construction workers, firemen, police, and so forth, will also have jobs in the future because what they do involves pattern recognition. Every crime, piece of garbage, tool, and fire is different and hence cannot be managed by robots. Ironically, college-educated workers, such as low-level accountants, brokers, and tellers, may lose their jobs in the future since their work is semirepetitive and involves keeping track of numbers, a task that computers excel at.)

In addition to pattern recognition, the second problem with the development of robots is even more fundamental, and that is their lack of "common sense." Humans know, for example,

* Water is wet.
* Mothers are older than their daughters.
* Animals do not like pain.
* You don"t come back after you die.
* Strings can pull, but not push.
* Sticks can push, but cannot pull.
* Time does not run backward.
But there is no line of calculus or mathematics that can express these truths. We know all of this because we have seen animals, water, and strings, and we have figured out the truth by ourselves. Children learn common sense by bumping into reality. The intuitive laws of biology and physics are learned the hard way, by interacting with the real world. But robots haven't experienced this. They know only what has been programmed into them beforehand.
(As a result, the jobs of the future will also include those that require common sense, that is, artistic creativity, originality, acting talent, humor, entertainment, analysis, and leadership. These are precisely the qualities that make us uniquely human and that computers have difficulty duplicating.)

In the past, mathematicians have tried to mount a crash program that could amass all the laws of common sense once and for all. The most ambitious attempt is CYC (short for encyclopedia), the brainchild of Douglas Lenat, the head of Cycorp. Like the Manhattan Project, the $2 billion crash program that built the atomic bomb, CYC was to be the "Manhattan Project" of artificial intelligence, the final push that would achieve true artificial intelligence.
Not surprisingly, Lenat's motto is, Intelligence is 10 million rules. (Lenat has a novel way in which to find new laws of common sense; he has his staff read the pages of scandalous tabloids and lurid gossip rags. Then he asks CYC if it can spot the errors in the tabloids. Actually, if Lenat succeeds in this, CYC may actually be more intelligent than most tabloid readers!)

One of the goals of CYC is to attain "breakeven," that is, the point at which a robot will be able to understand enough so that it can digest new information on its own simply by reading magazines and books found in any library. At that point, like a baby bird leaving the nest, CYC will be able to flap its wings and take off on its own.
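To get a feel for what "intelligence is 10 million rules" entails, here is a toy sketch. It is an assumed illustration only, nothing like CYC's actual knowledge representation: a handful of hand-coded facts plus a trivial consistency check. Scaling this idea to ten million entries, with all their exceptions and interactions, is the task Lenat set himself.

```python
# Hand-coded common-sense "facts" and a trivial contradiction detector.
facts = {
    ("water", "is_wet"): True,
    ("mother", "is_older_than_daughter"): True,
    ("string", "can_pull"): True,
    ("string", "can_push"): False,
    ("stick", "can_push"): True,
}

def check(claim, value):
    """Compare a claim against the stored facts."""
    known = facts.get(claim)
    if known is None:
        return "unknown"
    return "consistent" if known == value else "contradiction"

print(check(("string", "can_push"), True))     # contradiction
print(check(("time", "runs_backward"), True))  # unknown: no rule yet
```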
But since the firm's founding in 1984, its credibility has suffered from a common problem in AI: making predictions that generate headlines but are wildly unrealistic. Lenat predicted that in ten years, by 1994, CYC would contain 30 to 50 percent of "consensus reality." Today CYC is not even close. As the scientists of Cycorp have found out, millions and millions of lines of code need to be programmed in order for a computer to approximate the common sense of a four-year-old child. So far the latest version of the CYC program contains only a paltry 47,000 concepts and 306,000 facts. Despite Cycorp's regularly optimistic press releases, one of Lenat's coworkers, R. V. Guha, who left the team in 1994, was quoted as saying, "CYC is generally viewed as a failed project.... We were killing ourselves trying to create a pale shadow of what had been promised."
In other words, attempts to program all the laws of common sense into a single computer have foundered, simply because there are so many laws of common sense. Humans learn these laws effortlessly because we continually bump into the environment throughout our lives, quietly assimilating the laws of physics and biology, but robots do not.
Microsoft founder Bill Gates admits, "It has been much harder than expected to enable computers and robots to sense their surrounding environment and to react quickly and accurately...for example, the abilities to orient themselves with respect to the objects in a room, to respond to sounds and interpret speech, and to grasp objects of varying sizes, textures, and fragility. Even something as simple as telling the difference between an open door and a window can be devilishly tricky for a robot."
Proponents of the top-down approach to artificial intelligence, however, point out that progress in this direction, although at times glacial, is happening in labs around the world. For example, for the past few years the Defense Advanced Research Projects Agency (DARPA), which often funds state-of-the-art technology projects, has sponsored a $2 million prize for the creation of a driverless vehicle that can navigate by itself across rugged terrain in the Mojave Desert. In 2004 not a single entry in the DARPA Grand Challenge could finish the race. In fact the top car managed to travel 7.4 miles before breaking down. But in 2005 the Stanford Racing Team's driverless car successfully navigated the grueling 132-mile course (although it took the car seven hours to do so). Four other cars also completed the race. (Some critics noted that the rules permitted the cars to use GPS navigation systems along a long deserted path; in effect, the cars could follow a predetermined road map without many obstructions, so the cars never had to recognize complex obstacles in their path. In real driving, cars have to navigate unpredictably around other cars, pedestrians, construction sites, traffic jams, and so forth.)

Bill Gates is cautiously optimistic that robotic machines may be the "next big thing." He likens the field of robotics now to the personal computer field he helped to start thirty years ago. Like the PC, it may be poised to take off. "No one can say with any certainty when-or if-this industry will achieve critical mass," he writes. "If it does, though, it may well change the world."
(Once robots with humanlike intelligence become commercially available, there will be a huge market for them. Although true robots do not exist today, preprogrammed robots do exist and have proliferated. The International Federation of Robotics estimates that there were about 2 million of these personal robots in 2004, and that another 7 million would be installed by 2008. The Japanese Robot Association predicts that by 2025 the personal robot industry, today worth $5 billion, will be worth $50 billion per year.)

THE BOTTOM-UP APPROACH
Because of the limitations of the top-down approach to artificial intelligence, attempts have been made to use a "bottom-up" approach instead, that is, to mimic evolution and the way a baby learns. Insects, for example, do not navigate by scanning their environment and reducing the image to trillions upon trillions of pixels that they process with supercomputers. Instead insect brains are composed of "neural networks," learning machines that slowly learn how to navigate in a hostile world by bumping into it. At MIT, walking robots were notoriously difficult to create via the top-down approach. But simple buglike mechanical creatures that bump into the environment and learn from scratch can successfully scurry around the floor at MIT within a matter of minutes.
Rodney Brooks, director of MIT's famed Artificial Intelligence Laboratory, famous for its huge, lumbering "top-down" walking robots, became a heretic when he explored the idea of tiny "insectoid" robots that learned to walk the old-fashioned way, by stumbling and bumping into things. Instead of using elaborate computer programs to mathematically compute the precise position of their feet as they walked, his insectoids used trial and error to coordinate their leg motions using little computer power. Today many of the descendants of Brooks's insectoid robots are on Mars gathering data for NASA, scurrying across the bleak Martian landscape with a mind of their own. Brooks believes that his insectoids are ideally suited to explore the solar system.
One of Brooks's projects has been COG, an attempt to create a mechanical robot with the intelligence of a six-month-old child. On the outside COG looks like a jumble of wires, circuits, and gears, except that it has a head, eyes, and arms. No laws of intelligence have been programmed into it. Instead it is designed to focus its eyes on a human trainer, who tries to teach it simple skills. (One researcher who became pregnant made a bet as to which would learn faster, COG or her child by the age of two. The child far surpassed COG.)

For all the successes in mimicking the behavior of insects, robots using neural networks have performed miserably when their programmers have tried to duplicate in them the behavior of higher organisms like mammals. The most advanced robot using neural networks can walk across the room or swim in water, but it cannot jump and hunt like a dog in the forest, or scurry around the room like a rat. Many large neural network robots consist of tens to perhaps hundreds of "neurons"; the human brain, however, has over 100 billion neurons. C. elegans, a very simple worm whose nervous system has been completely mapped by biologists, has just over 300 neurons, making its nervous system perhaps one of the simplest found in nature. But there are over 7,000 synapses between these neurons. As simple as C. elegans is, its nervous system is so complex that no one has yet been able to construct a computer model of this brain. (In 1988 one computer expert predicted that by now we should have robots with about 100 million artificial neurons. Actually, a neural network with 100 neurons is considered exceptional.)

The supreme irony is that machines can effortlessly perform tasks that humans consider "hard," such as multiplying large numbers or playing chess, but machines stumble badly when asked to perform tasks that are supremely "easy" for human beings, such as walking across a room, recognizing faces, or gossiping with a friend. The reason is that our most advanced computers are basically just adding machines. Our brain, however, is exquisitely designed by evolution to solve the mundane problems of survival, which require a whole complex architecture of thought, such as common sense and pattern recognition. Survival in the forest did not depend on calculus or chess, but on evading predators, finding mates, and adjusting to changing environments.
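The core idea of a neural network, learning by trial and error rather than by explicit programming, fits in a few lines. The sketch below is an assumed textbook illustration (a single perceptron "neuron"), not code from Brooks's insectoids: the weights are nudged after every mistake until the behavior emerges on its own.

```python
import random

# One artificial neuron learning a simple reflex by trial and error:
# given two sensor readings, decide whether to turn (1) or go straight (0).
weights = [0.0, 0.0]
bias = 0.0

# Toy experience: (left sensor, right sensor) -> correct action.
examples = [((0, 0), 0), ((1, 0), 1), ((0, 1), 1), ((1, 1), 1)]

for _ in range(50):                       # repeated "bumps" into the world
    (x1, x2), target = random.choice(examples)
    output = 1 if weights[0]*x1 + weights[1]*x2 + bias > 0 else 0
    error = target - output               # how wrong was the reflex?
    weights[0] += 0.1 * error * x1        # nudge the connections
    weights[1] += 0.1 * error * x2
    bias += 0.1 * error

print(weights, bias)                      # weights and bias after training
```

No rule was ever written down; the correct behavior ends up encoded, opaquely, in the final weights. That opacity is part of why scaling networks up to mammalian behavior has proven so hard.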
MIT"s Marvin Minsky, one of the original founders of AI, summarizes the problems of AI in this way: "The history of AI is sort of funny because the first real accomplishments were beautiful things, like a machine that could do proofs in logic or do well in a calculus course. But then we started to try to make machines that could answer questions about the simple kinds of stories that are in a first-grade reader book. There"s no machine today that can do that."
Some believe that eventually there will be a grand synthesis between the two approaches, the top-down and bottom-up, which may provide the key to artificial intelligence and humanlike robots. After all, when a child learns, although he first relies mainly on the bottom-up approach, bumping into his surroundings, eventually he receives instruction from parents, books, and schoolteachers, and learns from the top-down approach. As adults, we constantly blend these two approaches. A cook, for example, reads from a recipe but also constantly samples the dish as it is cooking.
Hans Moravec says, "Fully intelligent machines will result when the mechanical golden spike is driven uniting the two efforts," probably within the next forty years.
EMOTIONAL ROBOTS?
One consistent theme in literature and art is the mechanical being that yearns to become human, to share in human emotions. Not content to be made of wires and cold steel, it wishes to laugh, cry, and feel all the emotional pleasures of a human being.
Pinocchio, for example, was the puppet that wanted to become a real boy. The Tin Man in The Wizard of Oz wanted to have a heart. And Data, on Star Trek, is a robot that can outperform all humans in strength and intelligence, yet still yearns to become human.
Some people have even suggested that our emotions represent the highest quality of what it means to be human. No machine will ever be able to thrill at a blazing sunset or laugh at a humorous joke, they claim. Some say that it is impossible for machines ever to have emotions, since emotions represent the pinnacle of human development.
But the scientists working on AI and trying to break down emotions paint a different picture. To them emotions, far from being the essence of humanity, are actually a by-product of evolution. Simply put, emotions are good for us. They helped us to survive in the forest, and even today they help us to navigate the dangers of life.
For example, "liking" something is very important evolutionarily, because most things are harmful to us. Of the millions of objects that we bump into every day, only a handful are beneficial to us. Hence to "like" something is to single out the tiny fraction of things that can help us from the millions of things that might hurt us.
Similarly, jealousy is an important emotion, because our reproductive success is vital in ensuring the survival of our genes to the next generation. (In fact, that is why there are so many emotionally charged feelings related to sex and love.) Shame and remorse are important because they help us to learn the socialization skills necessary to function in a cooperative society. If we never say we're sorry, eventually we will be expelled from the tribe, diminishing our chances of surviving and passing on our genes.
Loneliness, too, is an essential emotion. At first loneliness seems to be unnecessary and redundant. After all, we can function alone. But longing to be with companions is also important for our survival, since we depend on the resources of the tribe to survive.
In other words, when robots become more advanced, they, too, might be equipped with emotions. Perhaps robots will be programmed to bond with their owners or caretakers, to ensure that they don't wind up in the garbage dump. Having such emotions would help to ease their transition into society, so that they could be helpful companions, rather than rivals of their owners.
Computer expert Hans Moravec believes that robots will be programmed with emotions such as "fear" to protect themselves. For example, if a robot's batteries are running down, the robot "would express agitation, or even panic, with signals that humans can recognize. It would go to the neighbors and ask them to use their plug, saying, 'Please! Please! I need this! It's so important, it's such a small cost! We'll reimburse you!'"
Emotions are vital in decision making, as well. People who have suffered a certain kind of brain injury lack the ability to experience emotions. Their reasoning ability is intact, but they cannot express any feelings. Neurologist Dr. Antonio Damasio of the University of Iowa College of Medicine, who has studied people with these types of brain injuries, concludes that they seem "to know, but not to feel."
Dr. Damasio finds that such individuals are often paralyzed when making even the smallest decisions. Without emotions to guide them, they endlessly debate this option or that, leading to crippling indecision. One of Dr. Damasio's patients spent half an hour trying to decide the date of his next appointment.
Scientists believe that emotions are processed in the "limbic system" of the brain, which lies deep in the center of our brain. When people suffer from a loss of communication between the neocortex (which governs rational thinking) and the limbic system, their reasoning powers are intact but they have no emotions to guide them in making decisions. Sometimes we have a "hunch" or a "gut reaction" that propels our decision making. People with injuries that affect the communication between the rational and emotional parts of the brain do not have this ability.
For example, when we go shopping we unconsciously make thousands of value judgments about almost everything we see, such as "This is too expensive, too cheap, too colorful, too silly, or just right." For people with this type of brain injury, shopping can be a nightmare because everything seems to have the same value.
As robots become more intelligent and are able to make choices of their own, they could likewise become paralyzed with indecision. (This is reminiscent of the parable of the donkey sitting between two bales of hay that eventually dies of starvation because it cannot decide which to eat.) To aid them, robots of the future may need to have emotions hardwired into their brains. Commenting on the lack of emotions in robots, Dr. Rosalind Picard of the MIT Media Lab says, "They can't feel what's most important. That's one of their biggest failings. Computers just don't get it."
As Russian novelist Fyodor Dostoevsky wrote, "If everything on Earth were rational, nothing would happen."
In other words, robots of the future may need emotions to set goals and to give meaning and structure to their "lives," or else they will find themselves paralyzed with infinite possibilities.
ARE THEY CONSCIOUS?
There is no universal consensus as to whether machines can be conscious, or even a consensus as to what consciousness means. No one has come up with a suitable definition of consciousness.
Marvin Minsky describes consciousness as more of a "society of minds," that is, the thinking process in our brain is not localized but spread out, with different centers competing with one another at any given time. Consciousness may then be viewed as a sequence of thoughts and images issuing from these different, smaller "minds," each one grabbing and competing for our attention.
If this is true, perhaps "consciousness" has been overblown, perhaps there have been too many papers devoted to a subject that has been overmystified by philosophers and psychologists. Maybe defining consciousness is not so hard. As Sydney Brenner of the Salk Institute in La Jolla says, "I predict that by 2020, the year of good vision, consciousness will have disappeared as a scientific problem.... Our successors will be amazed by the amount of scientific rubbish discussed today, that is, if they have the patience to trawl through the electronic archives of obsolete journals."
AI research has been suffering from "physics envy," according to Marvin Minsky. In physics the holy grail has been to find a simple equation that will unify the physical forces of the universe into a single theory, creating a "theory of everything." AI researchers, overly influenced by this idea, have tried to find a single paradigm that would explain consciousness. But such a simple paradigm may not exist, according to Minsky.
(Those in the "constructionist" school, like myself, believe that instead of endlessly debating whether thinking machines can be created or not, one should instead try to build one. Regarding consciousness, there is probably a continuum of consciousness, from a lowly thermostat that monitors the temperature in a room to the self-aware organisms that we are today. Animals may be conscious, but they do not possess the level of consciousness of a human being. One should try, therefore, to categorize all the various types and levels of consciousness rather than debate philosophical questions about the meaning of consciousness. Robots may eventually attain a "silicon consciousness." Robots, in fact, may one day embody an architecture for thinking and processing information that is different from ours. In the future, advanced robots might blur the difference between syntax and semantics, so that their responses will be indistinguishable from the responses of a human. If so, the question of whether they really "understand" the question will be largely irrelevant. A robot that has perfect mastery of syntax, for all practical purposes, understands what is being said. In other words, a perfect mastery of syntax is understanding.)

COULD ROBOTS BE DANGEROUS?
Because of Moore's law, which states that computer power doubles every eighteen months, it is conceivable that within a few decades robots will be created that have the intelligence, say, of a dog or a cat. But by 2020 Moore's law may well collapse and the age of silicon could come to an end. For the past fifty years or so the astounding growth in computer power has been fueled by the ability to create tiny silicon transistors, tens of millions of which can easily fit on your fingernail. Beams of ultraviolet radiation are used to etch microscopic transistors onto wafers made of silicon. But this process cannot last forever. Eventually, these transistors could become so small that they reach the size of molecules, and the process will break down. Silicon Valley could become a Rust Belt after 2020, when the age of silicon finally comes to an end.
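The arithmetic behind such projections is simple compounding. The short sketch below assumes a clean doubling every eighteen months, which real hardware only ever approximated:

```python
# Back-of-envelope Moore's law: doubling every 18 months (1.5 years).
def growth_factor(years, doubling_period=1.5):
    return 2 ** (years / doubling_period)

print(growth_factor(12))   # 12 years ahead: 2**8 = 256x the computing power
print(growth_factor(30))   # 30 years ahead: about a million-fold increase
```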
The Pentium chip in your laptop computer has a layer about twenty atoms across. By 2020 that Pentium chip might consist of a layer only five atoms across. At that point the Heisenberg uncertainty principle kicks in, and you no longer know where the electron is. Electricity will then leak out of the chip and the computer will short-circuit. At that point, the computer revolution and Moore's law will hit a dead end because of the laws of the quantum theory. (Some people have claimed that the digital era is the "victory of bits over atoms." But eventually, when we hit the limit of Moore's law, atoms may have their revenge.)

Physicists are now working on the post-silicon technology that will dominate the computer world after 2020, but so far with mixed results. As we have seen, a variety of technologies are being studied that may eventually replace silicon technology, including quantum computers, DNA computers, optical computers, atomic computers, and so forth. But each of them faces huge hurdles before it can take on the mantle of silicon chips. Manipulating individual atoms and molecules is a technology that is still in its infancy, so making billions of transistors that are atomic in size is still beyond our ability.
But assume, for the moment, that physicists are capable of bridging the gap between silicon chips and, say, quantum computers. And assume that some form of Moore's law continues into the post-silicon era. Then artificial intelligence might become a true possibility. At that point robots might master human logic and emotions and pass the Turing test every time. Steven Spielberg explored this question in his movie Artificial Intelligence: AI, where the first robot boy was created that could exhibit emotions, and was hence suitable for adoption into a human family.
This raises the question: could such robots be dangerous? The answer is likely yes. They could become dangerous once they have the intelligence of a monkey, which is self-aware and can create its own agenda. It may take many decades to reach such a point, so scientists will have plenty of time to observe robots before they pose a threat. For example, a special chip could be placed in their processors that could prevent them from going on the rampage. Or they could have a self-destruct or deactivation mechanism that would turn them off in case of an emergency.
Arthur C. Clarke wrote, "It is possible that we may become pets of the computers, leading pampered existences like lapdogs, but I hope that we will always retain the ability to pull the plug if we feel like it."
A more mundane threat is that our infrastructure depends on computers. Our water and electricity grid, not to mention transportation and communications networks, will be increasingly computerized in the future. Our cities have become so complex that only complex and intricate computer networks can regulate and monitor our vast infrastructure. In the future it will become increasingly important to add artificial intelligence to this computer network. A failure or breakdown in this all-pervasive computer infrastructure could paralyze a city, country, or even a civilization.
Will computers eventually surpass us in intelligence? Certainly, there is nothing in the laws of physics to prevent that. If robots are neural networks capable of learning, and they develop to the point where they can learn faster and more efficiently than we can, then it's logical that they might eventually surpass us in reasoning. Moravec says, "[The postbiological world] is a world in which the human race has been swept away by the tide of cultural change, usurped by its own artificial progeny...When that happens, our DNA will find itself out of a job, having lost the evolutionary race to a new kind of competition."
Some inventors, such as Ray Kurzweil, have even predicted that this time will come soon, earlier rather than later, even within the next few decades. Perhaps we are creating our evolutionary successors. Some computer scientists envision a point they call "singularity," when robots will be able to process information exponentially fast, creating new robots in the process, until their collective ability to absorb information advances almost without limit.
So in the long term some have advocated a merging of carbon and silicon technology, rather than waiting for our extinction. We humans are mainly based on carbon, but robots are based on silicon (at least for the moment). Perhaps the solution is to merge with our creations. (If we ever encounter extraterrestrials, we should not be surprised to find that they are part organic, part mechanical to withstand the rigors of space travel and to flourish in hostile environments.)

In the far future, robots or humanlike cyborgs may even grant us the gift of immortality. Marvin Minsky adds, "What if the sun dies out, or we destroy the planet? Why not make better physicists, engineers, or mathematicians? We may need to be the architects of our own future. If we don't, our culture could disappear."
Moravec envisions a time in the distant future when our neural architecture will be transferred, neuron for neuron, directly into a machine, giving us, in a sense, immortality. It's a wild thought, but not beyond the realm of possibility. So, according to some scientists viewing the far future, immortality (in the form of DNA-enhanced or silicon bodies) may be the ultimate future of humanity.
The idea of creating thinking machines that are at least as smart as animals and perhaps as smart or smarter than us could become a reality if we can overcome the collapse of Moore's law and the commonsense problem, perhaps even late in this century. Although the fundamental laws of AI are still being discovered, progress in this area is happening extremely fast and is promising. Given that, I would classify robots and other thinking machines as a Class I impossibility.
8: EXTRATERRESTRIALS AND UFOS
Either we are alone in the universe, or we are not.
Either thought is frightening.
-ARTHUR C. CLARKE.
A gargantuan spaceship, stretching miles across, looms directly over Los Angeles, filling up the entire sky and ominously darkening the entire city. All over the world, saucer-shaped fortresses position themselves over the major cities of the world. Hundreds of jubilant spectators, wishing to welcome the beings from another planet to L.A., gather on top of a skyscraper to reach out to their celestial guests.
After days of hovering silently over L.A., the spaceship's belly slowly opens up. A searing blast of laser light shoots out, incinerating the skyscraper, unleashing a tidal wave of destruction that rolls across the entire city, reducing it to burned rubble within seconds.
In the movie Independence Day aliens represent our deepest fears. In the movie E.T. we project onto aliens our own dreams and fantasies. Throughout history people have been fascinated by the thought of alien creatures that inhabit other worlds. As far back as 1611, in his treatise Somnium, the astronomer Johannes Kepler, using the best scientific knowledge of the time, speculated about a trip to the moon during which one might encounter strange aliens, plants, and animals. But science and religion often collide on the subject of life in space, sometimes with tragic results.
A few years earlier, in 1600, former Dominican monk and philosopher Giordano Bruno was burned alive in the streets of Rome. To humiliate him, the Church hung him upside down and stripped him naked before finally burning him at the stake. What made the teachings of Bruno so dangerous? He had asked a simple question: is there life in outer space? Like Copernicus, he believed that the Earth revolved around the sun, but unlike Copernicus, he believed that there could be countless numbers of creatures like us living in outer space. (Rather than entertain the possibility of billions of saints, popes, churches, and Jesus Christs in outer space, it was more convenient for the Church simply to burn him.)

For four hundred years the memory of Bruno has haunted the historians of science. But today Bruno has his revenge every few weeks. About twice a month a new extrasolar planet is discovered orbiting a star in space. Over 250 planets have now been documented orbiting other stars. Bruno's prediction of extrasolar planets has been vindicated. But one question still lingers. Although the Milky Way galaxy may be teeming with extrasolar planets, how many of them can support life? And if intelligent life does exist in space, what can science say about it?
Hypothetical encounters with extraterrestrials, of course, have fascinated society and thrilled readers and movie audiences for generations. The most famous incident occurred on October 30, 1938, when Orson Welles decided to play a Halloween trick on the American public. He took the basic plot of H. G. Wells's War of the Worlds and made a series of short news announcements on CBS national radio, interrupting dance music to reenact, hour by hour, the invasion of Earth by Martians and the subsequent collapse of civilization. Millions of Americans were panic-stricken over the "news" that machines from Mars had landed in Grover's Mill, New Jersey, and were unleashing death rays to destroy entire cities and conquer the world. (Newspapers later recorded that spontaneous evacuations took place as people fled the area, with eyewitnesses claiming they could smell poison gas and see flashes of light in the distance.)

Fascination with Mars peaked again in the 1950s, when astronomers noticed a strange marking on Mars that looked like a gigantic M that was hundreds of miles across. Commentators noted that perhaps the M stood for "Mars," and Martians were peacefully signaling their presence to earthlings, like cheerleaders spelling out their team's name in a football stadium. (Others noted darkly that the M marking was actually a W, and W stands for "war." In other words, the Martians were actually declaring war on the Earth!) The mini-panic eventually subsided when this mysterious M disappeared just as abruptly as it had appeared. In all likelihood this marking was caused by a dust storm that covered the entire planet, except for the tops of four large volcanoes. The tops of these volcanoes roughly took on the shape of an M or a W.
THE SCIENTIFIC SEARCH FOR LIFE
Serious scientists studying the possibility of extraterrestrial life state that it is impossible to say anything definitive about such life, assuming that it exists. Nonetheless, we can make some general arguments on the nature of alien life based on what we know of physics, chemistry, and biology.
First, scientists believe that liquid water will be the key factor in creating life in the universe. "Follow the water" is the mantra recited by astronomers as they search for evidence of life in s.p.a.ce. Liquid water, unlike most liquids, is a "universal solvent" that can dissolve an astonishing variety of chemicals. It is an ideal mixing bowl to create increasingly complex molecules. Water is also a simple molecule that is found throughout the universe, while other solvents are quite rare.
Second, we know that carbon is a likely component in creating life because it has four bonds and hence the ability to bind to four other atoms and create molecules of incredible complexity. In particular, it is easy to form long carbon chains, which become the basis for hydrocarbon and organic chemistry. Other elements with four bonds do not have such a rich chemistry.
The most vivid illustration of the importance of carbon was the famous experiment conducted by Stanley Miller and Harold Urey in 1953, which showed that the spontaneous formation of life may be a natural by-product of carbon chemistry. They took a solution of ammonia, methane, and other toxic chemicals that they believed were found in the early Earth, put it in a flask, exposed it to a small electrical current, and then simply waited. Within one week they could see evidence of amino acids forming spontaneously in the flask. The electrical current was sufficient to break apart the carbon bonds within ammonia and methane and then rearrange the atoms into amino acids, the precursors of proteins. In some sense, life can form spontaneously. Since then, amino acids have been found inside meteorites and also in gas clouds in deep space.
Third, the fundamental basis of life is the self-replicating molecule called DNA. In chemistry, self-replicating molecules are extremely rare. It took hundreds of millions of years to form the first DNA molecule on Earth, probably deep in the oceans. Presumably, if one could perform the Miller-Urey experiment for a million years in the oceans, DNA-like molecules would spontaneously form. One likely site where the first DNA molecule might have formed early in the Earth's history is near volcanic vents on the ocean bottom, since the activity of the vents would create a convenient supply of energy for the early DNA molecules and cells, before the arrival of photosynthesis and plants. It is not known if other carbon-based molecules besides DNA can also be self-replicating, but it is likely that other self-replicating molecules in the universe will resemble DNA in some way.
So life probably requires liquid water, hydrocarbon chemicals, and some form of self-replicating molecule like DNA. Using these broad criteria one can derive a rough estimate for the frequency of intelligent life in the universe. In 1961 Cornell University astronomer Frank Drake was one of the first to make a rough estimate. If you start with 100 billion stars in the Milky Way galaxy, you can estimate what fraction of them are stars like our sun. Of these, you can estimate what fraction have solar systems revolving around them.
More specifically, Drake's equation calculates the number of civilizations in the galaxy by multiplying several numbers together, including

* the rate at which stars are born in the galaxy,
* the fraction of these stars that have planets,
* the number of planets for each star that have the conditions for life,
* the fraction of planets that actually develop life,
* the fraction that develop intelligent life,
* the fraction that are willing and able to communicate, and
* the expected lifetime of a civilization.
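The multiplication itself is trivial; everything interesting lies in the guesses. The sketch below is a toy calculator, and the factor values in it are illustrative assumptions of my own, not Drake's or Sagan's figures:

```python
# Toy Drake-equation calculator: the product of the factors listed above.
def drake(star_rate, f_planets, n_habitable, f_life,
          f_intelligent, f_communicating, lifetime_years):
    return (star_rate * f_planets * n_habitable * f_life *
            f_intelligent * f_communicating * lifetime_years)

N = drake(star_rate=10,           # stars born per year in the galaxy
          f_planets=0.5,          # fraction of stars with planets
          n_habitable=0.4,        # habitable planets per such star
          f_life=0.5,             # fraction of those that develop life
          f_intelligent=0.1,      # ...that develop intelligence
          f_communicating=0.1,    # ...that can and will communicate
          lifetime_years=10_000)  # years a civilization broadcasts
print(N)                          # 100.0 civilizations with these guesses
```

Small changes in any one factor swing the answer by orders of magnitude, which is why published estimates range from a handful of civilizations to millions.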
By taking reasonable estimates and by multiplying these successive probabilities, one realizes that there could be between 100 and 10,000 planets in the Milky Way galaxy alone that are able to harbor intelligent life. If these intelligent life-forms are uniformly scattered across the Milky Way galaxy, then we should expect to find such a planet just a few hundred light-years from Earth. In 1974 Carl Sagan estimated that there might be up to a million such civilizations within our Milky Way galaxy alone.
This theorizing, in turn, has provided added justification for those looking to find evidence for extraterrestrial civilizations. Given the favorable estimate of planets capable of harboring intelligent life-forms, scientists have begun seriously to look for the radio signals such planets may have emitted, much like the TV and radio signals that our own planet has been emitting for the past fifty years.
LISTENING TO ET
The Search for Extraterrestrial Intelligence (SETI) project dates back to an influential paper written in 1959 by physicists Giuseppe Cocconi and Philip Morrison, who suggested that listening to microwave radiation of a frequency between 1 and 10 gigahertz would be the most suitable way to eavesdrop on extraterrestrial communications. (Below 1 gigahertz, signals would be washed out by radiation emitted by fast-moving electrons; beyond 10 gigahertz, noise from oxygen and water molecules in our own atmosphere would interfere with any signals.) They selected 1,420 megahertz as the most promising frequency on which to listen for signals from outer space, since that is the emission frequency of ordinary hydrogen gas, the most plentiful element in the universe. (Frequencies around that range are nicknamed the "watering hole," given their convenience for extraterrestrial communication.)

Searches for evidence of intelligent signals near the watering hole, however, have been disappointing. In 1960 Frank Drake initiated Project Ozma (named after the Queen of Oz) to search for signals using the 25-meter radio telescope in Green Bank, West Virginia. No signals were ever found, either in Project Ozma or in other projects that, in fits and starts, tried to scan the night sky over the years.
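As a quick check on that choice of frequency, 1,420 megahertz corresponds to the famous 21-centimeter line of neutral hydrogen:

```python
# Wavelength of the hydrogen line: wavelength = speed of light / frequency.
c = 299_792_458        # speed of light, m/s
f = 1_420_405_752      # hydrogen hyperfine transition, Hz (~1,420 MHz)
print(c / f)           # ~0.211 m, the well-known 21-centimeter line
```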
In 1971 an ambitious proposal was made by NASA to fund SETI research. Dubbed Project Cyclops, the effort involved fifteen hundred radio telescopes at a cost of $10 billion. Not surprisingly, the research never went anywhere. Funding did become available for a much more modest proposal, to send a carefully coded message to alien life in outer space. In 1974 a coded message of 1,679 bits was transmitted via the giant Arecibo radio telescope in Puerto Rico toward the globular cluster M13, about 25,100 light-years away. In this short message, scientists created a 23 × 73 grid pattern that plotted the location of our solar system, containing an illustration of human beings and some chemical formulae. (Because of the large distances involved, the earliest date for a reply from outer space would be 52,174 years from now.)

Congress has not been impressed with the significance of these projects, even after a mysterious radio signal, called the "Wow" signal, was received in 1977. It consisted of a series of letters and numbers that seemed to be nonrandom and seemed to be signaling the existence of intelligence. (Some who have seen the Wow signal have not been convinced.)

In 1995, frustrated by the lack of funding from the federal government, astronomers turned to private sources to start the nonprofit SETI Institute in Mountain View, California, to centralize SETI research and initiate Project Phoenix to study one thousand nearby sunlike stars in the 1,200- to 3,000-megahertz range. Dr. Jill Tarter (the model for the scientist played by Jodie Foster in the movie Contact) was named director. (The equipment used in the project was so sensitive that it could pick up the emissions from an airport radar system 200 light-years away.)

Since 1995 the SETI Institute has scanned more than one thousand stars at a cost of $5 million per year. But there have been no tangible results. Nevertheless, Seth Shostak, senior astronomer at SETI, optimistically believes that the 350-antenna Allen Telescope Array now being built 250 miles northeast of San Francisco "will trip across a signal by the year 2025."
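A side note on the size of the message: 1,679 was chosen because it factors only as 23 × 73, so a recipient can arrange the bit stream into a sensible rectangle in essentially one way. The arithmetic is easy to verify:

```python
# 1,679 is a product of two primes, so the only nontrivial grid is 23 x 73.
n = 1679
print([d for d in range(2, n) if n % d == 0])  # [23, 73]
print(23 * 73 == n)                            # True
```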