There is a short video on this, as well as a longer PBS NOVA documentary. All of the clues and answers for this series of games are archived online; to see the correct answer and which contestant gave it, you click on the monetary value for each clue.
Here are three examples. Under the category “Literary Character APB (All Points Bulletin),” a $600 clue is “Wanted for general evilness; last seen at the Tower of Barad-Dur; it’s a giant eye, folks, kinda hard to miss.” The correct answer is “What is Sauron?”
Under the category “Dialing for Dialects,” a clue for $800 is “While Maltese borrows many words from Italian, it developed from a dialect of this Semitic language.” The correct answer is “What is Arabic?”
Under the category “Church” and “State,” a clue for $1,600 is “It can mean to develop gradually in the mind or to carry during pregnancy.” The correct answer is “What is gestate?”
Watson correctly answered these clues and many more in playing the game against Ken Jennings and Brad Rutter. Jennings was the all-time champion, having won 74 Jeopardy! games in a row for prize money of $2,520,700. Rutter was the all-time money winner, with winnings of $4,355,102.
IBM had built the chess-playing machine Deep Blue that defeated Garry Kasparov, the reigning world champion in chess, in 1997. This was impressive, but it did not show that AI machines are capable of general intelligence and flexible judgment comparable to that of human beings. Chess is a restricted domain with clear rules and a clear objective (capturing the King). By contrast, success in playing Jeopardy! requires general knowledge of history, culture, literature, and science. It also depends on flexibility in interpreting puns, metaphors, and other nuances of language.
The scientists decided that if they could build an AI machine that could defeat a Jeopardy! champion like Ken Jennings, this would show that artificial intelligence was finally moving towards general intelligence like that of human beings. In 2011, Watson did indeed defeat Jennings and Rutter in playing the game.
In that game, Watson did not have the capacities for hearing speech or reading texts, but now it has those capacities. Scientists at IBM want Watson to read massive quantities of medical literature so that it can become a medical diagnostician. It might also read legal texts, so that it can become a legal consultant.
In much of the older AI research, it was assumed that intelligence could be reduced to facts and rules—accumulate lots of factual data and rules for inferring conclusions from those facts. But, in fact, much of what we identify as intelligence is intuitive judgment that is acquired by learning from experience, which cannot be completely reduced to rules and facts.
Watson’s great achievement is that it can learn on its own. It has accumulated massive quantities of data from encyclopedias, novels, newspapers, and all of Wikipedia—the equivalent of thousands of books. Then it surveys this data looking for patterns. It has also surveyed 10,000 old Jeopardy! clues and answers, looking for patterns of success and failure.
Machine learning from examples allows machines to acquire knowledge that cannot be reduced to facts and rules. For example, the skills for speech recognition and reading texts cannot be achieved through a simple set of rules. How do we recognize the letter “A”? There are many different fonts in which this letter might be printed, and handwritten letters vary with the handwriting style of different writers. But if you give an intelligent machine millions of examples of the printed and handwritten letter “A,” and the machine looks for recurrent patterns, it can learn to recognize this letter. Similarly, speakers differ in how they pronounce letters and words, and so there is no clear set of rules for identifying spoken letters and words. But if you give an intelligent machine millions of examples of how a certain letter or word is pronounced by different speakers, the machine can learn to identify the patterns.
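The idea of learning a letter from examples rather than from rules can be sketched in a few lines of code. This is a toy illustration, not anything resembling Watson's or any real recognizer's method: the 5x5 bitmaps and the nearest-pattern classifier are invented for the example.

```python
# Toy sketch: learning to recognize a letter from examples rather than
# from hand-written rules. The 5x5 bitmaps below are invented toy data.
import statistics

# A few 5x5 bitmaps (flattened to 25 pixels) of "A" in different
# "fonts," and of "B" for contrast.
A_EXAMPLES = [
    [0,0,1,0,0, 0,1,0,1,0, 0,1,1,1,0, 1,0,0,0,1, 1,0,0,0,1],
    [0,0,1,0,0, 0,1,0,1,0, 1,1,1,1,1, 1,0,0,0,1, 1,0,0,0,1],
]
B_EXAMPLES = [
    [1,1,1,0,0, 1,0,0,1,0, 1,1,1,0,0, 1,0,0,1,0, 1,1,1,0,0],
    [1,1,1,1,0, 1,0,0,1,0, 1,1,1,1,0, 1,0,0,1,0, 1,1,1,1,0],
]

def centroid(examples):
    """Average each pixel across the examples: the 'recurrent pattern'."""
    return [statistics.mean(px) for px in zip(*examples)]

def classify(bitmap, centroids):
    """Label a new bitmap by its nearest learned pattern."""
    def dist(labeled):
        return sum((p - q) ** 2 for p, q in zip(bitmap, labeled[1]))
    return min(centroids, key=dist)[0]

centroids = [("A", centroid(A_EXAMPLES)), ("B", centroid(B_EXAMPLES))]

# A "handwritten" A the learner has never seen before:
new_a = [0,0,1,0,0, 0,1,0,1,0, 0,1,1,1,0, 0,1,0,1,0, 1,0,0,0,1]
print(classify(new_a, centroids))  # → A
```

No rule in this program says what an “A” looks like; the recognizer simply averages its examples into a pattern and matches new inputs against it, which is the essential point of learning from examples.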
From his experience in competing against Watson, Jennings decided that Watson was a lot like the human players of Jeopardy!. “Watson has lots in common with a top-ranked human Jeopardy! player,” Jennings observed. “It’s very smart, very fast, speaks in an uneven monotone, and has never known the touch of a woman.”
Jennings also decided that Watson’s way of solving Jeopardy! puzzles was similar to his own:
“The computer’s techniques for unraveling Jeopardy! clues sounded just like mine. That machine zeroes in on key words in a clue, then combs its memory (in Watson’s case, a 15-terabyte data bank of human knowledge) for clusters of associations with those words. It rigorously checks the top hits against all the contextual information it can muster: the category name; the kind of answer being sought; the time, place, and gender hinted at in the clue; and so on. And when it feels ‘sure’ enough, it decides to buzz. This is all an instant, intuitive process for a human Jeopardy! player, but I felt convinced that under the hood my brain was doing more or less the same thing.”
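The process Jennings describes—score candidate answers against contextual evidence, then buzz only when confidence clears a threshold—can be sketched as follows. The candidates, evidence features, and weights here are all invented for illustration; Watson's actual system combined hundreds of learned evidence-scoring features.

```python
# Illustrative sketch of the decision process Jennings describes:
# score candidates against evidence, buzz only when "sure" enough.
# All candidates and scores below are invented for the example.

# Hypothetical candidate answers for the Barad-Dur clue, with
# per-evidence scores in [0, 1]: fit with the category, the clue's
# key words, and the kind of answer being sought.
CANDIDATES = {
    "Sauron":    {"category_fit": 0.9, "keyword_match": 0.95, "type_fit": 0.9},
    "Voldemort": {"category_fit": 0.9, "keyword_match": 0.30, "type_fit": 0.9},
    "Mordor":    {"category_fit": 0.4, "keyword_match": 0.80, "type_fit": 0.2},
}

BUZZ_THRESHOLD = 0.75  # only answer when confidence clears this bar

def confidence(scores):
    """Combine evidence scores; a plain average stands in for
    Watson's learned weighting of many features."""
    return sum(scores.values()) / len(scores)

def decide(candidates, threshold):
    """Pick the best-supported candidate and buzz only if confident."""
    best, scores = max(candidates.items(), key=lambda kv: confidence(kv[1]))
    conf = confidence(scores)
    if conf >= threshold:
        return f"What is {best}? (confidence {conf:.2f})"
    return "(stays silent)"

print(decide(CANDIDATES, BUZZ_THRESHOLD))
```

The threshold is what makes the behavior look like judgment: with weak or conflicting evidence, the best policy is not to buzz at all, which is exactly what a cautious human player does.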
But does Watson really think? John Searle answered no, the day after Watson won the Jeopardy! competition. “IBM invented an ingenious program—not a computer that can think,” he declared. “Watson did not understand the questions, nor its answers, nor that some of its answers were right and some wrong, nor that it was playing a game, nor that it won—because it doesn’t understand anything.”
Some computer scientists have responded to this question of whether a machine can think by asking, “Can a submarine swim?” Submarines don’t swim the way fish swim or the way some reptiles and mammals swim. But in some ways, submarines swim better than fish, reptiles, and mammals. Similarly, Watson certainly doesn’t think the way human beings or other animals think, but it can solve problems and answer difficult questions about the world, in ways that have persuaded many people that it really is thinking.
But can we trust our perception that a machine is thinking? Alan Turing's "imitation game" assumes that if a machine could successfully imitate a human thinker, so that we could not distinguish between the machine and humans through carrying on an exchange of questions and answers with them, that would show that the machine had achieved something like human-level intelligence. IBM is hoping to show in a few years that Watson can pass the Turing Test. Searle has objected, however, that this is not truly a test of human-level intelligence.
The scientists at IBM who built Watson admit that it does not have one crucial feature of human thinking—emotion or feeling. It felt no fear of failure when it played Jeopardy!, and it felt no pride in winning the game. The scientists behind Watson did feel such emotions.
When the IBM scientists were testing Watson, they set up Jeopardy! games where Watson was playing against IBM employees who were good Jeopardy! players. When a comedian hired to host practice matches ridiculed Watson’s more obtuse answers (Rembrandt rather than Pollock for a “late ’40s artist”), David Ferrucci, director of the Watson program, complained: “He’s making fun of a defenseless computer.” When Ferrucci brought his child to see one of the practice sessions, his child said: “Daddy, why is that man making fun of Watson?”
Does human-level intelligence require not just abstract reason but also emotional drives, because human minds care about what they’re thinking and doing? How could emotion be put into a machine?
One possibility is that an artificial brain might have to be put into an artificial body that would have something like a neuroendocrine system that would generate emotional experience.
Another possibility is building cyborgs—cybernetic organisms—in which human brains and bodies have an interface with intelligent machines. Thus, human intelligence is augmented by machines, but it’s combined with all the normal emotional drives of human beings. In a way, many human beings today have already become cyborgs because the intelligence of their brains is augmented by machines through interfaces with computers and smart phones. Over the next few years, that brain-machine interface will be put inside the human brain and body through neural implants.
Right now, the intelligence of many of us has been augmented by our computers and smart phones. We converse with our machines, and this conversation occurs through brain-machine interfaces in our typing fingers, our speaking voices, our hearing ears, and our seeing eyes. As these interfaces move to the surface of our bodies (as in Google Glass and electronic skin implants), and then inside our brains, we will have ever more direct access to all of human knowledge. Google Earth will give us instant views of every place on Earth. GPS will ensure that we are never lost. Google Books will allow us to download every book that has ever been published. When we run out of storage space in our heads, we can store our knowledge in Google cloud computing.
This must be what Google cofounder Larry Page had in mind when he said:
"People always make the assumption that we're done with search. That's very far from the case. We're probably only 5 percent of the way there. We want to create the ultimate search engine that can understand anything . . . some people could call that artificial intelligence. . . . The ultimate search engine would understand everything in the world. It would understand everything that you asked it and give you back the exact right thing instantly."
Page has said that the ultimate goal is for us to merely think of a question, and then we instantly hear or see the answer.
This understanding of everything in the world that cyborgs could have must include understanding emotion. As I indicated in my post on Morris Hoffman's The Punisher's Brain, Judge Hoffman thinks that most trial judges show "our evolved retributive feelings" when they punish. "We get a gut, retributive, feeling about the sentence, and then move in one direction or another off that gut feeling based on information about the criminal that affects our views about special deterrence--the likelihood he will reoffend and the crimes he is likely to commit." So if IBM wants to teach Watson how to be a good judge, they might have to find a way to instill the "gut feelings" that are part of our evolved human nature.
We must wonder about the wisdom of our moving under the rule of "our new computer overlords." This has already begun. Most of the buy-sell decisions on Wall Street are being made by computers acting autonomously. Most of the infrastructure network of North America (electricity, water, and transportation) is controlled by computer systems connected to the Internet. Doctors are adopting expert computer systems for diagnosing their patients. The scientists at IBM are improving Watson so that it can make decisions for us in many areas of life. Much of the research on robot intelligence is funded by DARPA (the Defense Advanced Research Projects Agency), which aims at creating autonomous robotic weapons. The United States military already relies on many weaponized robots.
In his survey of the latest research in AI directed to producing AGI (artificial general intelligence) and then ASI (artificial super-intelligence), James Barrat (Our Final Invention) concludes that there's no reason that ASI will care about human beings, that such super-intelligence will be incomprehensible to us, and that this will lead to the extinction of our species. He also indicates, however, that only a few AI researchers (like Stephen Omohundro and Eliezer Yudkowsky) share his pessimistic vision of the perils of ASI. Most of the leading proponents of advanced AI research (like Ray Kurzweil and Rodney Brooks) are optimistic in their utopian vision of ASI as allowing human beings to finally fulfill the human dream, expressed by early modern philosophers and scientists like Descartes and Bacon, of completely mastering nature for human benefit, even including human immortality.