In a three-day battle between human and machine, an IBM-produced computer called Watson played “Jeopardy!” against the greatest contestants in the show’s history, Ken Jennings and Brad Rutter.

Could a computer ever beat a human in a game of knowledge and trivia? IBM was determined to find out.
Every few years, IBM sets itself a “Grand Challenge” in order to further scientific advancement and to work beyond simply building computers for everyday consumers. These Grand Challenges aren’t problems that are easily solved — they are projects meant to take years of research and study to perfect.
One of the most publicized of these challenges was IBM’s attempt to beat a grandmaster in chess, which began in the early ’90s. The computer was dubbed “Deep Blue” and finally defeated the reigning world champion, Garry Kasparov, in 1997. Again, in 1999, IBM started a $100 million initiative, called “Blue Gene,” to build a massively parallel computer to study biological processes such as protein folding — in 2009, President Obama bestowed the National Medal of Technology and Innovation on the project.
IBM’s most recent Grand Challenge has been the focus of scientific research for decades — to have a computer understand natural language. The company set out to prove the abilities of its computer, Watson (named after IBM’s first president, Thomas J. Watson), by having it compete against “Jeopardy!” champions.
This week, from Feb. 14-16, Watson competed against the two greatest “Jeopardy!” contestants of all time: Ken Jennings, known for his record-setting 74-game winning streak, and Brad Rutter, who has won the largest amount of money ever awarded on “Jeopardy!,” with cumulative earnings of more than $3 million. Certainly, Watson would need to be both fast and accurate to defeat its accomplished opponents.
As for its speed, Watson is blazing fast. Searching the millions of resources at Watson’s disposal to answer a single clue would take a normal consumer-grade computer hours; a typical “Jeopardy!” player does it in three to five seconds on average. To make Watson fast enough, IBM harnessed 2,880 processor cores working in parallel — the resulting machine takes up about as much space as 10 refrigerators.
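To picture what working “in parallel” buys, here is a toy sketch — with invented shard contents and function names, nothing like IBM’s real architecture — of fanning one search out to several workers at once and merging the hits:

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical "knowledge shards" -- stand-ins for the slices of
# reference material that Watson's many cores scan simultaneously.
SHARDS = [
    ["paris is the capital of france", "rome is the capital of italy"],
    ["garry kasparov was world chess champion", "deep blue beat kasparov in 1997"],
    ["thomas j. watson led ibm", "blue gene studied protein folding"],
]

def search_shard(shard, keyword):
    """Scan one shard for passages containing the keyword."""
    return [text for text in shard if keyword in text]

def parallel_search(keyword):
    """Send the same query to every shard at once, then merge the hits."""
    with ThreadPoolExecutor(max_workers=len(SHARDS)) as pool:
        results = pool.map(search_shard, SHARDS, [keyword] * len(SHARDS))
    return [hit for shard_hits in results for hit in shard_hits]

print(parallel_search("kasparov"))
```

Each worker only scans its own small slice, so the wall-clock time is roughly that of the slowest shard rather than the sum of all of them — the same reason thousands of cores can compress an hours-long search into seconds.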
Watson’s processing power and storage of knowledge are not what make it unique or impressive, however — supercomputers have existed for decades. Its true challenge was understanding the nuances of natural language, especially in the context of “Jeopardy!,” where clues incorporate puns and ambiguities.
Despite what Hollywood science fiction may portray, the scientific community has long struggled to make computers understand how we communicate. The top “Jeopardy!” players answer about half of all clues with an accuracy of 95 percent. Not only did Watson have to answer with equal accuracy, it also had to understand the questions being asked.
In 2006, early in the project’s development, Watson achieved only 15 percent accuracy in attempts to answer old “Jeopardy!” questions. Over the next few years, using a slew of algorithms, the IBM team closed in on the accuracy Watson needed to be competitive at the highest level. IBM also relied on machine learning — a branch of artificial intelligence that allows computers, in effect, to program themselves.
The computer is fed a vast amount of empirical data, which it uses to learn patterns and generalize across a wide range of information. Machine learning, along with the rest of IBM’s programming work, brought Watson well into the range of champion players like Jennings and Rutter.
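A bare-bones sketch of that learn-from-examples idea — with made-up training clues and a far cruder scoring scheme than anything Watson actually used — might look like this:

```python
from collections import Counter, defaultdict

# Toy labeled examples -- hypothetical stand-ins for the "empirical data"
# a learning system is trained on.
TRAINING = [
    ("capital city france paris", "GEOGRAPHY"),
    ("river nile egypt africa", "GEOGRAPHY"),
    ("chess champion kasparov match", "GAMES"),
    ("poker cards bluff tournament", "GAMES"),
]

def train(examples):
    """Count how often each word appears under each category -- the
    'patterns' are learned from data, not hand-programmed."""
    counts = defaultdict(Counter)
    for text, label in examples:
        counts[label].update(text.split())
    return counts

def classify(model, clue):
    """Pick the category whose learned word counts best overlap the clue."""
    words = clue.split()
    return max(model, key=lambda label: sum(model[label][w] for w in words))

model = train(TRAINING)
print(classify(model, "the capital on the nile river"))  # GEOGRAPHY
```

The point is the division of labor: the programmer writes the counting and scoring machinery once, and the behavior — which words signal which category — comes entirely from the data the system is fed.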

Despite the great advance in natural language understanding, the “Jeopardy!” competition was not a normal match. Watson has no hearing or vision sensors; it does not read the clue off the screen or hear it from Alex Trebek. Rather, it is sent the clue as a text file at the same moment the question is revealed to the human players. This is most likely an advantage, as Watson’s 2,880 cores can process the information faster than Jennings or Rutter ever could. Human “Jeopardy!” players can only buzz in once Trebek has finished reading — anyone who buzzes in beforehand is briefly locked out.
Watson, on the other hand, is sent a signal the moment Trebek finishes, letting it know that it can buzz in. This unfair advantage was clearly visible in this week’s tournament — Watson consistently beat Jennings and Rutter to the buzzer, leaving the human players waiting for the rare occasions when Watson wasn’t confident enough in its answer to buzz in. After Tuesday’s game, Ken Jennings spoke with The Washington Post in a Q&A session.
“It’s just a matter of who masters buzzer rhythm best,” he said. “Watson does have a big advantage in this regard, since it can knock out a microsecond-precise buzzer every single time.”
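Jennings’s point is easy to see in a toy simulation — the human reaction-time range below is an assumed figure for illustration, not a measurement from the show:

```python
import random

random.seed(42)

MACHINE_DELAY = 0.000001  # a microsecond-precise buzz, identical every time

def human_delay():
    # Assumed human reaction to the "buzzers open" signal: somewhere
    # around 100-300 milliseconds, varying from clue to clue.
    return random.uniform(0.100, 0.300)

# Race the deterministic machine buzz against 1,000 human attempts.
machine_wins = sum(MACHINE_DELAY < human_delay() for _ in range(1000))
print(machine_wins)  # 1000 -- the machine wins every single race
```

Under these assumptions the outcome is not probabilistic at all: even the fastest human reaction is orders of magnitude slower than an electronically triggered buzz, so whenever Watson chose to answer, the buzzer race was already over.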
The imbalance in play didn’t end there — early in Monday’s game, Watson quickly found the Daily Double, as if it knew where it was. Well, in a way, it did. IBM loaded years of Daily Double location statistics into Watson’s “brain” so that it could take those clues out of play quickly. Daily Doubles are an easy way for the human players to catch up or gain a lot of points — and Watson swiftly claimed four of the five Daily Doubles in the tournament.
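A minimal sketch of that hunting strategy — using invented frequency numbers, not IBM’s actual statistics — is simply to pick whichever remaining board cell has the highest historical Daily Double count:

```python
# Hypothetical historical Daily Double counts per board cell (row, column).
# These numbers are made up for illustration; the real tallies come from
# years of past games.
DAILY_DOUBLE_COUNTS = {
    (0, 1): 10,   # top-row clues rarely hide a Daily Double
    (3, 1): 180,  # lower rows are historically far likelier spots
    (3, 4): 150,
    (4, 0): 160,
}

def best_cell(counts, board):
    """Among cells still on the board, pick the historically likeliest one."""
    available = [cell for cell in counts if cell in board]
    return max(available, key=counts.get)

remaining = {(0, 1), (3, 4), (4, 0)}  # suppose (3, 1) was already played
print(best_cell(DAILY_DOUBLE_COUNTS, remaining))  # (4, 0)
```

No language understanding is involved here at all — it is pure lookup, which is why this particular edge felt less like intelligence and more like a head start.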
Watson is a major advance in natural language understanding for computers, and it will set the standard in the field for years to come. It’s an impressive achievement for many reasons, but the actual “Jeopardy!” competition was less than suspenseful — Watson’s advantage was clear from the start. Watson can undoubtedly compete with the best human opponents, and if the game were less tipped in its favor, it would have made for a much more engaging viewing experience. Instead, Watson ran away with the victory and left Jennings and Rutter in its dust.
Nonetheless, this week’s “Jeopardy!” competition will forever change the scope of what computers can accomplish, alongside former IBM Grand Challenges like Deep Blue. The future of Watson and its technology is bright — IBM hopes to apply it to fields such as health care, financial services and even government. IBM proved Watson’s worth this week, and it will be even more impressive in years to come.


Penney is a member of the class of 2012.


