Beyond Jeopardy! The Future of Watson

author: David Ferrucci, IBM Thomas J. Watson Research Center
published: July 10, 2012, recorded: June 2012

Description

Computer systems that directly and accurately understand and answer people’s questions over a broad domain of human knowledge have been envisioned by scientists and writers since the advent of computers themselves. Toy solutions are easy to create when the knowledge is narrowly bounded and the queries anticipated by the programmers. The real goal for Artificial Intelligence is for the machine to digest language as fluently and freely as humans, eliminating the need to manually and explicitly formalize the knowledge expressly for the machine. Being able to leverage knowledge as it is prolifically and naturally captured and communicated by humans would facilitate a new era in informed decision making, giving users efficient, context-aware and precise access to the enormous wealth of knowledge humans naturally create and enrich every day. Applications in business intelligence, healthcare, customer support, social computing, science and government could all benefit from computer systems capable of deeper language understanding. The DeepQA project at IBM is aimed at exploring how advancing and integrating Natural Language Processing (NLP), Information Retrieval (IR), Machine Learning (ML), Knowledge Representation and Reasoning (KR&R) and massively parallel computation can advance the science and application of automatic Question Answering and more general natural language understanding. An exciting proof-point in this challenge was developing a computer system that could successfully compete against top human players at the Jeopardy! quiz show.
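
To make the approach concrete, here is a minimal sketch of a DeepQA-style pipeline: analyze the question, generate competing candidate answers from a corpus, score supporting evidence, and rank. Everything in it is an illustrative assumption; the toy corpus, the keyword-overlap scoring, and names such as generate_candidates are hypothetical stand-ins, not IBM’s implementation.

```python
from dataclasses import dataclass, field

# Toy corpus standing in for the large volumes of natural-language
# content the abstract describes (hypothetical sentences).
CORPUS = [
    "Toronto is the most populous city in Canada.",
    "Chicago is the third most populous city in the United States.",
    "O'Hare International Airport serves Chicago.",
]

STOPWORDS = {"what", "which", "is", "the", "a", "an", "of", "in", "by", "city"}

def terms(text: str) -> set:
    """Crude tokenizer: lowercase words, punctuation stripped, stopwords dropped."""
    return {w.strip("?.,").lower() for w in text.split()} - STOPWORDS

@dataclass
class Candidate:
    answer: str
    evidence: list = field(default_factory=list)  # supporting passages
    score: float = 0.0

def generate_candidates(question_terms: set) -> list:
    """Candidate generation: naively propose the first word of any passage
    that shares terms with the question (a toy stand-in for primary search)."""
    cands = {}
    for passage in CORPUS:
        if question_terms & terms(passage):
            name = passage.split()[0]
            cands.setdefault(name, Candidate(name)).evidence.append(passage)
    return list(cands.values())

def score_evidence(cand: Candidate, question_terms: set) -> None:
    """Evidence scoring: reward term overlap between question and passages."""
    for passage in cand.evidence:
        cand.score += len(question_terms & terms(passage))

def answer(question: str) -> Candidate:
    qt = terms(question)                  # question analysis
    candidates = generate_candidates(qt)  # many competing hypotheses
    for cand in candidates:
        score_evidence(cand, qt)          # gather and weigh evidence
    return max(candidates, key=lambda c: c.score)

print(answer("Which United States city is served by O'Hare?").answer)  # Chicago
```

Real candidate generation draws on search over massive corpora and structured sources; the point here is only the shape of the generate-then-score loop.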

Attaining champion-level performance at Jeopardy! requires a computer to rapidly and accurately answer rich open-domain questions, and to predict its own performance on any given question. The system must deliver high degrees of precision and confidence over a very broad range of knowledge and natural language content, with a 3-second response time. To do this, the DeepQA team advanced a broad array of NLP techniques to find, generate, evidence, and analyze many competing hypotheses over large volumes of natural language content to build Watson (www.ibmwatson.com). An important contributor to Watson’s success is its ability to automatically learn and combine accurate confidences across a wide array of algorithms and over different dimensions of evidence. Watson produced accurate confidences to know when to “buzz in” against its competitors and how much to bet. High precision and accurate confidence computations are critical for real business settings, where helping users focus on the right content sooner and with greater confidence can make all the difference. The need for speed and high precision demands a massively parallel computing platform capable of generating, evaluating, and combining thousands of hypotheses and their associated evidence.
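
The confidence machinery can be illustrated with a small sketch: each hypothesis carries scores along several evidence dimensions, a logistic-regression layer combines them into a single calibrated confidence, and a threshold gates the buzz-and-bet decision. This is a minimal illustration under stated assumptions, not Watson’s actual model: the evidence dimensions, weights, threshold, and wager policy below are all hand-set hypotheticals, whereas Watson learned its combination weights from training data.

```python
import math
from concurrent.futures import ThreadPoolExecutor

# Hypothetical per-hypothesis scores along three evidence dimensions
# (say, passage support, answer-type match, popularity), each in [0, 1].
HYPOTHESES = {
    "Chicago": [0.9, 0.8, 0.7],
    "Toronto": [0.4, 0.2, 0.6],
}

# Hand-set logistic-regression weights and bias; illustrative only.
WEIGHTS = [2.0, 1.5, 0.5]
BIAS = -2.0

def confidence(scores):
    """Combine evidence dimensions into a [0, 1] confidence (sigmoid)."""
    z = BIAS + sum(w * s for w, s in zip(WEIGHTS, scores))
    return 1.0 / (1.0 + math.exp(-z))

# Score hypotheses in parallel; a thread pool stands in, at toy scale,
# for the massively parallel evaluation the abstract describes.
with ThreadPoolExecutor() as pool:
    confs = dict(zip(HYPOTHESES, pool.map(confidence, HYPOTHESES.values())))

best, conf = max(confs.items(), key=lambda kv: kv[1])

BUZZ_THRESHOLD = 0.5  # buzz only when confident enough to risk answering

if conf >= BUZZ_THRESHOLD:
    # Toy wager policy (hypothetical): stake grows with confidence
    # above the buzz threshold.
    bet = round(1000 * (conf - BUZZ_THRESHOLD) / (1 - BUZZ_THRESHOLD))
    print(f"Buzz with '{best}' (confidence {conf:.2f}), bet ${bet}")
else:
    print(f"Stay silent (best confidence {conf:.2f})")
```

Keeping the combination layer simple and learned is what lets the system report a confidence it can act on, rather than just a ranked list of answers.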

In this talk, I will introduce the audience to the Jeopardy! Challenge and explain how Watson was built to defeat the two most celebrated human champions of all time. I will then discuss how Watson will advance beyond Jeopardy! to solve real problems in healthcare through natural language dialog, ultimately taking another step towards Turing’s vision.
