When Logical Inference Helps Determining Textual Entailment (and When it Doesn't)

author: Johan Bos, Sapienza University of Rome
published: Feb. 25, 2007, recorded: April 2006, views: 3606

Description

We compare and combine two methods for approaching the second textual entailment challenge (RTE-2): a shallow method based mainly on word overlap, and a method based on logical inference using first-order theorem proving and model building techniques. We use a machine learning technique to combine features of both methods. We submitted two runs: one using only the shallow features, yielding an accuracy of 61.6%, and one using features of both methods, yielding an accuracy of 60.6%. These figures suggest that logical inference didn't help much. Closer inspection of the results revealed that logical inference played a significant role in performance only for some of the subtasks. We try to explain the reason for these results.
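The shallow method described above relies mainly on word overlap between the text and the hypothesis. As a rough illustration only (this is not the authors' actual system, and the tokenization and threshold here are invented for the example), such a baseline can be sketched as follows:

```python
# Hypothetical sketch of a word-overlap entailment baseline.
# Assumptions: whitespace tokenization, lowercasing, and a fixed
# decision threshold -- all illustrative, not the authors' settings.

def word_overlap(text: str, hypothesis: str) -> float:
    """Fraction of hypothesis tokens that also occur in the text."""
    text_tokens = set(text.lower().split())
    hyp_tokens = hypothesis.lower().split()
    if not hyp_tokens:
        return 0.0
    return sum(tok in text_tokens for tok in hyp_tokens) / len(hyp_tokens)

def predicts_entailment(text: str, hypothesis: str,
                        threshold: float = 0.7) -> bool:
    """Predict entailment when overlap exceeds the (illustrative) threshold."""
    return word_overlap(text, hypothesis) >= threshold

if __name__ == "__main__":
    t = "Johan Bos works at the Sapienza University of Rome."
    h = "Johan Bos works in Rome."
    print(word_overlap(t, h))          # high overlap
    print(predicts_entailment(t, h))
```

In a full system, a score like this would be one feature among several fed to a machine-learned classifier, alongside features derived from theorem proving and model building.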

See Also:

Download slides: LaSapienza.ppt (74.0 KB)

Download slides: RTE2panel.ppt (32.5 KB)

Download slides: RTE2.ppt (168.5 KB)

Download slides: SMART.ppt (504.5 KB)
