Using TREC for cross-comparison between classic IR and ontology-based search models at a Web scale

author: Miriam Fernández, Knowledge Media Institute (KMI), Open University (OU)
author: Vanessa Lopez, IBM Research Ireland
author: Marta Sabou, Vienna University of Technology
author: Victoria Uren, Knowledge Media Institute (KMI), Open University (OU)
author: David Vallet, Escuela Politecnica Superior, Autonomous University of Madrid
author: Enrico Motta, Knowledge Media Institute (KMI), Open University (OU)
author: Pablo Castells, Escuela Politecnica Superior, Autonomous University of Madrid
published: May 27, 2009,   recorded: April 2009,   views: 285
Download slides: semseach09_fernandez_utcc_01.ppt (3.8 MB)



Description

The construction of standard datasets and benchmarks to evaluate ontology-based search approaches and to compare them against baseline IR models is a major open problem in the semantic technologies community. In this paper we propose a novel evaluation benchmark for ontology-based IR models based on an adaptation of the well-known Cranfield paradigm (Cleverdon, 1967) traditionally used by the IR community. The proposed benchmark comprises: 1) a text document collection, 2) a set of queries and their corresponding document relevance judgments, and 3) a set of ontologies and Knowledge Bases covering the query topics. The document collection and the set of queries and judgments are taken from one of the most widely used datasets in the IR community, the TREC Web track. As a use case example we apply the proposed benchmark to compare a real ontology-based search model (Fernandez, et al., 2008) against the best IR systems of the TREC 9 and TREC 2001 competitions. A deep analysis of the strengths and weaknesses of this benchmark and a discussion of how it can be used to evaluate other ontology-based search systems are also included at the end of the paper.
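The Cranfield-style setup described above, a fixed query set with per-query document relevance judgments, lends itself to a small evaluation loop. The following Python is an illustrative sketch only, not the paper's actual tooling (the function names and the toy `qrels`/`run` data are assumptions; TREC evaluations typically use the official trec_eval tool):

```python
# Hypothetical sketch of Cranfield-style evaluation: given TREC-style
# relevance judgments (qrels) and a system's ranked results (a run),
# compute precision@10 and average precision per query.

def precision_at_k(ranked_docs, relevant, k=10):
    """Fraction of the top-k ranked documents that are judged relevant."""
    top_k = ranked_docs[:k]
    return sum(1 for d in top_k if d in relevant) / k

def average_precision(ranked_docs, relevant):
    """Mean of precision values at the ranks where relevant docs appear,
    divided by the total number of judged-relevant documents."""
    if not relevant:
        return 0.0
    hits, precisions = 0, []
    for rank, doc in enumerate(ranked_docs, start=1):
        if doc in relevant:
            hits += 1
            precisions.append(hits / rank)
    return sum(precisions) / len(relevant)

# Toy data standing in for one TREC topic's judgments and a ranked run.
qrels = {"451": {"d1", "d3", "d7"}}
run = {"451": ["d3", "d2", "d1", "d5", "d7", "d9", "d4", "d8", "d6", "d0"]}

for topic, ranked in run.items():
    rel = qrels[topic]
    print(topic,
          precision_at_k(ranked, rel),
          round(average_precision(ranked, rel), 4))
```

Averaging `average_precision` over all topics would give MAP, one of the standard measures used to compare systems in the TREC Web track.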
