Empirical Analysis of Ranking Models for an Adaptable Dataset Search

published: July 10, 2018,   recorded: June 2018,   views: 500
Description

Currently available datasets still have a large unexplored potential for interlinking. Ranking techniques contribute to this task by scoring datasets according to the likelihood of finding entities related to those of a target dataset. Ranked datasets can be either manually selected for standalone link discovery tasks or automatically inspected by programs that traverse the ranking looking for entity links. In the first case, users typically choose datasets that seem most appropriate among those at the top of the ranking, with little tendency toward an exhaustive selection over the entire ranking. Automated processes, on the other hand, would scan all datasets along a whole slice of the top of the ranking. Metrics such as nDCG better capture the degree to which rankings match users' expectations of finding the most relevant datasets at the very top. Automatic processes, by contrast, would benefit most from rankings with greater recall of datasets with related entities throughout the entire slice traversed; in this case, Recall at Position k better discriminates between ranking models. This work presents empirical comparisons between different ranking models and argues that different algorithms could be used depending on whether the ranking is handled manually or automatically and, also, on the available metadata of the datasets. Experiments indicate that the ranking algorithms that performed best on nDCG do not always have the best Recall at Position k at high recall levels. From the automatic perspective, the best algorithms may find the same number of datasets with related entities while inspecting a slice of the ranking at least 40% smaller. From the manual perspective, the best algorithms may increase nDCG by 5-20%, depending on the set of features.
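To make the contrast between the two metrics concrete, the sketch below computes nDCG@k and Recall@k for two hypothetical rankings of the same candidate pool. The relevance grades and both rankings are illustrative assumptions, not data from the paper; they are chosen so that one ranking wins on nDCG while the other wins on Recall@k, mirroring the trade-off the abstract describes.

```python
# Minimal sketch of the two metrics contrasted above: nDCG@k rewards placing
# the most relevant datasets at the very top, while Recall@k rewards covering
# relevant datasets anywhere in the top-k slice. All values are illustrative.
import math

def dcg_at_k(relevances, k):
    """Discounted cumulative gain over the top-k positions."""
    return sum(rel / math.log2(i + 2) for i, rel in enumerate(relevances[:k]))

def ndcg_at_k(relevances, k):
    """DCG normalized by the DCG of the ideal (descending-sorted) ranking."""
    ideal = dcg_at_k(sorted(relevances, reverse=True), k)
    return dcg_at_k(relevances, k) / ideal if ideal > 0 else 0.0

def recall_at_k(relevances, k):
    """Fraction of all relevant items (rel > 0) that appear in the top-k slice."""
    total_relevant = sum(1 for rel in relevances if rel > 0)
    found = sum(1 for rel in relevances[:k] if rel > 0)
    return found / total_relevant if total_relevant > 0 else 0.0

# Two hypothetical orderings of the same 8 candidate datasets, with graded
# relevance {3, 2, 1, 1, 1, 0, 0, 0}:
#   A front-loads the most relevant datasets but misses three relevant ones
#     in the top-5 slice; B covers all relevant datasets within the top 5
#     but buries the single most relevant one near the bottom.
ranking_a = [3, 2, 0, 0, 0, 1, 1, 1]
ranking_b = [2, 1, 1, 1, 1, 0, 0, 3]

for name, ranking in [("A", ranking_a), ("B", ranking_b)]:
    print(name,
          "nDCG@5 =", round(ndcg_at_k(ranking, 5), 3),
          "Recall@5 =", round(recall_at_k(ranking, 5), 3))
# A scores higher on nDCG@5 (~0.764 vs ~0.708), so a user scanning only the
# top would prefer it; B reaches Recall@5 = 1.0 (vs 0.4), so an automated
# process traversing the whole slice would find more linked datasets with it.
```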

Download slides: eswc2018_leme_empirical_analysis_01.pdf (960.6 KB)

