published: Feb. 25, 2007, recorded: July 2005, views: 3915
There is a variety of learning methods capable of inducing predictive models from data. To decide which method to use on a particular data set, we need a systematic way to evaluate and compare the performance of different methods. This talk describes and illustrates the key criteria and methods for performance assessment and comparison. It first introduces predictive error, the most widely used criterion for evaluating predictive models. Since we want to evaluate the predictive error of a model on independent test data, unseen during learning, the first part of the talk focuses on methods for estimating predictive error on test data and on resolving the well-known bias-variance trade-off in machine learning. We also overview techniques for pair-wise comparison of the performance of learning methods. While the first part of the talk deals only with the classification task (i.e., predicting discrete variables), the second part provides a wider perspective, covering the evaluation of methods for predicting class probability distributions and numeric variables, as well as situations where the error depends on the type of misclassification. Finally, we learn how to assess other aspects of predictive performance, such as the complexity of the induced models and their comprehensibility.
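The central idea above, estimating predictive error on data the model never saw during learning, is commonly done with k-fold cross-validation: the data is split into k folds, and each example is tested exactly once by a model trained on the other folds. A minimal sketch of this (the function names `k_fold_cv_error`, `train_majority`, and `predict_majority` and the toy data are hypothetical illustrations, not from the talk):

```python
import random

def k_fold_cv_error(data, labels, train_fn, predict_fn, k=10, seed=0):
    """Estimate predictive (classification) error by k-fold cross-validation.

    Each example is predicted exactly once, by a model that was
    trained without seeing that example.
    """
    idx = list(range(len(data)))
    random.Random(seed).shuffle(idx)          # shuffle for unbiased folds
    folds = [idx[i::k] for i in range(k)]     # k roughly equal-sized folds
    errors = 0
    for test_idx in folds:
        held_out = set(test_idx)
        train_idx = [i for i in idx if i not in held_out]
        model = train_fn([data[i] for i in train_idx],
                         [labels[i] for i in train_idx])
        errors += sum(predict_fn(model, data[i]) != labels[i]
                      for i in test_idx)
    return errors / len(data)

# Hypothetical toy learner: always predict the majority class
# of its training fold (ignores the input entirely).
def train_majority(xs, ys):
    return max(set(ys), key=ys.count)

def predict_majority(model, x):
    return model

data = list(range(20))
labels = [0] * 14 + [1] * 6   # 70% of examples belong to class 0
err = k_fold_cv_error(data, labels, train_majority, predict_majority, k=5)
print(err)  # majority learner misclassifies the 6 minority examples: 0.3
```

The same per-fold error estimates can then feed the pair-wise comparisons mentioned in the abstract, e.g. a paired significance test over the folds for two competing methods.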