Entropy Properties of a Decision Rule Class in Connection with Machine Learning Abilities

author: Alexey Chervonenkis, Computer Learning Research Centre, Royal Holloway, University of London
published: Oct. 8, 2007, recorded: September 2007

Description

Many methods of machine learning are based on the idea of empirical risk minimisation: find the decision rule or model, from some fixed class, that best fits the data presented in the training set. The idea rests on the law of large numbers: the empirical risk converges to the real risk if the training set is large enough. But if the class of decision rules or models is too large (in some sense), one meets the problem of overfitting: the model corresponds perfectly to the data in the training set but shows large errors on new data. This happens because only uniform convergence of the empirical risk to the real risk guarantees that the behaviour of the optimal model on the training set is close to its behaviour on new data.

We introduce the notion of the entropy of a decision rule class over a fixed sample sequence as the logarithm of the number of possible classifications of the sequence by the rules of the class. The maximum of this entropy over all sequences of a fixed length l gives a sufficient condition for uniform convergence, together with the corresponding estimates. But only the behaviour of the average entropy H(l) gives a necessary and sufficient condition for uniform convergence: H(l)/l (the average entropy per symbol) must go to zero as the sequence length goes to infinity. If this condition does not hold, then there exists a set of objects of non-zero probability measure such that almost all sequences of arbitrary finite length drawn from this set can be divided in all possible ways by the rules of the class; one can easily see that in this case overfitting is inevitable. Similar results hold for real-valued dependencies in place of decision rules.
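As a concrete illustration of the condition H(l)/l going to zero, consider the class of one-dimensional threshold rules x -> 1[x > t]. This example is not from the lecture itself; the choice of class, the Monte Carlo averaging, and the function names below are assumptions made for this sketch. On a sample of l distinct points the thresholds induce exactly l + 1 distinct classifications, so H(l) = log2(l + 1) and the entropy per symbol tends to zero, which by the criterion above guarantees uniform convergence.

    import numpy as np

    def average_entropy(l, trials=20, seed=0):
        # Monte Carlo estimate of the average entropy H(l) of the class of
        # one-dimensional threshold rules {x -> 1[x > t]}: the expected log
        # of the number of distinct classifications the class induces on a
        # random sample of length l.
        rng = np.random.default_rng(seed)
        total = 0.0
        for _ in range(trials):
            x = rng.uniform(size=l)
            # Every classification is determined by where t falls relative
            # to the sorted sample, so there are exactly l + 1 of them.
            cuts = np.concatenate(([-np.inf], np.sort(x)))
            classifications = {tuple(x > t) for t in cuts}
            total += np.log2(len(classifications))
        return total / trials

    for l in (10, 50, 200):
        print(f"l = {l:4d}   H(l)/l = {average_entropy(l) / l:.4f}")
    # H(l)/l = log2(l + 1)/l shrinks toward 0, so uniform convergence holds.
    # A class that shatters every sample would give H(l)/l = 1 instead,
    # and overfitting would be inevitable.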

Download slides: ida07_ljubljana_chervonenkis_alexey.pdf (376.0 KB)

Reviews and comments:

Comment 1 (kirk, September 30, 2021 at 12:08 p.m.):

It's nice to be able to watch these lectures long after the speaker has passed away; they give the impression that the speaker still lives on through them.
