On Optimal Estimators in Learning Theory

author: Vladimir Temlyakov, University of South Carolina
published: Feb. 25, 2007,   recorded: May 2005
Download slides: temylakov_ChicW_pr.pdf (88.2 KB)



Description

This talk addresses some problems of supervised learning in the setting formulated by Cucker and Smale. Supervised learning, or learning from examples, refers to a process that builds, from available data of inputs x_i and outputs y_i, i = 1,...,m, a function that best represents the relation between the inputs x in X and the corresponding outputs y in Y. The goal is to find an estimator f_z, on the basis of the given data z := ((x_1,y_1),...,(x_m,y_m)), that approximates well the regression function f_p of the unknown probability measure p defined on Z = X × Y. We assume that the (x_i,y_i), i = 1,...,m, are independent and distributed according to p.

There are several important ingredients in the mathematical formulation of this problem. We follow the approach that has become standard in approximation theory and has been used in recent papers: we first choose a function class W (a hypothesis space H) to work with. After selecting a class W, there are two ways to proceed. The first is based on studying the approximation of the L_2(p_X) projection f_W := (f_p)_W of f_p onto W, where p_X is the marginal probability measure on X. This setting is known as the improper function learning problem, or the projection learning problem; in this case we do not assume that the regression function f_p comes from a specific (say, smoothness) class of functions. The second way is based on the assumption f_p in W; this setting is known as the proper function learning problem. For instance, we may assume that f_p has some smoothness. We will give some upper and lower estimates in both settings.
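For readers less familiar with this framework, the following is a short LaTeX sketch of the standard quantities behind the abstract, written in the notation of the description above. It only restates textbook definitions from the Cucker–Smale setting (the quadratic error, the regression function, its projection onto the hypothesis class W); the specific estimators and the upper and lower bounds discussed in the talk are not reproduced here.

\documentclass{article}
\usepackage{amsmath,amssymb}
\begin{document}

% p is a probability measure on Z = X x Y, and p_X is its marginal on X.

% Quadratic error (risk) of a candidate function f : X -> Y
\[
  \mathcal{E}(f) \;=\; \int_{Z} \bigl(f(x) - y\bigr)^{2}\, dp .
\]

% Regression function: the minimizer of the error over all measurable f
\[
  f_{p}(x) \;=\; \int_{Y} y \, dp(y \mid x).
\]

% The error of any f splits into an approximation part and the intrinsic noise
\[
  \mathcal{E}(f) \;=\; \|f - f_{p}\|_{L_{2}(p_{X})}^{2} \;+\; \mathcal{E}(f_{p}).
\]

% Improper (projection) learning: measure the estimator f_z against
\[
  f_{W} \;:=\; (f_{p})_{W} \;=\; \operatorname*{arg\,min}_{f \in W} \|f - f_{p}\|_{L_{2}(p_{X})},
\]
% i.e. study \(\|f_{z} - f_{W}\|_{L_{2}(p_{X})}\) without assuming \(f_{p} \in W\).

% Proper learning: assume \(f_{p} \in W\) (e.g. a smoothness class) and study
% \(\|f_{z} - f_{p}\|_{L_{2}(p_{X})}\) directly.

\end{document}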
