Sparse Exponential Weighting and Langevin Monte-Carlo

author: Alexandre Tsybakov, Université Pierre et Marie Curie - Paris 6
published: May 6, 2009,   recorded: April 2009,   views: 286

Download slides: smls09_tsybakov_sewalmc_01.pdf (780.8 KB)



The performance of statistical estimators in several scenarios, such as adaptive nonparametric estimation, aggregation of estimators, and estimation under a sparsity constraint, can be assessed in terms of sparsity oracle inequalities (SOI) for the prediction risk. One of the challenges is to find estimators that attain the sharpest SOI under minimal assumptions on the dictionary. Methods of estimation adapted to the sparsity scenario, such as the Lasso, the Dantzig selector, or their modifications, can be easily realized for very large dimensions of the problem, but their performance is conditioned by severe restrictions on the dictionary. Such methods fail when the elements of the dictionary are not approximately uncorrelated. This is somewhat unsatisfactory, since it is known that the BIC method enjoys better SOI without any assumption on the dictionary. However, the BIC method is NP-hard. This talk will focus on Sparse Exponential Weighting, a new technique of sparse recovery that aims to strike a compromise between theoretical optimality and computational efficiency. The method is based on aggregation with exponential weights using a heavy-tailed, sparsity-favoring prior. The theoretical performance of Sparse Exponential Weighting in terms of SOI is comparable with that of the BIC and is even better in some respects. No assumption on the dictionary is needed. At the same time, we show that the method is computationally feasible for relatively large dimensions of the problem. We prove that Langevin Monte-Carlo (LMC) algorithms can be successfully used for computing Sparse Exponential Weighting estimators. Numerical experiments confirm the fast convergence of the LMC and the good performance of the resulting estimators. This is joint work with Arnak Dalalyan.
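The combination described in the abstract — exponential-weights aggregation under a heavy-tailed sparsity prior, with the posterior mean approximated by averaging Langevin Monte-Carlo iterates — can be sketched roughly as follows. This is a minimal illustration, not the authors' implementation: the Student-like prior pi(theta_j) proportional to (tau^2 + theta_j^2)^(-2), the noise variance, the step size, and all data-generating values are assumptions chosen for a toy example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy sparse regression problem (all values are illustrative assumptions).
n, p, s = 50, 100, 5                # samples, dictionary size, true sparsity
X = rng.standard_normal((n, p))     # dictionary / design matrix
theta_true = np.zeros(p)
theta_true[:s] = 3.0
y = X @ theta_true + 0.5 * rng.standard_normal(n)

sigma2 = 0.25                       # assumed noise variance
tau = 0.1                           # scale of the heavy-tailed prior

def grad_log_post(theta):
    """Gradient of the log posterior: Gaussian likelihood plus a
    heavy-tailed sparsity-favoring prior pi(theta_j) ~ (tau^2+theta_j^2)^-2."""
    grad_lik = X.T @ (y - X @ theta) / sigma2
    grad_prior = -4.0 * theta / (tau**2 + theta**2)
    return grad_lik + grad_prior

# Unadjusted Langevin Monte-Carlo: gradient step plus Gaussian noise.
h = 1e-4                            # step size (assumption; needs tuning)
burn_in, n_iter = 2000, 10000
theta = np.zeros(p)
running_sum = np.zeros(p)
for t in range(n_iter):
    theta = theta + h * grad_log_post(theta) \
                  + np.sqrt(2.0 * h) * rng.standard_normal(p)
    if t >= burn_in:
        running_sum += theta

# The aggregate (posterior mean) is the average of the post-burn-in iterates.
theta_hat = running_sum / (n_iter - burn_in)
```

Averaging the LMC trajectory approximates the posterior-mean aggregate; the heavy tails of the prior let large coefficients survive while the sharp peak at zero shrinks the rest, which is the sparsity-favoring behavior the abstract refers to.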
