Beyond the regret minimization barrier: an optimal algorithm for stochastic strongly-convex optimization

author: Satyen Kale, Yahoo! Research
published: Aug. 2, 2011, recorded: July 2011
Description

We give a novel algorithm for stochastic strongly-convex optimization in the gradient oracle model which returns an O(1/T)-approximate solution after T gradient updates. This rate of convergence is optimal in the gradient oracle model. This improves upon the previously known best rate of O(log(T)/T), which was obtained by applying an online strongly-convex optimization algorithm with regret O(log(T)) to the batch setting. We complement this result by proving that any algorithm has expected regret of Ω(log(T)) in the online stochastic strongly-convex optimization setting. This lower bound holds even in the full-information setting which reveals more information to the algorithm than just gradients. This shows that any online-to-batch conversion is inherently suboptimal for stochastic strongly-convex optimization. This is the first formal evidence that online convex optimization is strictly more difficult than batch stochastic convex optimization.
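
The exact algorithm and constants are given in the paper; as a rough illustration of the kind of epoch-based stochastic gradient scheme the abstract alludes to (epochs of doubling length, each restarted from the average of the previous epoch, with a step size tied to the strong-convexity parameter), here is a hedged Python sketch. The function names, step-size choices, and parameters below are illustrative assumptions, not the paper's precise method.

    import numpy as np

    def epoch_sgd(grad_oracle, project, x0, T, lam=1.0):
        # Sketch of an epoch-restarted SGD scheme for a lam-strongly-convex
        # stochastic objective, using at most T noisy gradient queries.
        # Step sizes and epoch schedule are illustrative, not the paper's.
        x = np.array(x0, dtype=float)
        epoch_len = 1
        used = 0
        while used < T:
            k = min(epoch_len, T - used)
            eta = 1.0 / (lam * epoch_len)   # step size shrinks as epochs grow
            avg = np.zeros_like(x)
            y = x.copy()
            for _ in range(k):
                g = grad_oracle(y)          # noisy gradient at the current point
                y = project(y - eta * g)    # projected stochastic gradient step
                avg += y
            x = avg / k                     # restart next epoch from the epoch average
            used += k
            epoch_len *= 2
        return x

    # Toy usage: minimize f(x) = E[(x - z)^2] with z ~ N(1, 1); the minimizer is x = 1
    # and the strong-convexity parameter is 2.
    rng = np.random.default_rng(0)
    oracle = lambda x: 2 * (x - (1 + rng.standard_normal()))
    proj = lambda x: np.clip(x, -10.0, 10.0)
    print(epoch_sgd(oracle, proj, np.array([5.0]), T=10000, lam=2.0))

In contrast, plain SGD with step size proportional to 1/(lam*t) and online-to-batch averaging over all iterates is the scheme that yields the O(log(T)/T) rate the abstract says is improved upon.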

Download slides: colt2011_kale_optimization_01.pdf (1.2 MB)
