Follow the leader if you can, Hedge if you must
published: Oct. 6, 2014, recorded: December 2013, views: 33
In sequential prediction with expert advice, the intuitive Follow-the-Leader (FTL) algorithm predicts like the expert that has performed best in the past. Although this algorithm performs very well on stochastic data or in other 'easy' cases where there are few leader changes, it is known to fail dramatically when the data are adversarial. On the other hand, other algorithms (like exponential weights with conservative parameter tuning) perform well on adversarial data, but learn much slower than FTL when we are indeed lucky enough to encounter the 'easy' data for which FTL performs well. This observation raises the question of whether it is possible to get the best of both worlds: is there a single algorithm that is robust to adversarial data, but exploits 'easy' data to learn faster when possible?
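As a minimal illustration of the contrast described above (not the paper's own experiments), the sketch below implements textbook FTL and Hedge (exponential weights) for two experts, and runs FTL on a standard adversarial loss sequence in which the past leader is always the expert that loses next; the loss values and learning rate eta are illustrative choices, not taken from the paper.

```python
import math

def ftl_predict(cum_losses):
    """Follow-the-Leader: pick the expert with the smallest cumulative loss so far."""
    return min(range(len(cum_losses)), key=lambda i: cum_losses[i])

def hedge_weights(cum_losses, eta):
    """Hedge / exponential weights: weight each expert by exp(-eta * cumulative loss)."""
    scores = [math.exp(-eta * c) for c in cum_losses]
    total = sum(scores)
    return [s / total for s in scores]

# Classic adversarial sequence for two experts: after an initial (0.5, 0.0) round,
# the losses alternate so that the current leader always loses the next round.
losses = [(0.5, 0.0)] + [(0.0, 1.0) if t % 2 else (1.0, 0.0) for t in range(1, 10)]

cum = [0.0, 0.0]
ftl_loss = 0.0
for loss in losses:
    leader = ftl_predict(cum)        # FTL follows the current leader...
    ftl_loss += loss[leader]         # ...who loses on every round of this sequence
    cum = [c + x for c, x in zip(cum, loss)]

print(ftl_loss)   # 9.5: FTL loses almost every round
print(min(cum))   # 4.5: the best single expert in hindsight does far better
print(hedge_weights(cum, eta=0.5))   # Hedge keeps hedged weights over both experts
```

On stochastic or otherwise 'easy' data the comparison flips: FTL locks onto the best expert quickly, while a conservatively tuned Hedge keeps spreading weight and learns more slowly.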
I will discuss how previous methods that satisfy so-called second-order bounds get us part of the way: they work both for adversarial and for stochastic data, but do not exploit other 'easy' data for which FTL works well. Then I will present a new method, called FlipFlop, which satisfies the same second-order bounds as previous methods, but in addition is provably as good as FTL on any data.
This is joint work with Steven de Rooij, Peter Grünwald and Wouter Koolen. The paper "Follow the Leader If You Can, Hedge If You Must" is available from www.timvanerven.nl/publications.