Multi-Task Learning and Matrix Regularization
Published: Jan. 19, 2010; recorded: December 2009
Multi-task learning extends the standard paradigm of supervised learning. In multi-task learning, samples for multiple related tasks are given, and the goal is to learn a function for each task while also generalizing well to new tasks (that is, transferring the learned knowledge). Applications of this paradigm are numerous, ranging from computer vision to collaborative filtering to bioinformatics, and it also relates to vector-valued, multiclass, and multiview learning problems. I will present a framework for multi-task learning based on learning a common kernel for all tasks. I will also show how this formulation connects to the trace-norm and group-Lasso approaches. Moreover, the proposed optimization problem can be solved with a simple and efficient alternating minimization algorithm. It can also be "kernelized" by virtue of a multi-task representer theorem, which holds for a large family of matrix regularization problems and includes the classical representer theorem as a special case.
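To make the alternating-minimization idea concrete, here is a minimal NumPy sketch of one common trace-norm-style formulation of multi-task feature learning: minimize the sum of per-task squared losses plus gamma * tr(W^T D^{-1} W) over the weight matrix W (one column per task) and a shared positive semidefinite matrix D with unit trace. The two alternating steps, the function name, and all parameter choices below are illustrative assumptions, not the lecture's exact algorithm.

```python
import numpy as np

def multitask_alt_min(Xs, ys, gamma=1.0, n_iter=50, eps=1e-6):
    """Alternating minimization for trace-norm-style multi-task learning.

    Approximately minimizes
        sum_t ||X_t w_t - y_t||^2 + gamma * tr(W^T D^{-1} W)
    over W = [w_1 ... w_T] and a shared PSD matrix D with tr(D) = 1.
    (Illustrative sketch of one standard formulation, not the
    lecture's exact algorithm.)
    """
    d = Xs[0].shape[1]
    T = len(Xs)
    D = np.eye(d) / d                  # start from the isotropic metric
    W = np.zeros((d, T))
    for _ in range(n_iter):
        # Step 1: with D fixed, each task decouples into an
        # independent generalized ridge regression.
        Dinv = np.linalg.inv(D + eps * np.eye(d))
        for t, (X, y) in enumerate(zip(Xs, ys)):
            W[:, t] = np.linalg.solve(X.T @ X + gamma * Dinv, X.T @ y)
        # Step 2: with W fixed, the minimizing D is the matrix
        # square root of W W^T, normalized to unit trace.
        U, s, _ = np.linalg.svd(W, full_matrices=True)
        s_full = np.zeros(d)
        s_full[:len(s)] = s
        D = (U * s_full) @ U.T         # U diag(s) U^T = (W W^T)^{1/2}
        D /= max(np.trace(D), eps)
    return W, D
```

Each step solves its subproblem in closed form, which is what makes the alternating scheme simple and efficient; the kernelized version mentioned in the abstract would replace the explicit feature matrices X_t via the multi-task representer theorem.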