Optimization for Machine Learning

It is fair to say that at the heart of every machine learning algorithm lies an optimization problem, although it is only recently that this viewpoint has gained a significant following. Classical techniques based on convex optimization have occupied center stage because of their attractive theoretical properties. But machine learning paradigms such as structured learning and semi-supervised learning are now posing new non-smooth and non-convex problems. Moreover, machine learning is increasingly applied to real-world problems with massive datasets, streaming inputs, and complex models, which raise significant algorithmic and engineering challenges. In short, machine learning not only provides interesting applications but also challenges the underlying assumptions of most existing optimization algorithms.

There is therefore a pressing need for optimization "tuned" to the machine learning context. Relevant techniques include non-convex optimization (semi-supervised learning), combinatorial optimization and relaxations (structured learning), non-smooth optimization (sparsity constraints, L1 regularization, the Lasso, structure learning), stochastic optimization (massive or noisy datasets), decomposition techniques (parallel and distributed computation), and online learning (streaming inputs). These techniques draw inspiration from related fields such as operations research, theoretical computer science, and the broader optimization community. Motivated by these concerns, this workshop aims to address these issues.
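
As a small illustration of the non-smooth and stochastic techniques listed above, the following sketch applies proximal stochastic gradient descent to an L1-regularized least-squares (Lasso) problem. It is not taken from the workshop material; the synthetic data, step size, and regularization strength are illustrative assumptions only.

    import numpy as np

    def soft_threshold(w, t):
        # Proximal operator of the L1 norm: shrinks each coordinate toward zero.
        return np.sign(w) * np.maximum(np.abs(w) - t, 0.0)

    def lasso_prox_sgd(X, y, lam=0.1, step=0.01, epochs=20, seed=0):
        # Proximal SGD for: min_w (1/2n) ||Xw - y||^2 + lam * ||w||_1
        rng = np.random.default_rng(seed)
        n, d = X.shape
        w = np.zeros(d)
        for _ in range(epochs):
            for i in rng.permutation(n):
                # Stochastic gradient of the smooth squared-error term on one sample.
                grad = (X[i] @ w - y[i]) * X[i]
                # Gradient step followed by the L1 proximal (soft-thresholding) step.
                w = soft_threshold(w - step * grad, step * lam)
        return w

    if __name__ == "__main__":
        # Illustrative usage on synthetic data (hypothetical, not from the workshop).
        rng = np.random.default_rng(1)
        X = rng.normal(size=(200, 10))
        true_w = np.zeros(10)
        true_w[:3] = [2.0, -1.5, 0.5]   # sparse ground truth
        y = X @ true_w + 0.1 * rng.normal(size=200)
        print(np.round(lasso_prox_sgd(X, y), 2))

The soft-thresholding step is the proximal operator of the L1 norm; it is what allows a gradient-style method to handle the non-smooth penalty and return sparse solutions.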


The Workshop homepage can be found at http://opt.kyb.tuebingen.mpg.de/

