Regularization Methods for High Dimensional Machine Learning 2014
Teachers:
Francesca Odone, DIBRIS - Univ. Genova
Lorenzo Rosasco, DIBRIS - Univ. Genova
Period:
7-11 July 2014
Location:
TBD
Number of hours:
20
Objectives:
Understanding how intelligence works and how it can be emulated in machines is an
age-old dream and arguably one of the biggest challenges in modern science.
Learning, its principles, and its computational implementations are at the very core
of this endeavor. Only recently have we been able, for the first time, to develop
artificial intelligence systems that can solve complex tasks considered out of reach
for decades. Modern cameras can recognize faces, smartphones recognize their
owners' voices, cars equipped with cameras can detect pedestrians, and ATMs
automatically read checks. In most cases, at the root of these success stories are
machine learning algorithms, that is, software that is trained rather than
programmed to solve a task.
In this course, we focus on the fundamental approach to machine learning based on
regularization. We discuss key concepts and techniques that make it possible to treat
a large class of diverse approaches in a unified way, while providing the tools to
design new ones. Starting from the classical notions of smoothness, shrinkage, and
margin, we cover state-of-the-art techniques based on the concepts of geometry
(e.g. manifold learning), sparsity, and low rank, which allow the design of algorithms
for supervised learning, feature selection, structured prediction, and multitask
learning. Practical applications will be discussed.
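A unifying formulation, stated here as standard background rather than quoted from
the lectures, is Tikhonov regularization:

  \min_{f \in \mathcal{H}} \frac{1}{n} \sum_{i=1}^{n} V(y_i, f(x_i)) + \lambda R(f)

where \mathcal{H} is a hypothesis space, V is a loss function measuring the data fit,
R is a regularizer encoding a prior such as smoothness, sparsity, or low rank, and
\lambda > 0 trades off the two terms. Different choices of V, R, and \mathcal{H}
recover the methods listed in the summary below.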
Summary:
- Introduction to Machine Learning
- Kernels, Dictionaries and Regularization
- Regularization Networks and Support Vector Machines (see the code sketch after this list)
- Spectral methods for supervised learning
- Sparsity-based learning
- Multiple kernel learning
- Manifold regularization
- Multi-task learning
- Applications to high dimensional problems and infinite objects
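As a minimal illustration of one of the topics above, the following Python sketch
implements a regularization network for regression (kernel ridge regression with a
Gaussian kernel); the data, function names, and parameter values are illustrative
choices, not part of the course material.

  import numpy as np

  def gaussian_kernel(A, B, sigma=1.0):
      # Pairwise Gaussian kernel matrix between the rows of A and the rows of B.
      d2 = np.sum(A**2, axis=1)[:, None] + np.sum(B**2, axis=1)[None, :] - 2.0 * A @ B.T
      return np.exp(-d2 / (2.0 * sigma**2))

  def fit(X, y, lam=0.1, sigma=1.0):
      # Regularization network: solve (K + lam * n * I) c = y
      # for the expansion coefficients c.
      n = X.shape[0]
      K = gaussian_kernel(X, X, sigma)
      return np.linalg.solve(K + lam * n * np.eye(n), y)

  def predict(X_train, c, X_test, sigma=1.0):
      # Evaluate f(x) = sum_i c_i k(x, x_i) on the test points.
      return gaussian_kernel(X_test, X_train, sigma) @ c

  # Toy usage: regression of a noisy sine function.
  rng = np.random.default_rng(0)
  X = rng.uniform(-3.0, 3.0, size=(50, 1))
  y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(50)
  c = fit(X, y)
  print(predict(X, c, np.array([[0.0], [1.5]])))

Here the square loss plays the role of V and the squared kernel norm the role of R
in the regularization scheme given above.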
Final exam:
Final project or Wikipedia-style article.