Understanding how intelligence works and how it can be emulated in machines is an age-old dream and arguably one of the biggest challenges in modern science. Learning, its principles, and its computational implementations are at the very core of this endeavor. Only recently have we been able, for the first time, to develop artificial intelligence systems that can solve complex tasks considered out of reach for decades. Modern cameras can recognize faces, and smartphones recognize people's voices; cars equipped with cameras can detect pedestrians, and ATMs automatically read checks. In most cases, at the root of these success stories are machine learning algorithms, that is, software that is trained rather than programmed to solve a task.

In this course, we focus on the fundamental approach to machine learning based on regularization. We discuss key concepts and techniques that allow us to treat a huge class of diverse approaches in a unified way, while providing the tools to design new ones. Starting from the classical notions of smoothness, shrinkage, and margin, we cover state-of-the-art techniques based on the concepts of geometry (e.g., manifold learning), sparsity, and low rank, which allow one to design algorithms for supervised learning, feature selection, structured prediction, and multitask learning. Practical applications will be discussed.
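To make the central idea concrete, the following is a minimal sketch of Tikhonov regularization (ridge regression), the simplest instance of the regularization approach the course builds on. The function name `ridge_fit`, the choice of regularization parameter, and the toy data are illustrative assumptions, not material from the course itself.

```python
import numpy as np

def ridge_fit(X, y, lam=0.1):
    """Tikhonov-regularized least squares (ridge regression).

    Minimizes (1/n) * ||X w - y||^2 + lam * ||w||^2, whose
    closed-form solution is w = (X^T X + n*lam*I)^{-1} X^T y.
    The penalty lam * ||w||^2 enforces smoothness/shrinkage,
    trading data fit for stability of the solution.
    """
    n, d = X.shape
    A = X.T @ X + n * lam * np.eye(d)
    return np.linalg.solve(A, X.T @ y)

# Toy data (illustrative): the response depends on the first
# feature only, with a small amount of noise.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
y = 2.0 * X[:, 0] + 0.1 * rng.normal(size=100)

w = ridge_fit(X, y, lam=0.01)
```

Increasing `lam` shrinks the learned weights toward zero, which is the shrinkage effect mentioned above; choosing `lam` well is exactly the bias-variance trade-off that regularization theory studies.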