The Taub Faculty of Computer Science Events and Talks
Thursday, 21.12.2017, 10:30
Machine learning has recently been revolutionized by the
introduction of Deep Neural Networks. From a theoretical
viewpoint, however, these methods are still poorly understood; indeed, a key
challenge in Machine Learning today is to derive rigorous results for
optimization and generalization in deep learning. In this talk I will
present several tractable approaches to training neural networks. In the
second part I will discuss a new sequential algorithm for decision
making that can take into account structure in the action space and
is better attuned to realistic decision-making scenarios.
I will present our work that provides some of the first positive results
and yields new, provably efficient, and practical algorithms for training
certain types of neural networks. In a second work I will present a new
online algorithm that learns by sequentially sampling random networks
and asymptotically converges, in performance, to the optimal network.
Our approach improves on previous random-features-based learning in
terms of sample and computational complexity as well as expressiveness. In a more
recent work we take a different perspective on this problem: I will
provide sufficient conditions that guarantee tractable learning, using
the notion of refutation complexity. I will then discuss how this new
idea can lead to interesting new generalization bounds that can
potentially explain generalization in settings that are not always
captured by classical theory.
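As a rough illustration of the random-features style of learning mentioned above, here is a minimal sketch: sample a random one-hidden-layer ReLU network, keep its hidden weights fixed, and fit only the linear output layer. This is a generic sketch, not the algorithm from the talk; the network width, feature distribution, and use of ridge regression are assumptions made for the example.

```python
import numpy as np

# Generic random-features sketch: fixed random ReLU units plus a
# learned linear output layer. Width, scale, and the ridge parameter
# are illustrative choices, not taken from the talk.
rng = np.random.default_rng(0)

def random_relu_features(X, W, b):
    """Map inputs through fixed random ReLU units."""
    return np.maximum(X @ W + b, 0.0)

def fit_output_layer(Phi, y, ridge=1e-3):
    """Ridge-regression fit of the output weights on top of the random features."""
    d = Phi.shape[1]
    return np.linalg.solve(Phi.T @ Phi + ridge * np.eye(d), Phi.T @ y)

# Toy regression problem.
n, d_in, width = 500, 5, 200
X = rng.standard_normal((n, d_in))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(n)

W = rng.standard_normal((d_in, width)) / np.sqrt(d_in)  # fixed random hidden weights
b = rng.standard_normal(width)

Phi = random_relu_features(X, W, b)
w_out = fit_output_layer(Phi, y)
print("training MSE:", np.mean((Phi @ w_out - y) ** 2))
```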
In the setting of reinforcement learning, I will present a recently
developed algorithm for decision making in a metric action space.
As an application, we consider a dynamic pricing problem in which a
seller is faced with a stream of patient buyers: each buyer buys at the
lowest price offered within a certain time window. We use our algorithm to achieve
optimal regret, improving on previously known regret bounds.
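To make the pricing setting concrete, the following toy simulation posts prices against patient buyers who wait for the lowest price in their window. The valuation distribution, the requirement that the price fall below the buyer's valuation, the patience length, and the fixed-price-in-hindsight benchmark are all illustrative assumptions; this is not the algorithm presented in the talk.

```python
import numpy as np

# Toy patient-buyer model: buyer t has a private valuation v_t, stays for
# `patience` rounds, and buys at the lowest posted price in that window
# provided it does not exceed v_t. Distributions and the brute-force price
# grid are illustrative only.
rng = np.random.default_rng(1)

T, patience = 2000, 3
valuations = rng.uniform(0.0, 1.0, size=T)

def revenue(prices, valuations, patience):
    """Total revenue when buyer t sees prices[t : t + patience]."""
    total = 0.0
    for t, v in enumerate(valuations):
        best = prices[t : t + patience].min()  # patient buyer waits for the lowest price
        if best <= v:
            total += best
    return total

# Benchmark used when measuring regret: the best fixed price in hindsight,
# found here by a simple grid search.
grid = np.linspace(0.05, 0.95, 19)
best_fixed = max(revenue(np.full(T, p), valuations, patience) for p in grid)
posted = np.full(T, 0.5)  # naive fixed-price strategy for comparison
print("naive revenue:", revenue(posted, valuations, patience))
print("best fixed-price revenue:", best_fixed)
```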
Short Bio:
==========
Roi Livni is a research instructor in the Computer Science Department at
Princeton University. He is a recipient of the Eric and Wendy Schmidt
Fellowship for Strategic Innovation and a Yad Hanadiv postdoctoral fellow.
He completed his PhD at the Hebrew University, during which he was a
recipient of a Google Europe Fellowship in learning theory. During his
graduate studies he also served as a long-term intern at Microsoft
Research. His graduate work won a best paper award at ICML'13 and a
best student paper award at COLT'13.