Events and Lectures at the Henry and Marilyn Taub Faculty of Computer Science
Sunday, 12.01.2020, 10:30
This talk presents new computational and statistical barriers in machine learning, along
with the algorithmic developments that they inspire.
The computational barriers arise in nonconvex optimization: we prove lower bounds on the
(oracle) complexity of finding stationary points using (stochastic) gradient methods,
showing that gradient descent is unimprovable for a natural class of problems. We bypass
this barrier by designing an algorithm that outperforms gradient descent for a large
subclass of problems with high-order smoothness. Our algorithm leverages classical
momentum techniques from convex optimization using a "convex until proven guilty"
principle that we develop.
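
To make the "convex until proven guilty" principle concrete, here is a schematic Python sketch, not the authors' exact algorithm: run Nesterov-style momentum steps as if the objective were convex, check the convexity inequality between iterates, and treat any violation as a certificate of nonconvexity. The function names and the crude restart-on-violation response are illustrative assumptions.

import numpy as np

def guarded_momentum(f, grad, x0, L, steps=1000, eps=1e-4):
    # Schematic "convex until proven guilty" loop (illustrative sketch only).
    x = y = np.asarray(x0, dtype=float)
    t = 1.0
    for _ in range(steps):
        g = grad(y)
        if np.linalg.norm(g) <= eps:       # approximate stationary point found
            return y
        x_new = y - g / L                  # gradient step from the momentum point
        t_new = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        y_new = x_new + ((t - 1.0) / t_new) * (x_new - x)
        # Convexity certificate: a convex f satisfies f(x) >= f(y) + <grad(y), x - y>.
        if f(x) < f(y) + g @ (x - y):
            # "Proven guilty": the segment between x and y witnesses nonconvexity.
            # The actual algorithm exploits it to guarantee progress; here we
            # respond crudely by keeping the better point and restarting momentum.
            y_new = x_new if f(x_new) < f(x) else x
            t_new = 1.0
        x, y, t = x_new, y_new, t_new
    return y

The real algorithm pairs a detected violation with a targeted step that provably decreases the objective; the restart above only keeps the sketch sound.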
The statistical barrier is the large amount of data required for adversarially robust
learning. In a Gaussian model, we prove that unlabeled data allows us to circumvent an
information-theoretic gap between robust and standard classification. Our analysis
directly leads to a general robust self-training procedure; we use it to significantly
improve state-of-the-art performance on the challenging and extensively studied CIFAR-10
adversarial robustness benchmark.
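
The robust self-training procedure itself is simple enough to sketch. Below is a minimal PyTorch-flavored version, under the assumption of standard data loaders and a plain adversarial-training loss: a conventionally trained classifier pseudo-labels the unlabeled pool, and the robust model is then adversarially trained on labeled and pseudo-labeled data together. All names are illustrative, and the paper's exact training objective may differ from the cross-entropy loss used here.

import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=10):
    # Standard L-infinity PGD: ascend the cross-entropy loss within an eps-ball.
    delta = torch.empty_like(x).uniform_(-eps, eps).requires_grad_(True)
    for _ in range(steps):
        loss = F.cross_entropy(model(x + delta), y)
        grad, = torch.autograd.grad(loss, delta)
        delta = (delta + alpha * grad.sign()).clamp(-eps, eps).detach().requires_grad_(True)
    return (x + delta.detach()).clamp(0.0, 1.0)  # assumes pixel values in [0, 1]

def robust_self_training(model, optimizer, labeled_loader, unlabeled_loader, standard_model):
    # Pseudo-label unlabeled images with a standard (non-robust) classifier,
    # then adversarially train the robust model on the combined batch.
    standard_model.eval()
    for (x_l, y_l), (x_u, _) in zip(labeled_loader, unlabeled_loader):
        with torch.no_grad():
            y_u = standard_model(x_u).argmax(dim=1)   # pseudo-labels
        x = torch.cat([x_l, x_u])
        y = torch.cat([y_l, y_u])
        x_adv = pgd_attack(model, x, y)               # worst-case perturbations
        loss = F.cross_entropy(model(x_adv), y)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()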
Short Bio:
============
Yair Carmon is a PhD student at Stanford University working with Prof. John Duchi and
Prof. Aaron Sidford. His research focuses on understanding and overcoming the fundamental
limits of machine learning; specific interests include nonconvex optimization, efficient
algorithms for matrix-structured data, and robust machine learning. Yair received an M.Sc.
from the Technion in 2015, under the supervision of Prof. Shlomo Shamai and Prof. Tsachy
Weissman. Between 2009 and 2015 he held several positions as an algorithm engineer and
research team leader, working on communications, signal processing, and computer vision.
His awards include the Stanford Graduate Fellowship, the Numerical Technologies
Fellowship, and membership in the Technion Excellence Program.