Dan Alistarh (IST Austria)
Thursday, 17.9.2020, 11:30
Zoom Lecture: https://technion.zoom.us/j/96000595000
Machine learning has made considerable progress over the past decade, matching and even surpassing human performance on a varied set of narrow computational tasks. This progress has been enabled by the widespread availability of large datasets, as well as by improved algorithms and models. Distribution, implemented either through single-node concurrency or through multi-node parallelism, has been the third key ingredient to these advances.
The goal of this talk is to provide an overview of the role of distributed computing in machine learning, with an eye towards the intriguing trade-offs between the synchronization and communication costs of distributed machine learning algorithms, on the one hand, and their convergence, on the other. The focus will be on parallelization strategies for the fundamental stochastic gradient descent (SGD) algorithm, and on the distributed mean estimation problem. Along the way, we will survey ongoing research and open problems in distributed machine learning. The lecture will assume no prior knowledge of machine learning or optimization, beyond familiarity with basic concepts in algebra and analysis.
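To make the basic object of study concrete, here is a minimal, hypothetical sketch of one synchronous round of data-parallel SGD: each worker computes a gradient on its own data shard, and the averaged gradient is applied globally. The least-squares problem, worker count, and learning rate are illustrative assumptions, not taken from the talk.

```python
import numpy as np

def local_gradient(w, X, y):
    # Least-squares gradient on one worker's shard:
    # grad of (1/2n) * ||Xw - y||^2 with respect to w.
    return X.T @ (X @ w - y) / len(y)

def parallel_sgd_step(w, shards, lr):
    # One synchronous round: every worker computes a local gradient,
    # the gradients are averaged (the communication step), and the
    # averaged gradient is applied to the shared model.
    grads = [local_gradient(w, X, y) for X, y in shards]
    return w - lr * np.mean(grads, axis=0)

# Illustrative synthetic regression problem split across 4 workers.
rng = np.random.default_rng(0)
X = rng.normal(size=(80, 3))
w_true = np.array([1.0, -2.0, 0.5])
y = X @ w_true

shards = [(X[i::4], y[i::4]) for i in range(4)]  # 4 simulated workers
w = np.zeros(3)
for _ in range(300):
    w = parallel_sgd_step(w, shards, lr=0.1)
# After enough rounds, w approaches w_true.
```

The averaging step is exactly where the trade-offs discussed in the talk arise: compressing or delaying it reduces communication cost, but can slow convergence.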
Dan Alistarh is currently an Assistant Professor at IST Austria. Previously, he was affiliated with ETH Zurich, Microsoft Research, and MIT. He received his PhD from EPFL, under the guidance of Prof. Rachid Guerraoui. His research focuses on distributed algorithms and concurrent data structures, and spans from algorithms and lower bounds to practical implementations. He was awarded an ERC Starting Grant with a focus on distributed machine learning, and was recently a co-recipient of best paper awards at OPODIS 2019 and PPoPP 2020.