Events and Talks at the Henry and Marilyn Taub Faculty of Computer Science
Tuesday, 13.12.2022, 14:30
It is standard practice in deep learning to train large models on relatively small datasets. This could, in principle, lead to severe overfitting, yet more often than not the test error is nonetheless low. This phenomenon has prompted research on the so-called "Implicit Bias of Deep Learning Algorithms". In this talk I will discuss our recent work on several novel facets of this bias, presenting theoretical and empirical results in different settings. In particular, I will discuss the analysis of implicit bias in fine-tuning large models, in learning temporal models (e.g., RNNs), and in labeling images from very few examples.
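As a simple illustration of implicit bias (not taken from the talk itself), consider gradient descent on overparameterized linear regression: among the infinitely many parameter vectors that fit the training data exactly, gradient descent initialized at zero converges to the minimum-norm interpolating solution. A minimal sketch:

```python
import numpy as np

# Illustrative sketch: overparameterized linear regression
# (more parameters than training examples). Gradient descent on the
# squared loss, started from zero, converges to the minimum-norm
# solution among all interpolators -- a classic example of implicit bias.
rng = np.random.default_rng(0)
n, d = 5, 20                        # 5 examples, 20 parameters
X = rng.standard_normal((n, d))
y = rng.standard_normal(n)

w = np.zeros(d)                     # zero initialization matters here
lr = 0.01
for _ in range(50_000):
    w -= lr * X.T @ (X @ w - y)     # gradient of 0.5 * ||Xw - y||^2

w_min_norm = np.linalg.pinv(X) @ y  # the minimum-norm interpolator
print(np.allclose(w, w_min_norm, atol=1e-6))  # True: GD picks it implicitly
```

No explicit regularizer appears in the loss; the bias toward low norm comes entirely from the optimization algorithm and its initialization, which is the kind of effect the talk's analyses make precise in richer settings.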
Prof. Globerson received his BSc in computer science and physics from the Hebrew University in 1997, and his PhD in computational neuroscience from the Hebrew University in 2006. After his PhD, he was a postdoctoral fellow at the University of Toronto and a Rothschild postdoctoral fellow at MIT. He joined the Hebrew University School of Computer Science in 2008, and moved to the Tel Aviv University School of Computer Science in 2015. He has served as an associate editor for the Journal of Machine Learning Research and as Associate Editor-in-Chief for the IEEE Transactions on Pattern Analysis and Machine Intelligence. His work has received several paper awards (at NIPS, UAI, and ICML). He also serves as a Research Scientist at Google in Tel Aviv. In 2018 he was program co-chair for the UAI conference, and in 2019 he was general co-chair for UAI in Tel Aviv. In 2019 he received an ERC Consolidator Grant.
Host: Nir Rosenfeld.