Events
Events and Talks at the Henry and Marilyn Taub Faculty of Computer Science
Yonatan Belinkov - CS-Lecture
Monday, 31.12.2018, 10:30
Deep learning has become pervasive in everyday life, powering language
applications like Apple's Siri, Amazon's Alexa, and Google Translate.
An inherent limitation of these deep learning systems, however, is that
they often function as a "black box", preventing researchers and users
from discerning the roles of different components and what they learn
during training.
In this talk, I will describe my research on interpreting deep learning
models for language along three lines. First, I will present a
methodological framework for investigating how these models capture
various language properties. The experimental evaluation will reveal a
learned hierarchy of internal representations in deep models for machine
translation and speech recognition. Second, I will demonstrate that
despite their success, deep models of language fail to handle even
simple kinds of noise of the sort that humans are naturally robust to.
I will then propose simple methods for improving their robustness to
noise. Finally, I will turn to an intriguing problem in language
understanding, where dataset biases enable trivial solutions to complex
language tasks. I will show how to design models that are more robust to
such biases, and learn less biased latent representations.
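For readers unfamiliar with this style of analysis, a common recipe for investigating what internal representations capture is the "probing classifier": a simple supervised model trained on frozen hidden states, whose held-out accuracy is read as a measure of how much of a given linguistic property those states encode. The sketch below is a minimal illustration of that idea, not the talk's actual experimental setup; the arrays, dimensions, and tag set are hypothetical stand-ins.

```python
# Minimal probing-classifier sketch (hypothetical data; assumes hidden
# states were already extracted from a frozen model).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
# Stand-ins for per-word hidden states from one layer of, e.g., a
# machine-translation encoder, paired with part-of-speech tags.
train_states = rng.normal(size=(1000, 512))   # (n_words, hidden_dim)
train_tags = rng.integers(0, 12, size=1000)   # 12 hypothetical POS tags
test_states = rng.normal(size=(200, 512))
test_tags = rng.integers(0, 12, size=200)

# The probe: a simple linear classifier over the frozen representations.
# Higher held-out accuracy suggests the layer encodes more tag information.
probe = LogisticRegression(max_iter=1000)
probe.fit(train_states, train_tags)
print("probe accuracy:", accuracy_score(test_tags, probe.predict(test_states)))
```

Likewise, one simple way to study, and improve, robustness to noise is to corrupt inputs with synthetic character-level perturbations and include such noised examples in training. The helper below illustrates one such perturbation (swapping adjacent characters); the function name and noise type are assumptions for illustration, not the talk's specific methods.

```python
import random

def swap_noise(word: str, rng: random.Random) -> str:
    """Swap one pair of adjacent internal characters (a simple synthetic noise)."""
    if len(word) < 4:
        return word
    i = rng.randrange(1, len(word) - 2)  # keep first and last characters fixed
    return word[:i] + word[i + 1] + word[i] + word[i + 2:]

rng = random.Random(0)
print(" ".join(swap_noise(w, rng) for w in "models should tolerate noisy input".split()))
```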
Short Bio:
==========
Yonatan Belinkov is a Postdoctoral Fellow at the Harvard School of
Engineering and Applied Sciences (SEAS) and the MIT Computer Science and
Artificial Intelligence Laboratory (CSAIL). His research interests focus
on interpreting language representations in neural network models, with
applications in machine translation and speech recognition. He received
PhD and SM degrees from MIT in 2018 and 2014, respectively, and prior
to that a BSc in Mathematics and an MA in Arabic Studies, both from Tel
Aviv University.
He received a Harvard Mind Brain Behavior Postdoctoral Fellowship.