Events
The Taub Faculty of Computer Science Events and Talks
Gabriel Stanovsky - CS-Lecture
Sunday, 05.01.2020, 10:30
Recent developments in Natural Language Processing (NLP) allow models to leverage
unprecedented amounts of raw text, yielding impressive performance gains on many of the
field's long-standing challenges, such as machine translation, question answering, and
information retrieval.
In this talk, I will show that despite these advances, state-of-the-art NLP models often
fail to capture crucial aspects of text understanding. Instead, they excel by finding
spurious patterns in the data, which leads to biased and brittle performance. For example,
machine translation models are prone to translating doctors as men and nurses as women,
regardless of context. Next, I will discuss an approach that could help overcome these
challenges by explicitly representing the underlying meaning of texts in formal data
structures. Finally, I will present robust models that use such explicit representations
to effectively identify meaningful patterns in real-world texts, even when training data
is scarce.
Short Bio:
============
Gabriel Stanovsky is a postdoctoral researcher at the University of Washington and the
Allen Institute for AI in Seattle, working with Prof. Luke Zettlemoyer and Prof. Noah
Smith. He completed his Ph.D. with Prof. Ido Dagan at Bar-Ilan University, and his BSc and
MSc at Ben-Gurion University, where he was advised by Prof. Michael Elhadad. He is interested in
developing text-processing models that exhibit facets of human intelligence with benefits
for users in real-world applications. His work has received awards at top-tier conferences
and workshops, including ACL and CoNLL.