Events

Events and Lectures at the Henry and Marilyn Taub Faculty of Computer Science

Neural Networks for Sequences: A Formal View
Gail Weiss (Ph.D. Thesis Seminar)
Wednesday, 19.10.2022, 16:00
Zoom lecture: 99357013274 and Taub 601
Advisors: Prof. Eran Yahav and Prof. Yoav Goldberg
Neural sequence models (NSMs) - neural networks adapted specifically for processing input sequences - have emerged as powerful tools in sequence processing, with the currently most popular architectures being transformers and RNN variants. But what is a trained network really doing? In this talk we will approach this question, starting from what a network *can* do and progressing to what a trained network *has* learned in practice. Specifically, we will begin by discussing the mechanisms that different RNN architectures can implement, and how these affect their ability to express different formal languages. We will then move this discussion to transformers, for which we must introduce RASP - a symbolic abstraction of their behaviour. Finally, we will discuss how a given trained RNN can be converted to a smaller, more interpretable model - and how the process uncovers cases where seemingly perfect networks have not learned their intended task!
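
The abstract mentions RASP only by name. As a rough illustration of what a symbolic, attention-style abstraction of sequence processing can look like, below is a minimal RASP-inspired sketch in plain Python. The primitive names (select, aggregate, selector_width) and the toy tasks are assumptions made for this sketch, not the actual RASP language or the examples presented in the talk.

# RASP-inspired sketch (illustrative only). It mimics two attention-like
# primitives:
#   select(keys, queries, predicate) -> boolean "selector" matrix
#   aggregate(selector, values)      -> per-position average of selected values
# and uses them for two toy sequence tasks.

from typing import Callable, List, Sequence

def select(keys: Sequence, queries: Sequence,
           predicate: Callable[[object, object], bool]) -> List[List[bool]]:
    """Entry [q][k] is True when the predicate holds between query position q
    and key position k (analogous to a hard attention pattern)."""
    return [[predicate(k, q) for k in keys] for q in queries]

def aggregate(selector: List[List[bool]], values: Sequence[float]) -> List[float]:
    """For each query position, average the values at the selected key
    positions (0.0 if nothing is selected), mirroring uniform attention."""
    out = []
    for row in selector:
        picked = [v for sel, v in zip(row, values) if sel]
        out.append(sum(picked) / len(picked) if picked else 0.0)
    return out

def selector_width(selector: List[List[bool]]) -> List[float]:
    """Number of selected keys per query position. (In RASP proper this is
    derived from select/aggregate via a beginning-of-sequence trick; it is
    computed directly here to keep the sketch short.)"""
    return [float(sum(row)) for row in selector]

def histogram(tokens: Sequence[str]) -> List[float]:
    """For every position, how many tokens in the whole sequence equal it."""
    return selector_width(select(tokens, tokens, lambda k, q: k == q))

def prefix_fraction_of(tokens: Sequence[str], target: str) -> List[float]:
    """For every position, the fraction of tokens up to and including it that
    equal `target` -- a simple counting task used here only as an example."""
    idx = range(len(tokens))
    prefix = select(idx, idx, lambda k, q: k <= q)
    return aggregate(prefix, [1.0 if t == target else 0.0 for t in tokens])

if __name__ == "__main__":
    print(histogram("hello"))                # [1.0, 1.0, 2.0, 2.0, 1.0]
    print(prefix_fraction_of("abcab", "a"))  # [1.0, 0.5, 0.33..., 0.5, 0.4]

The point of such an abstraction is that each program fixes an explicit attention pattern and aggregation, so one can reason about which sequence tasks are expressible without training a network at all.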