Events

Events and Lectures at the Henry and Marilyn Taub Faculty of Computer Science

Elia Turner (M.Sc. Seminar Lecture)
Wednesday, 11.11.2020, 14:00
Lecture via Zoom: https://technion.zoom.us/j/96867387806
Advisor: Prof. Omri Barak
RNNs are a class of machine learning models used to solve tasks with sequential structure. Since most neural circuits are recurrent, RNNs are often deployed to explain neural activity during computational tasks with a temporal dimension. Unlike their natural counterparts, an RNN's entire computation is accessible. Yet most studies treat them as black boxes and analyze their generated activity directly. A recent line of work uses RNNs as a hypothesis-generation tool: the activity of a trained RNN is reverse-engineered to reveal a low-dimensional mechanism that may help explain the recorded activity. However, we still lack an understanding of the hypothesis space itself. Recent work (Maheswaranathan et al., 2019) proposed that the solutions to various canonical tasks are, from a topological perspective, largely universal. In our work, we uncover a more complex phenomenon. Analyzing a timing task (Ready-Set-Go) reveals that even identical settings can lead to qualitatively different solutions, from both behavioral and neuronal perspectives. With the goal of understanding the solution space, we cluster the solutions into discrete sets and characterize each. We draw a low-dimensional map of the solution space and sketch the training process as a trajectory within it. We suggest that the effect of updating parameters during training often takes the form of a bifurcation in the governing discrete-time dynamical system, which results in a topological change in the dynamical mechanism and in the behavior. Moreover, we explore the question of nature vs. nurture, i.e., the effect of the initial weights vs. the training set on the final solution, and show that, in our setting, only the former has a meaningful impact on the learned solution.
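
To make the setup concrete, below is a minimal sketch, in PyTorch, of one plausible way to pose the Ready-Set-Go interval-reproduction task and train a vanilla RNN on it. Every specific choice here (the two pulse input channels, the ramp-to-threshold target, the network size, the interval range, and the helper make_trial) is an illustrative assumption, not the configuration used in the talk.

# Minimal sketch (not the talk's actual code): a vanilla RNN trained on
# Ready-Set-Go. A Ready pulse arrives, then a Set pulse t_s steps later;
# the network must reproduce the interval by ramping its output to 1.0
# exactly t_s steps after Set. All task parameters are assumptions.
import torch
import torch.nn as nn

def make_trial(t_s, T=200):
    """One trial: Ready pulse at t=10, Set pulse at 10+t_s,
    target ramps from Set to reach 1.0 at the Go time 10+2*t_s."""
    x = torch.zeros(T, 2)              # channel 0: Ready, channel 1: Set
    y = torch.zeros(T, 1)
    ready, set_ = 10, 10 + t_s
    go = set_ + t_s
    x[ready, 0] = 1.0
    x[set_, 1] = 1.0
    y[set_:go + 1, 0] = torch.linspace(0.0, 1.0, go - set_ + 1)
    return x, y

class VanillaRNN(nn.Module):
    def __init__(self, n_in=2, n_hid=128, n_out=1):
        super().__init__()
        self.rnn = nn.RNN(n_in, n_hid, nonlinearity="tanh", batch_first=True)
        self.readout = nn.Linear(n_hid, n_out)

    def forward(self, x):
        h, _ = self.rnn(x)             # hidden trajectory: the object one
        return self.readout(h), h      # would reverse-engineer afterwards

torch.manual_seed(0)                   # the seed fixes the initial weights
net = VanillaRNN()
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
intervals = [20, 30, 40, 50]           # sample intervals t_s (assumed range)

for step in range(2000):
    xs, ys = zip(*[make_trial(t) for t in intervals])
    x, y = torch.stack(xs), torch.stack(ys)
    y_hat, _ = net(x)
    loss = ((y_hat - y) ** 2).mean()   # mean-squared error to the ramp target
    opt.zero_grad(); loss.backward(); opt.step()

Rerunning this sketch with different values of torch.manual_seed changes only the initial weights. The abstract's nature-vs-nurture claim is that, in their setting, exactly this choice, rather than the training set, determines which qualitative dynamical solution the trained network ends up implementing.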