Events

Events and Lectures at the Henry and Marilyn Taub Faculty of Computer Science

Speaker: Or Hacohen (The Hebrew University of Jerusalem)
Date: Tuesday, 28.07.2020, 11:30
Location: Lecture via Zoom: https://technion.zoom.us/j/93489482905
Abstract:
We report a series of robust empirical observations whereby deep neural networks learn the examples in both the training and test sets in a similar order. This phenomenon is observed in all the commonly used benchmarks we evaluated, including many image classification benchmarks and one text classification benchmark. While the phenomenon is strongest for models of the same architecture, it also crosses architectural boundaries: models of different architectures start by learning the same examples, after which the more powerful model may continue to learn additional examples. We further show that this pattern of results reflects an interplay between the way neural networks learn and the structure of benchmark datasets. Thus, when fixing the architecture, we exhibit synthetic datasets where this pattern ceases to exist; when fixing the dataset, we show that other learning paradigms may learn the data in a different order. We hypothesize that our results echo the way neural networks discover structure in natural datasets.
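To make the notion of "learning order" concrete, here is a minimal sketch (not the speaker's code) of one way such a comparison could be set up: record, for every example, the first epoch at which a model classifies it correctly, then compare two models' orderings with a rank correlation. The two-moons dataset, the tiny MLPs, the training hyperparameters, and the first-correct-epoch proxy are all illustrative assumptions, not details from the talk.

# Illustrative sketch: measure per-example learning order for two models
# and compare the orders with a Spearman rank correlation.
import torch
import torch.nn as nn
from scipy.stats import spearmanr
from sklearn.datasets import make_moons

def train_and_record_order(model, X, y, epochs=50, lr=0.1):
    """Return, per example, the first epoch at which it was predicted correctly."""
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    # Default value `epochs` marks examples never learned during training.
    first_correct = torch.full((len(y),), float(epochs))
    for epoch in range(epochs):
        opt.zero_grad()
        loss_fn(model(X), y).backward()
        opt.step()
        with torch.no_grad():
            correct = model(X).argmax(dim=1) == y
        # Record the epoch only the first time an example becomes correct.
        newly_learned = correct & (first_correct == epochs)
        first_correct[newly_learned] = epoch
    return first_correct

torch.manual_seed(0)
Xn, yn = make_moons(n_samples=500, noise=0.2, random_state=0)
X, y = torch.tensor(Xn, dtype=torch.float32), torch.tensor(yn)

def mlp(width):
    return nn.Sequential(nn.Linear(2, width), nn.ReLU(), nn.Linear(width, 2))

order_a = train_and_record_order(mlp(16), X, y)
order_b = train_and_record_order(mlp(64), X, y)  # different capacity, same data

rho, _ = spearmanr(order_a.numpy(), order_b.numpy())
print(f"Spearman correlation between learning orders: {rho:.2f}")

A more careful study would average over several random initializations and verify that the correlation on real data exceeds what chance (or a shuffled-label control) would give; the snippet only illustrates the bookkeeping behind a learning-order comparison.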

Short Bio:
I completed a bachelor's degree in computer and cognitive sciences at the Hebrew University, then joined the graduate program of the Edmond and Lily Safra Center for Brain Sciences (ELSC). I am currently pursuing my Ph.D. research in Daphna Weinshall's lab, studying deep learning and computer vision.