Events

Events and Talks at the Henry and Marilyn Taub Faculty of Computer Science

Supervision Reduction in Visual Recognition Tasks
Evgenii Zheltonozhskii (M.Sc. Thesis Seminar)
Tuesday, 26.10.2021, 14:30
Zoom Lecture: 5447249519 and Taub 014
Advisors: Prof. A. Mendelson, Prof. A. Bronstein, Dr. C. Baskin
While deep neural networks (DNNs) have shown tremendous success across various computer vision tasks, including image classification, object detection, and semantic segmentation, the need for large numbers of high-quality labels obstructs the adoption of DNNs in real-life problems. Recently, researchers have proposed multiple approaches that relax the requirements on the amount or quality of these labels, or that work in a fully unsupervised way. In a series of works, we study different approaches to supervision reduction in visual recognition tasks: self-supervised learning, learning with noisy labels, and semi-supervised learning.

For self-supervised learning, we show that dimensionality reduction followed by simple k-nearest neighbors clustering is a very strong baseline for fully unsupervised large-scale classification (ImageNet). Additionally, we present a learning-with-noisy-labels framework comprising two stages: self-supervised pre-training and robust fine-tuning. The framework, dubbed "Contrast to Divide" (C2D), significantly outperforms prior art on both synthetic and real-life noise, showing state-of-the-art performance with different methods and pre-training approaches. Furthermore, since self-supervised pre-training is unaffected by label noise, C2D is especially effective in the high-noise regime. Finally, for semi-supervised learning, we propose a simple weighting scheme that reduces confirmation bias among unlabeled samples and, as a result, outperforms existing methods on different datasets and across a wide range of labeled-sample fractions.

The talk will be given in English.
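
As a rough illustration of the unsupervised baseline mentioned above (embeddings from a self-supervised encoder, then dimensionality reduction, then clustering), the sketch below shows one possible pipeline. The specific component choices (PCA, k-means via scikit-learn) and all names are assumptions made for illustration; the abstract does not specify the exact recipe used in the work.

# Minimal sketch of an unsupervised classification baseline:
# self-supervised embeddings -> dimensionality reduction -> clustering.
# Component choices (PCA, k-means) are illustrative assumptions only.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans
from sklearn.metrics import normalized_mutual_info_score

def cluster_embeddings(embeddings: np.ndarray, n_classes: int, n_components: int = 128) -> np.ndarray:
    """Reduce embedding dimensionality, then assign cluster labels."""
    reduced = PCA(n_components=n_components).fit_transform(embeddings)
    return KMeans(n_clusters=n_classes, n_init=10).fit_predict(reduced)

if __name__ == "__main__":
    # Stand-in for features produced by a self-supervised encoder (hypothetical data).
    rng = np.random.default_rng(0)
    feats = rng.normal(size=(1000, 2048))
    true_labels = rng.integers(0, 10, size=1000)  # placeholder ground truth
    pred = cluster_embeddings(feats, n_classes=10)
    # Clustering quality is typically reported with label-agnostic metrics such as NMI.
    print("NMI:", normalized_mutual_info_score(true_labels, pred))

In practice the embeddings would come from a pretrained self-supervised model rather than random data, and the clustering output would be matched to ground-truth classes (e.g., via Hungarian matching) to report accuracy; the snippet only demonstrates the shape of the pipeline.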