
Events and Lectures at the Henry and Marilyn Taub Faculty of Computer Science

Speaker: Mahmood Sharif (Carnegie Mellon University)
Date: Wednesday, 17.06.2020, 10:30
Location: Zoom lecture: https://technion.zoom.us/j/93035147116
In the first part of this two-part talk, I will describe my work in adversarial machine learning (ML). Prior work has demonstrated that ML algorithms are vulnerable to evasion attacks at inference time by so-called adversarial examples. However, the implications of attacks in practice, under real-world constraints, remained largely unknown. To fill the gap, we propose evasion attacks that satisfy multiple objectives (e.g., smoothness and robustness against changes in imaging conditions), and show that these attacks pose a practical threat to ML-based systems. For example, we demonstrate that attackers can realize eyeglasses they can don to evade face recognition or impersonate other individuals. To produce attacks systematically, we develop a framework that can accommodate a wide range of desired objectives, including ones that elude precise specification (e.g., attacks' inconspicuousness). In a complementary line of work, we propose n-ML, a defense that trains an ensemble of n classifiers to classify inputs by a vote. Unlike other ensemble-based approaches, n-ML trains each classifier to classify adversarial examples differently than other classifiers, thus rendering it difficult for adversarial examples to obtain enough votes to be misclassified. We show that n-ML outperforms state-of-the-art defenses in several settings.
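To make the voting mechanism concrete, below is a minimal Python sketch of vote-based inference in the spirit of n-ML. The scikit-learn models, toy dataset, and vote threshold are assumptions made for illustration; the distinctive n-ML training step (pushing each classifier to label adversarial examples differently from its peers) is only noted in a comment rather than implemented.

    # Minimal sketch of voting-based inference in the spirit of n-ML.
    # Assumptions (not from the talk): scikit-learn classifiers, a toy
    # dataset, and a fixed vote threshold. The distinctive n-ML training
    # step (making each classifier label adversarial examples differently
    # from its peers) is omitted for brevity.
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression

    def vote_predict(classifiers, X, threshold):
        # Return the majority label if it receives at least `threshold`
        # votes; otherwise return -1 to reject the input as suspicious
        # (likely adversarial), since no label gathered enough votes.
        votes = np.stack([clf.predict(X) for clf in classifiers])  # shape (n, m)
        preds = []
        for per_sample in votes.T:  # votes for one input across the ensemble
            labels, counts = np.unique(per_sample, return_counts=True)
            top = counts.argmax()
            preds.append(labels[top] if counts[top] >= threshold else -1)
        return np.array(preds)

    X, y = make_classification(n_samples=500, n_features=20, random_state=0)
    n = 5
    rng = np.random.default_rng(0)
    ensemble = []
    for _ in range(n):
        # Each model here just sees a different bootstrap sample; in n-ML,
        # training would additionally force disagreement on adversarial inputs.
        idx = rng.choice(len(X), size=len(X), replace=True)
        ensemble.append(LogisticRegression(max_iter=1000).fit(X[idx], y[idx]))

    print(vote_predict(ensemble, X[:10], threshold=4))

Requiring 4 of 5 votes means an adversarial example must fool most of a deliberately disagreeing ensemble at once, which is the intuition behind the defense.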

In the second part of the talk, I will describe my work in human factors in security and privacy. Understanding users (their behaviors, perceptions, preferences, ...) is helpful for developing user-centered defenses that would be widely adopted and used as intended, thus enhancing users' security and privacy. In my research, I employ methods from data science and social sciences to draw insights from datasets of various sizes and inform how to improve defenses. In particular, I seek to develop more effective, less intrusive defenses by personalizing them to individual users. I present two proof-of-concept defenses that leverage users' behavior, preferences, and available context for personalization: one to predict whether users browsing the Internet would be exposed to malicious content, and another to determine users' comfort with online tracking.
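As a rough illustration of what such personalization might look like, the Python sketch below trains a classifier to estimate a per-user probability of exposure to malicious content. The feature names, synthetic data, and model choice are all hypothetical stand-ins, not the actual proof-of-concept from the talk.

    # Hypothetical sketch of a personalized exposure-risk predictor.
    # Feature names, data, and model are illustrative assumptions only.
    import numpy as np
    from sklearn.ensemble import GradientBoostingClassifier

    rng = np.random.default_rng(1)
    m = 1000
    # Hypothetical per-session features:
    #   0: user's past exposure rate     1: pages visited this session
    #   2: hour of day                   3: fraction of low-reputation domains
    X = np.column_stack([
        rng.beta(2, 20, m),
        rng.poisson(30, m),
        rng.integers(0, 24, m),
        rng.beta(1, 10, m),
    ])
    # Synthetic labels: exposure is more likely for risky histories/behavior.
    logits = 8 * X[:, 0] + 6 * X[:, 3] - 2.5
    y = (rng.random(m) < 1 / (1 + np.exp(-logits))).astype(int)

    model = GradientBoostingClassifier().fit(X, y)
    # Per-user risk scores; high-risk users could receive stricter,
    # and low-risk users less intrusive, protections.
    print(model.predict_proba(X[:5])[:, 1])

The point of the sketch is the output: a per-user risk score lets a defense be tightened only where it is needed, rather than applied uniformly to everyone.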