Events and Talks at the Henry and Marilyn Taub Faculty of Computer Science
Wednesday, 17.04.2019, 11:30
Room 861, Meyer Building, Faculty of Electrical Engineering
Modern machine learning models exhibit super-human accuracy on tasks from image classification to natural-language processing, but accuracy does not tell the entire story of what these models have learned. Does a model memorize and leak its training data? Does it contain hidden backdoor functionality? In this talk, I will explain why common metrics of model quality may hide potential security and privacy vulnerabilities, and outline recent results and open problems at the junction of machine learning and privacy research.
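The gap the abstract points to, between accuracy metrics and what a model has actually memorized, can be illustrated with a small, self-contained experiment. The sketch below is not from the talk; the dataset, model, and confidence threshold are illustrative assumptions. It shows the general shape of a threshold-based membership inference test: an overfitted model assigns systematically higher confidence to its own training points than to unseen points, which an attacker can exploit to guess who was in the training set.

    # Minimal sketch (illustrative assumptions throughout): compare the model's
    # confidence on training points vs. held-out points and use a simple
    # threshold to guess membership in the training set.
    import numpy as np
    from sklearn.datasets import load_breast_cancer
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split

    X, y = load_breast_cancer(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.5, random_state=0
    )

    # A flexible model that tends to memorize its training data.
    model = RandomForestClassifier(n_estimators=100, random_state=0)
    model.fit(X_train, y_train)

    def true_label_confidence(model, X, y):
        """Confidence the model assigns to the true label of each example."""
        proba = model.predict_proba(X)
        return proba[np.arange(len(y)), y]

    train_conf = true_label_confidence(model, X_train, y_train)
    test_conf = true_label_confidence(model, X_test, y_test)

    # Accuracy alone looks reasonable on both splits...
    print("train accuracy:", model.score(X_train, y_train))
    print("test accuracy: ", model.score(X_test, y_test))

    # ...but the confidence gap supports a better-than-chance membership guess:
    # flag "member" whenever confidence on the true label exceeds a threshold.
    threshold = 0.95  # illustrative choice
    member_flag_rate = (train_conf > threshold).mean()      # true positive rate
    nonmember_flag_rate = (test_conf > threshold).mean()     # false positive rate
    print("members flagged:     %.2f" % member_flag_rate)
    print("non-members flagged: %.2f" % nonmember_flag_rate)

If the first rate is much higher than the second, the model is leaking information about which records it was trained on, even though its accuracy numbers reveal nothing unusual.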
Bio:
Vitaly Shmatikov is a professor of computer science at Cornell Tech, where he works on computer security and privacy. His research group received the Caspar Bowden PET Award for Outstanding Research in Privacy Enhancing Technologies in 2008, 2014, and 2018, as well as multiple best-paper awards. Prior to joining Cornell, Vitaly was a faculty member at the University of Texas at Austin and a computer scientist at SRI International.