Chaim Baskin, Ph.D. Thesis Seminar
For the password to the lecture, please contact: firstname.lastname@example.org
Advisors: Prof. Alex Bronstein and Prof. Avi Mendelson
Deep neural networks (DNNs) have become a common tool for solving complex tasks in various fields such as computer vision, natural language processing, and recommendation systems. Despite recent progress in enhancing DNN performance, two major obstacles still hinder the practicality of DNNs in some applications: their energy-expensive deployment on embedded platforms, and their vulnerability to malicious adversarial perturbations. In this talk, I will overview several lines of work tackling different aspects of both problems.

The first line presents two quantization approaches, one training-aware and one post-training, that represent DNN parameters and feature maps in fixed low-bit formats. The second introduces two entropy-coding-based methods for reducing inference-time memory bandwidth requirements: the first requires no fine-tuning, while the second includes a fine-tuning stage and in exchange provides significant further bandwidth reduction with negligible additional complexity or accuracy loss. I will also present a simple framework that helps design efficient hardware for quantized neural networks.

I will then show how quantization techniques can inspire new approaches to coping with adversarial attacks, and demonstrate how an adversarially pre-trained classifier can boost adversarial robustness by smoothing between different levels of input noise. Finally, I will introduce a simple single-node, minimal-attribute-change perturbation that can attack social-graph-based DNNs in a significantly more harmful way than the previously studied edge-based attacks.
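As a rough illustration of the kind of low-bit representation discussed above, here is a minimal sketch of generic symmetric uniform post-training quantization of a weight tensor. This is a textbook scheme for context only, not the specific methods presented in the talk; the function names `quantize_uniform` and `dequantize` are hypothetical.

```python
import numpy as np

def quantize_uniform(x, num_bits=4):
    """Map a float tensor to signed num_bits integer codes plus a scale
    (generic symmetric uniform scheme, for illustration only)."""
    qmax = 2 ** (num_bits - 1) - 1            # e.g. 7 for 4-bit signed
    scale = np.abs(x).max() / qmax            # spread the dynamic range over [-qmax, qmax]
    q = np.clip(np.round(x / scale), -qmax - 1, qmax).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover an approximate float tensor from the integer codes."""
    return q.astype(np.float32) * scale

# Quantize random "weights" to 4 bits and measure the reconstruction error.
rng = np.random.default_rng(0)
w = rng.standard_normal((64, 64)).astype(np.float32)
q, scale = quantize_uniform(w, num_bits=4)
w_hat = dequantize(q, scale)
max_err = float(np.abs(w - w_hat).max())      # bounded by scale / 2 (rounding)
```

Storing `q` instead of `w` cuts the memory footprint of this tensor from 32 bits to 4 bits per element; the training-aware variants mentioned in the talk additionally adapt the network to this representation during training.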