# Colloquia and Seminars

To join the Computer Science colloquium mailing list, please visit the list's subscription page. The Computer Science events calendar is available in ICS format over HTTP, for Google Calendar and for Outlook.

The academic calendar is available on the Technion site.

- Bioinformatics Forum
- BizTEC Forum
- ceClub
- CGGC Weekly Seminar
- Coding Theory Seminar
- Colloquia
- Haifux, Haifa Linux Club
- Hardware Security Seminar
- Pixel Club
- Theory Seminar

## Upcoming Colloquia and Seminars

### Pixel Club: Computational Optimal Transport for Data Sciences

- Speaker:
- Gabriel Peyré (CNRS)
- Date:
- Tuesday, 18.12.2018, 11:30
- Location:
- Room 337, Taub Building for Computer Science

Optimal transport (OT) has become a fundamental mathematical tool at the interface between calculus of variations, partial differential equations and probability. It took, however, much longer for this notion to become mainstream in numerical applications. This situation is in large part due to the high computational cost of the underlying optimization problems. There is a recent wave of activity on the use of OT-related methods in fields as diverse as computer vision, computer graphics, statistical inference, machine learning and image processing. In this talk, I will review an emerging class of numerical approaches for the approximate resolution of OT-based optimization problems. This offers a new perspective for the application of OT in imaging sciences (to perform color transfer or shape and texture morphing) and machine learning (to perform clustering, classification and generative modeling in deep learning). More information and references can be found on the website of our book "Computational Optimal Transport": https://optimaltransport.github.io/
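One of the approximate solvers the abstract alludes to is the entropic-regularized Sinkhorn scheme. Below is a minimal numpy sketch (my own toy construction, not material from the talk): alternating row/column scalings of a Gibbs kernel produce a transport plan whose marginals approach the two input histograms.

```python
import numpy as np

def sinkhorn(a, b, C, eps=0.5, n_iter=500):
    """Entropic-regularized optimal transport via Sinkhorn iterations.
    a, b: source/target histograms (each sums to 1); C: cost matrix.
    Returns a transport plan whose marginals approximate a and b."""
    K = np.exp(-C / eps)          # Gibbs kernel
    u = np.ones_like(a)
    v = np.ones_like(b)
    for _ in range(n_iter):
        v = b / (K.T @ u)         # scale columns toward marginal b
        u = a / (K @ v)           # scale rows toward marginal a
    return u[:, None] * K * v[None, :]

# toy example: move mass between two 3-bin histograms on a line
a = np.array([0.5, 0.3, 0.2])
b = np.array([0.2, 0.3, 0.5])
C = np.abs(np.arange(3)[:, None] - np.arange(3)[None, :]).astype(float)
P = sinkhorn(a, b, C)
```

The regularization strength `eps` trades off accuracy of the plan against convergence speed; smaller values approximate unregularized OT more closely but need more iterations.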


### Towards Interpretable Deep Learning for Natural Language Processing

- Speaker:
- Roy Schwartz - CS Lecture
- Date:
- Thursday, 20.12.2018, 10:30
- Location:
- Room 337, Taub Building.
- Affiliation:
- University of Washington and the Allen Institute for AI

Despite their superb empirical performance, deep learning models for natural language processing (NLP) are often considered black boxes, as relatively little is known about what accounts for their success. This lack of understanding turns model development into a slow and expensive trial-and-error process, which keeps many researchers from developing state-of-the-art models. Users of deep learning also suffer from this lack of understanding, because they rely on tools that they cannot interpret. In this talk I will show that many deep learning models are much more understandable than originally thought. I will present links between several deep learning models and a class of classical NLP models: weighted finite-state automata. As the theory behind the latter is well studied, these findings allow for the development of more interpretable and better-performing NLP models. As a case study, I will focus on convolutional neural networks (ConvNets), one of the most widely used deep models in NLP. I will show that ConvNets are mathematically equivalent to a simple, linear-chain weighted finite-state automaton. By uncovering this link, I will present an extension of ConvNets that is both more robust and more interpretable than the original model. I will then present similar observations regarding six recently introduced recurrent neural network (RNN) models, demonstrating the empirical benefits of these findings for the performance of NLP systems. This is joint work with Hao Peng, Sam Thomson and Noah A. Smith.

Short Bio:

Roy Schwartz is a postdoctoral researcher at the University of Washington and the Allen Institute for AI. Roy's research focuses on improving deep learning models for natural language processing by gaining mathematical and linguistic understanding of these models. He received his Ph.D. and M.Sc. in Computer Science and his B.Sc. in Computer Science and Cognitive Science from the Hebrew University. Roy has won a best paper award at RepL4NLP 2018, as well as a Hoffman leadership and responsibility fellowship.
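The ConvNet-automaton link in the abstract above can be illustrated with a toy computation (my own construction, not the talk's): the max-pooled score of a width-k convolution filter over a sequence equals the Viterbi (max-plus) score of a (k+1)-state linear-chain automaton with free self-loops on its start and accept states.

```python
import numpy as np

def convnet_maxpool(x, w):
    """Width-k 1-D convolution over word vectors, then max-pooling."""
    k = w.shape[0]
    scores = [np.sum(x[i:i + k] * w) for i in range(len(x) - k + 1)]
    return max(scores)

def wfsa_viterbi(x, w):
    """Viterbi score of a linear-chain automaton with k+1 states:
    edge j -> j+1 scores the current token as w[j] . x[t], and the
    start and accept states carry free self-loops."""
    k = w.shape[0]
    alpha = np.full(k + 1, -np.inf)
    alpha[0] = 0.0
    best = -np.inf
    for xi in x:
        new = np.full(k + 1, -np.inf)
        new[0] = 0.0                           # free self-loop on the start state
        for j in range(k):
            new[j + 1] = alpha[j] + w[j] @ xi  # consume xi on edge j -> j+1
        best = max(best, new[k])               # free self-loop on the accept state
        alpha = new
    return best

rng = np.random.default_rng(0)
x = rng.normal(size=(6, 3))   # a "sentence" of 6 word vectors
w = rng.normal(size=(2, 3))   # a bigram filter
```

Every accepting path stays at the start state until some position i and then consumes k consecutive tokens, so maximizing over paths is exactly maximizing over convolution windows.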

### Predicting a Better Future for Asynchronous SGD with DANA

- Speaker:
- Ido Hakimi, M.Sc. seminar lecture
- Date:
- Monday, 24.12.2018, 11:00
- Location:
- Taub 601
- Advisor:
- Prof. A. Schuster

Distributed training can significantly reduce the training time of neural networks. Despite its potential, however, distributed training has not been widely adopted due to the difficulty of scaling the training process. Existing methods suffer from slow convergence and low final accuracy when scaling to large clusters, and often require substantial re-tuning of hyper-parameters. We propose DANA, a novel approach that scales to large clusters while maintaining final accuracy and convergence speed similar to those of a single worker. DANA estimates the future value of model parameters by adapting Nesterov Accelerated Gradient to a distributed setting, and so mitigates the effect of gradient staleness, one of the main difficulties in scaling SGD to more workers. Evaluation on three state-of-the-art network architectures and three datasets shows that DANA scales as well as or better than existing work without having to tune any hyperparameters or tweak the learning schedule. For example, DANA achieves 75.73% accuracy on ImageNet when training ResNet-50 with 16 workers, similar to the non-distributed baseline.
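The look-ahead idea can be sketched as follows. This is my own serial, hedged illustration, not the authors' algorithm: the staleness model (`n_workers - 1` pending momentum updates) and all hyper-parameters are assumptions. A worker evaluates its gradient at a Nesterov-style extrapolation of the master parameters rather than at their current value.

```python
import numpy as np

def lookahead_momentum_sgd(grad_fn, theta0, n_steps, n_workers,
                           lr=0.1, momentum=0.9):
    """Sketch of staleness mitigation a la DANA: gradients are computed
    at an extrapolated ("future") parameter estimate. Serial simulation;
    n_workers only sets how far ahead we extrapolate (assumed
    n_workers - 1 pending momentum updates from other workers)."""
    theta = theta0.astype(float).copy()
    v = np.zeros_like(theta)
    for _ in range(n_steps):
        # estimate where the master will be once pending updates land
        future = theta + momentum * (n_workers - 1) * v
        g = grad_fn(future)
        v = momentum * v - lr * g
        theta = theta + v
    return theta

# minimizing f(theta) = ||theta||^2 / 2, whose gradient is theta itself
theta = lookahead_momentum_sgd(lambda t: t, np.array([1.0]),
                               n_steps=300, n_workers=4)
```

With `n_workers=1` the extrapolation vanishes and this reduces to plain momentum SGD.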

### COLLOQUIUM LECTURE - Toward human-centered programming language design

- Speaker:
- Joshua Sunshine
- Date:
- Tuesday, 1.1.2019, 14:30
- Location:
- Room 337, Taub Building.
- Affiliation:
- Institute for Software Research at Carnegie Mellon University
- Host:
- Roy Schwartz

Programming languages are a tool for human thought, expression, and work, yet they are principally designed using mathematical and engineering techniques. In this talk, I will describe how our group has applied human-centered design techniques --- interviews, participatory design exercises, and qualitative analysis of developer forums --- in the design of three research programming systems (Plaid, Glacier, and Obsidian). I will speak frankly about the strengths and weaknesses of these approaches and discuss speculative new techniques.

Short Bio:

Joshua Sunshine is a Systems Scientist in the Institute for Software Research at Carnegie Mellon University. He has broad research interests at the intersection of programming languages and software engineering. He is particularly interested in better understanding the factors that influence the usability of reusable software components. He completed his Ph.D. in Software Engineering from Carnegie Mellon in December 2013. His dissertation focused on the usability of software libraries with ordering constraints (API protocols). He was advised by Jonathan Aldrich. He graduated from Brandeis University in 2004 and worked for almost four years as a software engineer before starting graduate school.

Refreshments will be served from 14:15. The lecture starts at 14:30.

### On Optimization and Expressiveness in Deep Learning

- Speaker:
- Nadav Cohen - CS Lecture
- Date:
- Thursday, 3.1.2019, 10:30
- Location:
- Room 337, Taub Building.
- Affiliation:
- School of Mathematics, the Institute for Advanced Study

Understanding deep learning calls for addressing three fundamental questions: expressiveness, optimization and generalization. Expressiveness refers to the ability of compactly sized deep neural networks to represent functions capable of solving real-world problems. Optimization concerns the effectiveness of simple gradient-based algorithms in solving non-convex neural network training programs. Generalization treats the phenomenon of deep learning models not overfitting despite having many more parameters than examples to learn from. This talk will describe a series of works aimed at unraveling some of the mysteries behind optimization and expressiveness. I will begin by discussing recent analyses of optimization for deep linear neural networks. By studying the trajectories of gradient descent, we will derive the most general guarantee to date for efficient convergence to global minimum of a gradient-based algorithm training a deep network. Moreover, in stark contrast to conventional wisdom, we will see that, sometimes, gradient descent can train a deep linear network faster than a classic linear model. In other words, depth can accelerate optimization, even without any gain in expressiveness, and despite introducing non-convexity to a formerly convex problem. In the second (shorter) part of the talk, I will present an equivalence between convolutional and recurrent networks --- the most successful deep learning architectures to date --- and hierarchical tensor decompositions. The equivalence brings forth answers to various questions concerning expressiveness, resulting in new theoretically-backed tools for deep network design. Optimization works covered in this talk were in collaboration with Sanjeev Arora, Elad Hazan, Noah Golowich and Wei Hu. Expressiveness works were with Amnon Shashua, Or Sharir, Yoav Levine, Ronen Tamari and David Yakira.

Bio:

Nadav Cohen is a postdoctoral member at the School of Mathematics in the Institute for Advanced Study. His research focuses on the theoretical and algorithmic foundations of deep learning. In particular, he is interested in mathematically analyzing aspects of expressiveness, optimization and generalization, with the goal of deriving theoretically founded procedures and algorithms that will improve practical performance. Nadav earned his PhD at the School of Computer Science and Engineering in the Hebrew University of Jerusalem, under the supervision of Prof. Amnon Shashua. Prior to that, he obtained a BSc in electrical engineering and a BSc in mathematics (both summa cum laude) at the Technion Excellence Program for distinguished undergraduates. For his contributions to the theoretical understanding of deep learning, Nadav received a number of awards, including the Google Doctoral Fellowship in Machine Learning, the Rothschild Postdoctoral Fellowship, and the Zuckerman Postdoctoral Fellowship.
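The abstract's claim that gradient descent reaches a global minimum of a deep linear network despite non-convexity can be seen in miniature. The following toy is my own, for intuition only: a depth-2 scalar "network" w = w2·w1 under a convex squared loss is non-convex in the factors (w1, w2), yet gradient descent from a balanced initialization converges to the global minimum.

```python
# toy gradient descent on a depth-2 linear "network" w = w2 * w1
# minimizing the convex loss L(w) = (w - 3)^2 / 2 in the factor space
target = 3.0
w1, w2 = 0.5, 0.5              # balanced initialization (w1 == w2)
lr = 0.05
for _ in range(2000):
    dL = w2 * w1 - target      # dL/dw of the squared loss
    g1, g2 = dL * w2, dL * w1  # chain rule through the product
    w1, w2 = w1 - lr * g1, w2 - lr * g2
```

Balanced initialization keeps w1 = w2 throughout, and the product climbs monotonically to the target; this only illustrates convergence, not the acceleration phenomenon the talk analyzes.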

### Deep Anomaly Detection Using Geometric Transformations

- Speaker:
- Izhak Golan, M.Sc. seminar lecture
- Date:
- Monday, 7.1.2019, 11:30
- Location:
- Taub 601
- Advisor:
- Prof. R. El-Yaniv

We consider the problem of anomaly detection in images, and present a new detection technique. Given a sample of images, all known to belong to a "normal" class (e.g., dogs), we show how to train a deep neural model that can detect out-of-distribution images (i.e., non-dog objects). The main idea behind our scheme is to train a multi-class model to discriminate between dozens of geometric transformations applied on all the given images. The auxiliary expertise learned by the model generates feature detectors that effectively identify, at test time, anomalous images based on the softmax activation statistics of the model when applied on transformed images. We present extensive experiments using the proposed detector, which indicate that our algorithm improves state-of-the-art methods by a wide margin.
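The scoring scheme can be sketched as follows. This is a minimal stand-in of my own: only the 4 rotations in place of the paper's larger transformation set, and a toy template-matching "classifier" in place of a trained network.

```python
import numpy as np

def transformations(img):
    """The 4 rotations, as a minimal stand-in for a richer set of
    geometric transformations (rotations x flips x translations)."""
    return [np.rot90(img, k) for k in range(4)]

def normality_score(img, predict_proba):
    """Average softmax probability the classifier assigns to the
    *correct* transformation label; low scores flag anomalies."""
    t_imgs = transformations(img)
    return float(np.mean([predict_proba(t)[k] for k, t in enumerate(t_imgs)]))

ref = np.arange(9, dtype=float).reshape(3, 3)   # a "normal" image
refs = transformations(ref)

def toy_predict_proba(t_img):
    # stand-in "classifier": softmax over similarity to each rotated template
    sims = np.array([-np.sum((t_img - r) ** 2) for r in refs])
    e = np.exp(sims - sims.max())
    return e / e.sum()

anomaly = np.full((3, 3), 4.0)   # an out-of-distribution image
```

The toy classifier confidently recovers the rotation of the normal image but is maximally uncertain on the constant anomaly, so the normality score separates the two.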

### COLLOQUIUM LECTURE - Consolidating and Exploring Open Textual Knowledge

- Speaker:
- Ido Dagan
- Date:
- Tuesday, 15.1.2019, 14:30
- Location:
- Room 337, Taub Building.
- Affiliation:
- Department of Computer Science, Bar-Ilan University
- Host:
- Roy Schwartz

Abstract: TBA

Short Bio:

Ido Dagan holds B.Sc. (summa cum laude) and Ph.D. degrees in Computer Science from the Technion, Israel. He conducted his Ph.D. research in collaboration with the IBM Haifa Scientific Center, where he was a research fellow in 1991. During 1992-1994 he was a Member of Technical Staff at AT&T Bell Laboratories. During 1994-1998 he was at the Department of Computer Science of Bar-Ilan University, to which he returned in 2003. During 1998-2003 he was co-founder and CTO of a text categorization startup company, FocusEngine, and VP of Technology at LingoMotors, a Cambridge, Massachusetts company which acquired FocusEngine.

### Applying Machine Learning for Identifying Attacks at Run-time

- Speaker:
- Nurit Dvir, M.Sc. seminar lecture
- Date:
- Monday, 28.1.2019, 10:00
- Location:
- Taub 601
- Advisor:
- Prof. O. Grumberg and Prof. S. Markovitch

With the increase in malicious activity over the Internet, it has become extremely important to build tools for automatic detection of such activity. There have been attempts to use machine learning to detect network attacks, but the difficulty of obtaining positive (attack) examples has led to the use of one-class methods for anomaly detection. In this work we present a novel framework that uses multiclass learning to induce an attack detector that identifies attacks at run time.

We designed a network simulator that is used to produce network activity. The simulator includes an attacker that stochastically violates the normal activity, yielding positive as well as negative examples. We have also designed a set of features that withstand changes in the network topology. Given the set of tagged feature vectors, we can then apply a learning algorithm to produce a multiclass attack detector. Our framework allows the user to define a cost matrix specifying the cost of each type of detection error (predicting some value for a run when its real tag is another value). We tested our framework in a wide variety of network topologies and in different setups, including transfer learning and dynamic networks. In addition, we address how to choose the router(s) that will act as monitor(s) and predict the label of a run. The presented framework will enable any organization to defend itself with an attack detector that is automatically adapted to its particular setting.
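The cost-matrix mechanism can be sketched as a minimum-expected-cost decision rule (my own illustration of the general idea; the framework's actual cost handling may differ). Given class probabilities from any multiclass detector, the predicted label is the one minimizing expected cost under the user's matrix.

```python
import numpy as np

def min_expected_cost_label(proba, cost):
    """Choose the prediction minimizing expected cost, where cost[i, j]
    is the cost of predicting class j when the true class is i."""
    return int(np.argmin(proba @ cost))

# toy cost matrix for labels (0: benign, 1: attack): missing an attack
# costs 10x more than raising a false alarm
cost = np.array([[0.0, 1.0],
                 [10.0, 0.0]])
# even at only P(attack) = 0.2, expected costs are [2.0, 0.8],
# so the cost-sensitive rule flags an attack
label = min_expected_cost_label(np.array([0.8, 0.2]), cost)
```

Skewing the matrix toward false negatives makes the detector flag runs even at modest attack probability, which matches the security setting where missed attacks are far costlier than false alarms.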