Colloquia and Seminars

To join the email distribution list of the CS colloquia, please visit the list subscription page.

The Computer Science events calendar is available in ICS format for Google Calendar and for Outlook.
The Academic Calendar is available on the Technion site.

Upcoming Colloquia & Seminars

Theory Seminar: Deterministic Online Embedding of Metrics
Ilan Newman (University of Haifa)
Wednesday, 20.03.2024, 13:15
Taub 201
A finite metric space $(X,d)$ on a set of points $X$ is simply the shortest-path metric $d$ of a positively weighted graph $G=(X,E)$. In the online setting, the vertices of the input finite metric space $(X,d)$ are exposed one by one, together with their distances $d(\cdot,\cdot)$ to the previously exposed vertices. The goal is to embed (map) $X$ into a given host metric space $(H,d_H)$ (finite or not) while distorting the distances as little as possible (the distortion is the worst-case ratio by which a distance is expanded, assuming distances are never contracted).
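
To fix notation (a standard formalization, not taken verbatim from the abstract): for a non-contracting embedding $f : X \to H$, i.e. one satisfying $d_H(f(x),f(y)) \ge d(x,y)$ for all $x,y \in X$, the distortion is $\max_{x \neq y} d_H(f(x),f(y)) / d(x,y)$. The online constraint is that the image $f(x)$ of each exposed point must be fixed immediately, using only the distances seen so far, and can never be revised.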

I will start with a short survey of the main existing results on offline embedding into $\ell_1$, $\ell_2$, and $\ell_\infty$ spaces. I will then present some results on online embedding: mainly lower bounds (on the best achievable distortion) for low-dimensional spaces, and some upper bounds for 2-dimensional Euclidean space.

As an intriguing question, consider the "rigid" $K_5$ metric space: a metric space on $G=K_5$, in which each edge should be thought of as a unit interval $[0,1]$. The points are the $5$ fixed vertices of $K_5$ (which are exposed first), together with $n$ points placed arbitrarily anywhere along the edges and exposed one by one. It is easy to show that an offline embedding into the $2$-dimensional Euclidean plane incurs distortion $\Omega(n)$. What can be achieved in the online case? It was "believed" that an exponential lower bound on the distortion could be proven. We show that the distortion is bounded by $O(n^2)$.
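
For concreteness (my notation, not the abstract's): a point $p$ at position $t \in [0,1]$ along edge $\{u,v\}$ of $K_5$ satisfies $d(p,u) = t$ and $d(p,v) = 1-t$, and all other distances are the shortest-path distances in the resulting subdivided graph.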

The talk is based on joint work with Noam Licht and Yuri Rabinovich.
Pixel-Club: Explaining Classification by Image Decomposition
Elnatan Kadar (Graduate Seminar)
Wednesday, 20.03.2024, 14:30
Room 1061, EE Meyer Building
We propose a new way to explain and visualize neural network classification through decomposition-based explainable AI. Instead of providing an explanation heatmap, our method yields a decomposition of the image into class-agnostic and class-distinct parts, with respect to the data and the chosen classifier. Following the fundamental signal-processing paradigm of analysis and synthesis, the original image is the sum of the decomposed parts. We thus obtain a radically different way of explaining classification. The class-agnostic part ideally consists of all image features that carry no class information, while the class-distinct part is its complement. This new visualization can be more helpful and informative in certain scenarios, especially when the attributes are dense, global, and additive in nature, for instance, when colors or textures are essential for class distinction.

Elnatan Kadar is an M.Sc. student under the supervision of Prof. Guy Gilboa.
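
As a minimal sketch of the analysis/synthesis idea described above (an illustration, not the authors' implementation), suppose a hypothetical "decompose" model splits an image into parts that sum back to the original; we can then check how much class evidence each part retains under any pretrained classifier:

import torch

def check_decomposition(decompose, classifier, x: torch.Tensor, target: int):
    # x is a batched image tensor of shape [1, C, H, W].
    # Analysis: split the image into class-agnostic and class-distinct parts.
    x_agnostic, x_distinct = decompose(x)
    # Synthesis: the two parts must sum back to the original image.
    assert torch.allclose(x_agnostic + x_distinct, x, atol=1e-4)
    with torch.no_grad():
        # Ideally the class-agnostic part carries little class evidence...
        p_agnostic = classifier(x_agnostic).softmax(-1)[0, target]
        # ...while the class-distinct part retains it.
        p_distinct = classifier(x_distinct).softmax(-1)[0, target]
    return p_agnostic.item(), p_distinct.item()
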
The Altru-Egoistic Approach to Collaborative Caching
event speaker icon
Amir Dachbash (M.Sc. Thesis Seminar)
Thursday, 21.03.2024, 10:30
Zoom Lecture: 93387090038
Advisor: Prof. Roy Friedman
In this lecture we will explore collaborative caching algorithms that boost the effectiveness of caches in a distributed storage system. I will introduce a scheme that partitions each node's cache into two conceptual regions: an egoistic region, whose goal is to hold the data most valuable to the node that owns the cache, and an altruistic region, whose goal is to hold the data most valuable to the system as a whole. Each node's division between these two regions is adjusted dynamically and locally.

We introduce a family of algorithms that analyze cross-node statistics to decide how much memory to allocate to each region in each node. We study the behavior of these algorithms through simulations of both synthetic workloads and multiple real traces from several sources. These simulations demonstrate the improvement in cache hit ratio for the entire system compared to state-of-the-art approaches.
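
As a toy sketch of the two-region idea (one reading of the abstract, not the thesis code), each node could keep an egoistic LRU region for its own hot items and an altruistic region for system-wide hot items, shifting capacity toward whichever region serves more hits:

from collections import OrderedDict

class AltruEgoCache:
    def __init__(self, capacity: int, ego_fraction: float = 0.5):
        self.capacity = capacity
        self.ego_size = int(capacity * ego_fraction)
        self.ego = OrderedDict()    # most valuable for this node
        self.altru = OrderedDict()  # most valuable for the system as a whole

    def get(self, key):
        for region in (self.ego, self.altru):
            if key in region:
                region.move_to_end(key)  # LRU bookkeeping
                return region[key]
        return None

    def put_local(self, key, value):
        self._put(self.ego, key, value, self.ego_size)

    def put_global(self, key, value):
        self._put(self.altru, key, value, self.capacity - self.ego_size)

    def _put(self, region, key, value, limit):
        region[key] = value
        region.move_to_end(key)
        while len(region) > limit:
            region.popitem(last=False)  # evict the least recently used item

    def rebalance(self, local_hits: int, global_hits: int):
        # Hypothetical local rule: shift capacity toward the region that
        # served more hits since the last adjustment.
        total = local_hits + global_hits
        if total:
            self.ego_size = int(self.capacity * local_hits / total)
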
Marrying Vision and Language: A Mutually Beneficial Relationship?
Hadar Averbuch-Elor (Tel Aviv University)
Tuesday, 02.04.2024, 14:30
Taub 337
Foundation models that connect vision and language have recently shown great promise for a wide array of tasks such as text-to-image generation. Significant attention has been devoted to utilizing the visual representations learned by these powerful vision and language models. In this talk, I will present an ongoing line of research that focuses on the other direction, aiming to understand what knowledge language models acquire through exposure to images during pretraining. We first consider in-distribution text and demonstrate how multimodally trained text encoders, such as that of CLIP, outperform models trained in a unimodal vacuum, such as BERT, on tasks that require implicit visual reasoning. Expanding to out-of-distribution text, we address a phenomenon known as sound symbolism, which studies non-trivial correlations between particular sounds and meanings across languages, and demonstrate the presence of this phenomenon in vision and language models such as CLIP and Stable Diffusion. Our work provides new angles for understanding what is learned by these vision and language foundation models, offering principled guidelines for designing models for tasks involving visual reasoning.
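
In the spirit of that comparison, here is a hedged probe (an illustrative example, not the paper's benchmark): embed color-association queries with CLIP's text encoder and with BERT, both standard Hugging Face checkpoints, and compare cosine similarities:

import torch
from transformers import CLIPTokenizer, CLIPTextModel, BertTokenizer, BertModel

def clip_embed(texts):
    tok = CLIPTokenizer.from_pretrained("openai/clip-vit-base-patch32")
    model = CLIPTextModel.from_pretrained("openai/clip-vit-base-patch32")
    with torch.no_grad():
        out = model(**tok(texts, padding=True, return_tensors="pt"))
    return out.pooler_output  # one pooled vector per sentence

def bert_embed(texts):
    tok = BertTokenizer.from_pretrained("bert-base-uncased")
    model = BertModel.from_pretrained("bert-base-uncased")
    with torch.no_grad():
        out = model(**tok(texts, padding=True, return_tensors="pt"))
    return out.last_hidden_state[:, 0]  # [CLS] token embedding

# Does the encoder place "a banana" closer to yellow than to purple?
for embed in (clip_embed, bert_embed):
    v = embed(["a banana", "the color yellow", "the color purple"])
    sims = torch.cosine_similarity(v[0:1], v[1:], dim=-1)
    print(embed.__name__, sims.tolist())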

Short Bio: Hadar Averbuch-Elor is an Assistant Professor at the School of Electrical Engineering at Tel Aviv University. Before that, Hadar was a postdoctoral researcher at Cornell Tech, working with Noah Snavely. She completed her PhD in Electrical Engineering at Tel Aviv University, where she was advised by Daniel Cohen-Or. Hadar is a recipient of multiple awards, including the Zuckerman Postdoctoral Scholar Fellowship, the Schmidt Postdoctoral Award for Women in Mathematical and Computing Sciences, and the Alon Scholarship. She was also selected as a Rising Star in EECS by UC Berkeley. Hadar's research interests lie at the intersection of computer graphics and computer vision, particularly in combining pixels with more structured modalities, such as natural language and 3D geometry.
Learning with visual foundation models for Gen AI
Gal Chechik, Bar-Ilan University and NVIDIA
Thursday, 04.04.2024, 10:30
Taub 337
Between training and inference lies a growing class of AI problems that involve fast optimization of a pre-trained model for a specific inference task. These are not pure “feed-forward” inference problems applied to a pre-trained model, because they involve some non-trivial inference-time optimization beyond what the model was trained for; neither are they training problems, because they focus on a specific input. These compute-heavy inference workflows raise new challenges in machine learning and open opportunities for new types of user experiences and use cases.

In this talk, I will describe two main flavors of these new workflows in the context of text-to-image generative models: few-shot fine-tuning and inference-time optimization. I will cover personalization of vision-language models using textual-inversion techniques, as well as methods for model inversion, prompt-to-image alignment, and consistent generation. I will also discuss the generation of rare classes, and future directions.
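
As a schematic sketch of the textual-inversion flavor (a simplification under stated assumptions, not NVIDIA's implementation; "frozen_generator" and "reconstruction_loss" are placeholders): the generative model stays frozen, and the only trainable parameter is the embedding of a new pseudo-token learned from a handful of example images:

import torch

def learn_concept_embedding(frozen_generator, reconstruction_loss,
                            example_images, embed_dim=768, steps=500):
    # The single trainable parameter: the embedding of a new pseudo-token.
    token_embedding = torch.randn(embed_dim, requires_grad=True)
    optimizer = torch.optim.Adam([token_embedding], lr=5e-3)
    for _ in range(steps):
        for image in example_images:
            # Condition the frozen model on the learnable token and measure
            # how well it reproduces the example (e.g., a denoising loss).
            output = frozen_generator(token_embedding)
            loss = reconstruction_loss(output, image)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
    return token_embedding  # used in prompts to denote the new concept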

Short Bio: Gal Chechik is a Professor of computer science at Bar-Ilan University and a senior director of AI research at NVIDIA. His current research focuses on learning for reasoning and perception. In 2018, Gal joined NVIDIA to found and head NVIDIA's research lab in Israel. Prior to that, Gal was a staff research scientist at Google Brain and Google Research, developing large-scale algorithms for machine perception that are used by millions daily. Gal earned his PhD in 2004 from the Hebrew University and completed his postdoctoral training at the Stanford CS department. He has authored ~130 refereed publications and ~50 patents, including publications in Nature Biotechnology, Cell, and PNAS. His work has won awards at ICML and NeurIPS.