To join the email distribution list of the CS colloquia, please visit the list subscription page.
Computer Science events calendar in HTTP ICS format, for Google Calendar and for Outlook.
Academic Calendar at the Technion site.
Auditorium 012, Floor 0
As networks become more programmable, they are increasingly built around flexible software components. While this programmability enables new functionality and faster innovation, it also makes network behavior harder to reason about. In this talk, I will present a research agenda that brings ideas from formal methods to programmable networks. In particular, I will present techniques that leverage programmable-network semantics for concurrency safety, traffic monitoring, and failure recovery. More broadly, this work illustrates how semantic foundations can help bring stronger correctness guarantees to modern networked systems.
Bio: Guy Amir is a Postdoctoral Researcher at Cornell University, conducting research at the intersection of formal methods, networking, and systems. He earned his Ph.D. in 2024 from the Hebrew University of Jerusalem, where he studied AI safety, focusing on formally verifying reactive AI systems and interpreting neural networks. He holds an M.Sc. in Computer Science and a B.Sc. in Computational Biology and Computer Science, both from the Hebrew University. He has received Rothschild, Fulbright, AI-Net, and Charles Clore fellowships, as well as an ICML Spotlight and a KLA Award.
506, Zisapel Building
Joint Alignment (JA) aims to align a collection of images into a shared coordinate frame such that semantically corresponding features coincide spatially. Despite its importance in many vision applications, existing JA methods often rely on heavy optimization pipelines, large-capacity models, and extensive hyperparameter tuning, leading to long training times and limited scalability.
This talk presents FastJAM, a fast and lightweight joint alignment framework that reframes JA as a graph-based learning problem over sparse keypoints. FastJAM leverages pairwise correspondences from an off-the-shelf matcher and a graph neural network to efficiently predict per-image homography transformations, achieving state-of-the-art alignment quality while reducing runtime from minutes or hours to just seconds.
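The per-image transform FastJAM predicts is a standard 3x3 homography acting on points in homogeneous coordinates. As an illustrative sketch (plain Python, with made-up matrices; not FastJAM's actual pipeline), applying such a transform to 2D keypoints looks like:

```python
# Illustrative sketch: applying a 3x3 homography H to 2D keypoints,
# as in the per-image transforms FastJAM predicts (values are hypothetical).

def apply_homography(H, points):
    """Map 2D points through homography H via homogeneous coordinates."""
    out = []
    for x, y in points:
        # Lift (x, y) to (x, y, 1) and multiply by H.
        xh = H[0][0] * x + H[0][1] * y + H[0][2]
        yh = H[1][0] * x + H[1][1] * y + H[1][2]
        w  = H[2][0] * x + H[2][1] * y + H[2][2]
        out.append((xh / w, yh / w))  # project back to the image plane
    return out

# The identity homography leaves keypoints unchanged.
identity = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
print(apply_homography(identity, [(10.0, 20.0)]))  # [(10.0, 20.0)]

# A pure translation by (5, -3).
shift = [[1, 0, 5], [0, 1, -3], [0, 0, 1]]
print(apply_homography(shift, [(10.0, 20.0)]))  # [(15.0, 17.0)]
```

Because the bottom row of a general homography need not be (0, 0, 1), the division by w is what distinguishes projective warps from affine ones.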
Link to Paper: https://bgu-cs-vil.github.io/FastJAM/
Omri Hirsch is an M.Sc. student in Computer Science at Ben-Gurion University of the Negev, conducting research in Computer Vision and Machine Learning in the Vision, Inference, and Learning (VIL) group under the supervision of Prof. Oren Freifeld. His research focuses on efficient geometric learning and joint image alignment, and he is the first author of FastJAM, recently accepted to NeurIPS 2025. He has previously worked on medical imaging and computational pathology in collaboration with Dr. Yonatan Winetraub’s lab at Stanford University, as well as on underwater computer vision and color restoration under Dr. Derya Akkaynak. Omri is a recipient of competitive scholarships for outstanding M.Sc. students in AI and Data Science for two consecutive years, and was awarded NeurIPS 2025 financial support in recognition of his research potential.
While zero-shot diffusion-based compression methods have seen significant progress in recent years, they remain notoriously slow and computationally demanding. We present an efficient zero-shot diffusion-based compression method that runs substantially faster than existing methods, while maintaining performance that is on par with the state-of-the-art techniques. Our method builds upon the recently proposed Denoising Diffusion Codebook Models (DDCMs) compression scheme. Specifically, DDCM compresses an image by sequentially choosing the diffusion noise vectors from reproducible random codebooks, guiding the denoiser's output to reconstruct the target image. We modify this framework with Turbo-DDCM, which efficiently combines a large number of noise vectors at each denoising step, thereby significantly reducing the number of required denoising operations. This modification is also coupled with an improved encoding protocol. Furthermore, we introduce two flexible variants of Turbo-DDCM: a priority-aware variant that prioritizes user-specified regions, and a distortion-controlled variant that compresses an image to a target PSNR (peak signal-to-noise ratio) rather than a target BPP (bits per pixel). Comprehensive experiments position Turbo-DDCM as a compelling, practical, and flexible image compression scheme.
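The core compression idea in DDCM, selecting entries from reproducible random codebooks so that only indices need to be transmitted, can be conveyed with a toy analogy. The sketch below (plain Python, not the actual diffusion procedure; all values and the greedy projection step are illustrative) stores only codebook indices, which the decoder can replay because each codebook is regenerated from its seed:

```python
# Toy analogy of the DDCM codebook idea (NOT the real diffusion-based
# method): at each step, pick from a reproducible random codebook the
# entry that best reduces the remaining error, and store only its index.
import random

def make_codebook(seed, size, dim):
    # Reproducible: the decoder regenerates the same codebook from the seed.
    rng = random.Random(seed)
    return [[rng.gauss(0, 1) for _ in range(dim)] for _ in range(size)]

def sq_err(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

target = [1.0, -2.0, 0.5, 3.0]   # stands in for the signal being compressed
estimate = [0.0] * len(target)
indices = []                      # the "bitstream": codebook indices only
for step in range(8):
    cb = make_codebook(step, size=16, dim=len(target))
    residual = [t - e for t, e in zip(target, estimate)]

    # Pick the entry whose projection removes the most residual energy.
    def gain(v):
        dot = sum(a * b for a, b in zip(v, residual))
        return dot * dot / sum(a * a for a in v)

    i = max(range(len(cb)), key=lambda k: gain(cb[k]))
    indices.append(i)
    c = cb[i]
    # Optimal step along c; guarantees the squared error never increases.
    alpha = sum(a * b for a, b in zip(c, residual)) / sum(a * a for a in c)
    estimate = [e + alpha * x for e, x in zip(estimate, c)]

print(indices, sq_err(target, estimate) < sq_err(target, [0.0] * len(target)))
```

In the real scheme the selected vectors are diffusion noise inputs guiding a denoiser, and Turbo-DDCM's contribution is combining many such vectors per denoising step; the shared principle is that reproducible randomness makes indices alone a valid compressed representation.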
Taub 601
A "named page" is a memory page whose content originates from and is backed by a file. Because named pages are regularly read from and written to persistent storage, filesystems strive to preserve file content contiguity, thereby enabling sequential I/O, which can be much faster than random I/O. No analogous effort to preserve contiguity exists for "anonymous pages," which hold unnamed data such as stack or heap bytes. Consequently, swapping a region of anonymous pages in or out can be much slower than reading or writing a region of named pages.
We observe (1) that the main advantage of the existing swap mechanism is high swap area utilization, since any anonymous page can be placed at any offset within the swap file, so there is no fragmentation; but (2) that secondary storage is commonly underutilized, so the cost of random I/O may be unwarranted. We therefore propose "named swapping," which associates each anonymous region with its own (swap) file and thus benefits from the underlying filesystem's efforts to maintain contiguity, improving swap performance by up to an order of magnitude. A key challenge we address is anonymous pages shared across multiple regions due to fork-based copy-on-write.
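The named/anonymous distinction the talk builds on can be demonstrated directly from user space. Below is a minimal sketch using Python's `mmap` module, where a temporary file stands in for the file backing of a named page (this only illustrates the two kinds of mappings, not the proposed kernel mechanism):

```python
# Minimal sketch of anonymous vs. named (file-backed) memory mappings,
# the distinction underlying "named swapping".
import mmap
import os
import tempfile

PAGE = mmap.PAGESIZE

# Anonymous mapping: no backing file. Under memory pressure, these bytes
# would be written to the swap area at an arbitrary offset.
anon = mmap.mmap(-1, PAGE)
anon[:5] = b"stack"

# Named mapping: backed by a file, so the page's content reaches persistent
# storage through the filesystem, which strives to keep file blocks
# contiguous and thus enables sequential I/O.
fd, path = tempfile.mkstemp()
os.ftruncate(fd, PAGE)
named = mmap.mmap(fd, PAGE)
named[:4] = b"heap"
named.flush()  # write the dirty page back to its backing file

with open(path, "rb") as f:
    print(f.read(4))  # b'heap' -- the content is "named" by the file

anon.close()
named.close()
os.close(fd)
os.unlink(path)
```

Named swapping, as described above, effectively gives each anonymous region the second kind of backing, so the filesystem's contiguity machinery can serve swap traffic as well.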