Events

Colloquia and Seminars

To join the Computer Science colloquium mailing list, please visit the list's subscription page.

Computer Science events calendar in HTTP ICS format, for Google Calendar and for Outlook.
Academic Calendar on the Technion site.

Upcoming Colloquia and Seminars

Linear-Complexity Graph Transformer with Adaptive Node Assignment
Tomer Borreda (M.Sc. seminar lecture)
Sunday, 18.01.2026, 11:00

Taub 401

Advisor: Dr. Or Litany

We present ReHub, a novel graph transformer architecture that achieves linear complexity through an efficient reassignment technique between nodes and virtual nodes. Graph transformers have become increasingly important in graph learning for their ability to utilize long-range node communication explicitly, addressing limitations such as oversmoothing and oversquashing found in message-passing graph networks. However, their dense attention mechanism scales quadratically with the number of nodes, limiting their applicability to large-scale graphs. ReHub draws inspiration from the airline industry's hub-and-spoke model, where flights are assigned to optimize operational efficiency.

In our approach, graph nodes (spokes) are dynamically reassigned to a fixed number of virtual nodes (hubs) at each model layer. Recent work, Neural Atoms, has demonstrated impressive and consistent improvements over GNN baselines by utilizing such virtual nodes; their findings suggest that the number of hubs strongly influences performance. However, increasing the number of hubs typically raises complexity, requiring a trade-off to maintain linear complexity.

Our key insight is that each node only needs to interact with a small subset of hubs to achieve linear complexity, even when the total number of hubs is large. To leverage all hubs without incurring additional computational costs, we propose a simple yet effective adaptive reassignment technique based on hub-hub similarity scores, eliminating the need for expensive node-hub computations.
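
As a rough illustration of the kind of reassignment described above, the sketch below scores each node only against a few candidate hubs chosen via hub-hub similarity; the tensor shapes, the top-k rule, and the function name are illustrative assumptions, not the speaker's implementation.

    import torch
    import torch.nn.functional as F

    def reassign_spokes_to_hubs(node_feats, hub_feats, assignment, k=2):
        """Sketch of similarity-based hub reassignment (illustrative only).

        node_feats:  (N, d) node (spoke) features
        hub_feats:   (H, d) virtual-node (hub) features
        assignment:  (N,)   index of the hub each node currently belongs to
        k:           number of candidate hubs considered per node
        """
        # Hub-hub cosine similarity: an (H, H) matrix, cheap because H << N.
        hubs_n = F.normalize(hub_feats, dim=-1)
        hub_sim = hubs_n @ hubs_n.T

        # Candidate hubs for each node: the k hubs most similar to its current hub.
        candidate_hubs = hub_sim.topk(k, dim=-1).indices          # (H, k)
        per_node_candidates = candidate_hubs[assignment]          # (N, k)

        # Score only k candidates per node -> O(N * k * d), i.e. linear in N.
        cand_feats = hub_feats[per_node_candidates]               # (N, k, d)
        scores = torch.einsum("nd,nkd->nk", node_feats, cand_feats)
        best = scores.argmax(dim=-1, keepdim=True)                # (N, 1)
        return per_node_candidates.gather(1, best).squeeze(1)     # (N,)

Because the only dense similarity matrix is hub-by-hub, and each node touches a fixed number k of candidate hubs, the per-layer cost stays linear in the number of nodes even when the total number of hubs grows.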

Our experiments on long-range graph benchmarks indicate a consistent improvement in results over the base method, Neural Atoms, while maintaining linear complexity instead of O(n^3/2). Remarkably, our sparse model achieves performance on par with its non-sparse counterpart. Furthermore, ReHub outperforms competitive baselines and consistently ranks among the top performers across various benchmarks.

Automated Saffron Harvesting
Tom Agami (M.Sc. seminar lecture)
Tuesday, 20.01.2026, 11:30

Taub 401

Advisor: Prof. Alfred Bruckstein

We introduce a vision-guided robotic system for automated saffron flower harvesting. Using camera-based perception and robotic manipulation, the system detects and cuts whole flowers while preserving their stigmas and avoiding plant damage.
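
For context, the skeleton below shows one way such a perceive-then-cut loop could be organized; the detector, the confidence threshold, and the cutting step are hypothetical placeholders rather than the system presented in the talk.

    from dataclasses import dataclass

    @dataclass
    class Detection:
        x: float           # flower position in image coordinates
        y: float
        confidence: float

    def detect_flowers(frame):
        """Placeholder for a camera-based flower detector (e.g., a trained network)."""
        return []  # a real detector would return Detection objects

    def harvest(frames, min_confidence=0.8):
        """Hypothetical control loop: detect flowers, then cut the confident ones."""
        for frame in frames:
            for det in detect_flowers(frame):
                if det.confidence < min_confidence:
                    continue  # skip uncertain detections to avoid damaging the plant
                # A real system would plan a cut below the flower so the stigmas
                # stay intact, then drive the manipulator to execute it.
                print(f"cut flower at ({det.x:.1f}, {det.y:.1f})")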

Memorization and Unlearning Phenomena in Language Models through the Lens of the Input Loss Landscape
Liran Cohen (M.Sc. seminar lecture)
Monday, 26.01.2026, 11:31

Taub 601

Advisor: Prof. Avi Mendelson

Understanding how large language models store, retain, and remove knowledge is critical for interpretability, reliability, and compliance with privacy regulations.
My work introduces a geometric perspective on memorization and unlearning by analyzing loss behavior over semantically similar inputs through the Input Loss Landscape.
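
A minimal sketch of what probing such an input loss landscape could look like; the model name, the example, and the hand-written neighbors below are illustrative assumptions, not the speaker's setup.

    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    def sequence_loss(model, tokenizer, text):
        """Average next-token cross-entropy of `text` under the model."""
        enc = tokenizer(text, return_tensors="pt")
        with torch.no_grad():
            out = model(**enc, labels=enc["input_ids"])
        return out.loss.item()

    def input_loss_landscape(model, tokenizer, example, neighbors):
        """Loss on an example and on semantically similar neighbors; the shape
        of this profile is what separates retained, forgotten, and unseen data."""
        return {
            "example": sequence_loss(model, tokenizer, example),
            "neighbors": [sequence_loss(model, tokenizer, n) for n in neighbors],
        }

    # Hypothetical usage with a small public model and hand-written paraphrases.
    tok = AutoTokenizer.from_pretrained("gpt2")
    lm = AutoModelForCausalLM.from_pretrained("gpt2").eval()
    profile = input_loss_landscape(
        lm, tok,
        example="The Eiffel Tower is located in Paris.",
        neighbors=["The Eiffel Tower can be found in Paris.",
                   "Paris is home to the Eiffel Tower."],
    )
    print(profile)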

I show that retained, forgotten, and unseen examples exhibit distinct patterns that reflect active learning, suppressed knowledge, and ignored information. 
Building on this observation, I propose REMIND (Residual Memorization In Neighborhood Dynamics), a black-box framework for diagnosing residual memorization. I further introduce a new semantic neighbor generation method that enables controlled exploration of local loss geometry.
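
One hypothetical way a REMIND-style diagnosis could consume such a profile is sketched below; the scoring rule is a stand-in for illustration, since the abstract does not spell out the actual statistic.

    import statistics

    def residual_memorization_score(profile):
        """Illustrative only: a large gap between the loss on an example and the
        average loss on its semantic neighbors is read here as a hint that the
        exact wording, not just the underlying fact, is still memorized."""
        return statistics.mean(profile["neighbors"]) - profile["example"]

Applied to the profile dictionary from the previous sketch, a score near zero would suggest the example behaves like its neighborhood, while a large positive score would suggest residual verbatim memorization.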

These contributions provide interpretable insights into knowledge retention and forgetting, and offer practical tools for auditing, debugging, and enhancing transparency in large language models.