Room 601
When modifying an existing codebase to handle new functionality, programmers will often debug the program until execution reaches the insertion point for the new code.
This method, termed Debugging into Existence, helps programmers familiarize themselves with the surrounding code and runtime state. Despite its real-world usage, it is limited by the inability to test potential code past the first time the location is reached, since the added functionality would change subsequent program state, rendering the observed state irrelevant.
Prior work has pioneered Live Execution over partial programs, with extensions that use the provided values for synthesis via Programming by Example. In this work, we present DeSynt, a debugger extension that integrates live execution and program synthesis to extend the Debugging into Existence interaction model. DeSynt grants programmers meaningful run-time information across many executions by allowing them to manipulate program state according to the desired functionality. Based on the state provided by the programmer, DeSynt then synthesizes programs that capture this functionality. We evaluated DeSynt in a between-subjects study with 10 users and found that, in tasks that do not involve complex fault localization, DeSynt reduces time to completion and concentrates programmer effort into fewer code locations. In addition, we found that users who used DeSynt spent more of their task time debugging, indicating that DeSynt supports Debugging into Existence for those who already practice it.
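To make the synthesis step concrete, here is a minimal sketch of Programming by Example over recorded debugger states. The abstract does not describe DeSynt's actual API or grammar, so the example data and the `synthesize` helper below are hypothetical: the programmer records pairs of (in-scope variable state, desired value) at the insertion point, and an enumerative search finds an expression consistent with all pairs.

```python
# Hypothetical examples a programmer might record at the insertion point:
# (variable state at the breakpoint, value the new code should produce).
examples = [({"x": 2, "y": 3}, 5),
            ({"x": 10, "y": 4}, 14),
            ({"x": 0, "y": 7}, 7)]

def candidates(vars_, depth):
    """Enumerate a tiny expression grammar over the in-scope variables."""
    if depth == 0:
        yield from vars_
        yield from ("0", "1")
        return
    for a in candidates(vars_, depth - 1):
        for b in candidates(vars_, 0):
            for op in ("+", "-", "*"):
                yield f"({a} {op} {b})"

def synthesize(examples, max_depth=2):
    """Enumerative PBE: return the first expression matching every example."""
    vars_ = sorted(examples[0][0])
    for depth in range(max_depth + 1):
        for expr in candidates(vars_, depth):
            # eval is safe here only because the grammar is tiny and trusted.
            if all(eval(expr, {}, state) == out for state, out in examples):
                return expr
    return None

print(synthesize(examples))  # "(x + y)"
```

Real PBE engines prune this search with types and observational equivalence; the point here is only the interaction model: recorded debugger states become the specification.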
Taub 301
Artificial Intelligence (AI), particularly neural networks, has become central to a wide array of applications — from language modeling to text-to-image generation. Despite these achievements, ensuring the robustness of AI models remains a significant challenge. Robustness refers to the ability of models to maintain performance across diverse inputs and avoid issues such as out-of-distribution failures, generation of harmful or incorrect content, and the propagation of social biases. Addressing robustness is crucial for deploying reliable AI systems in real-world scenarios.
Motivated by these challenges, this thesis aims to improve the understanding, evaluation, and ultimately the robustness of AI models through interpretability-based methods. Interpretability research, which aims to elucidate the decision-making processes of these models, offers a promising pathway to address robustness challenges with customizable and cost-effective methods. In this seminar, I will present our research on enhancing AI robustness by applying insights from interpretability studies, focusing on mitigating biases, reducing harmful content, improving adaptability, and addressing hallucinations.
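As one concrete example of an interpretability-based intervention in this spirit (an illustrative sketch, not necessarily a method from the thesis), a linear "bias direction" can be estimated in a model's hidden-state space from contrastive inputs and projected out of the activations at inference time:

```python
import numpy as np

# Hypothetical hidden states for contrastive prompt pairs, e.g.
# "The doctor said he ..." vs. "The doctor said she ...".
rng = np.random.default_rng(0)
h_group_a = rng.normal(size=(32, 768)) + 0.5   # activations, condition A
h_group_b = rng.normal(size=(32, 768)) - 0.5   # activations, condition B

# Estimate the bias direction as the normalized difference of means.
direction = h_group_a.mean(axis=0) - h_group_b.mean(axis=0)
direction /= np.linalg.norm(direction)

def debias(h, d):
    """Remove the component of each activation row of h along direction d."""
    return h - np.outer(h @ d, d)

h = rng.normal(size=(4, 768))
h_clean = debias(h, direction)
print(np.abs(h_clean @ direction).max())  # ~0: component removed
```

Linear edits of this kind illustrate why interpretability-driven methods can be customizable and cost-effective: they modify inference directly, with no retraining.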
Homomorphic encryption enables public computation over encrypted data. In the past few decades, homomorphic encryption has become a staple of both the theory and practice of cryptography.
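As a toy illustration of the homomorphic property (textbook, unpadded RSA with tiny parameters; insecure and purely illustrative), multiplying two RSA ciphertexts yields an encryption of the product of the plaintexts:

```python
# Textbook RSA with tiny parameters, for illustration only (insecure).
p, q = 61, 53
n = p * q                           # modulus
e = 17                              # public exponent
d = pow(e, -1, (p - 1) * (q - 1))   # private exponent

enc = lambda m: pow(m, e, n)
dec = lambda c: pow(c, d, n)

a, b = 7, 6
c = (enc(a) * enc(b)) % n           # multiply ciphertexts...
assert dec(c) == (a * b) % n        # ...decrypts to the product of plaintexts
print(dec(c))                       # 42
```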
Nevertheless, while there is a loose general understanding of what it means for a scheme to be homomorphic, to date there is no single unifying minimal definition that captures all schemes. In this work, we propose a new definition, termed combinatorially homomorphic encryption, which attempts to give a broad base that captures the intuitive meaning of homomorphic encryption and draws a clear line between trivial and nontrivial homomorphism.
Our notion relates the ability to accomplish some task when given a ciphertext to the ability to accomplish the same task without the ciphertext, in the setting of communication complexity.
Thus, we say that a scheme is combinatorially homomorphic if there exists a communication complexity problem f(x, y) (where x is Alice's input and y is Bob's input) that requires communication c but can be solved with communication less than c when Alice is additionally given an encryption Ek(y) of Bob's input (under Bob's key k).
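The definition in the preceding paragraph can be sketched as follows (a paraphrase of the abstract; the paper's formal statement may differ in quantifiers and parameters):

```latex
% Sketch of the definition, paraphrasing the abstract (not the paper's
% formal statement). Requires amsthm for the definition environment.
\begin{definition}[Combinatorially homomorphic encryption, sketch]
A private-key encryption scheme $(\mathsf{Gen}, \mathsf{Enc}, \mathsf{Dec})$
is \emph{combinatorially homomorphic} if there exists a two-party function
$f(x, y)$, where Alice holds $x$ and Bob holds $y$, such that
\begin{enumerate}
  \item every protocol computing $f$ requires communication at least $c$, yet
  \item some protocol computes $f$ with communication strictly less than $c$
        when Alice is additionally given $\mathsf{Enc}_k(y)$, an encryption
        of Bob's input under Bob's key $k \leftarrow \mathsf{Gen}$.
\end{enumerate}
\end{definition}
```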
We show that this definition indeed captures pre-existing notions of homomorphic encryption, and that suitable variants of it are sufficiently strong to derive previously known implications of homomorphic encryption in a conceptually appealing way. These include constructions of (lossy) public-key encryption from homomorphic private-key encryption, as well as collision-resistant hash functions and private information retrieval schemes.
In strategic classification, the standard supervised learning setting is extended to support the notion of strategic user behavior in the form of costly feature manipulations made in response to a classifier. While standard learning supports a broad range of model classes, the study of strategic classification has so far been devoted mostly to linear classifiers. This work aims to expand the horizon by exploring how strategic behavior manifests under non-linear classifiers and what this implies for learning. We take a bottom-up approach, showing how non-linearity affects decision boundary points, classifier expressivity, and model-class complexity. A key finding is that universal approximators (e.g., neural nets) are no longer universal once the environment is strategic. We demonstrate empirically how this can create performance gaps even on an unrestricted model class.
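A minimal sketch of the strategic setting described above, with hypothetical choices of classifier and cost (neither is specified in the abstract): each user best-responds to a published classifier f by moving their feature x to the value maximizing acceptance minus a manipulation cost, so the learner effectively faces the induced classifier x -> f(best_response(x)).

```python
import numpy as np

# Toy 1-D non-linear classifier (hypothetical): accept iff score >= 0.
f = lambda x: np.sin(x)

def best_response(x, grid=np.linspace(-4, 4, 2001), cost_per_unit=2.0):
    """User moves to the feature value maximizing acceptance minus cost."""
    utility = (f(grid) >= 0).astype(float) - cost_per_unit * np.abs(grid - x)
    return grid[np.argmax(utility)]

# The classifier the learner actually faces is x -> f(best_response(x)).
for x in (-0.3, 0.5, 3.5):
    x_moved = best_response(x)
    print(f"x={x:+.2f} moves to {x_moved:+.2f}; accepted: {f(x_moved) >= 0}")
```

Users near an accepting region cross its boundary while distant users stay put, so the deployed classifier is f composed with the best-response map; this composition is one intuition for why expressivity arguments change in the strategic setting.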