Colloquia and Seminars

To join the Computer Science colloquium mailing list, please visit the mailing list's subscription page.


Computer Science events calendar in HTTP ICS format, for Google Calendar and for Outlook.

Academic Calendar on the Technion site.

Upcoming Colloquia and Seminars

  • A Learning Approach to Geometric Matrix Completion

    Speaker:
    Maria Tunik Schmidt, M.Sc. Thesis Seminar
    Date:
    Monday, 22.7.2019, 10:00
    Location:
    Taub 601
    Advisor:
    Prof. A. Bronstein

    Various methods have been proposed over the years for finding good solutions to the Matrix Completion Problem. The problem arises in many real-life tasks in which a sparse signal lying on a grid of two non-Euclidean domains (graphs or manifolds) must be predicted or completed. A classic example is the "Netflix Problem" from the field of recommender systems, where specific items are recommended to users (friends on Facebook, products on Amazon, links on Google, movies on YouTube, songs on Spotify, tweets on Twitter, and so on). In this work, we present a novel learning approach to geometric matrix completion on non-Euclidean domains. Our approach suggests that when the problem is viewed from a geometric point of view, neural networks can provide a very strong prior for it. Hence, untrained neural networks and learning methods that work well for single matrix completion tasks in Euclidean domains (such as image completion), once redefined, can also be used for matrix completion on non-Euclidean domains. This redefinition is made possible by building blocks and operators from non-Euclidean geometry suited to domains such as graphs and manifolds, and by redefining the network layers (such as convolution and pooling) accordingly. Following this approach, we present a novel, fast, and intuitive learning method for the Matrix Completion Problem: the Matrix Data Deep Decoder (MDDD), which parallels the latest state-of-the-art method for Euclidean domains such as images, the 'Deep Decoder', and achieves state-of-the-art results for this problem with a very compact network within minutes. Beyond the method itself, our main contribution is the proposition that neural networks for non-Euclidean data, viewed geometrically, constitute a very strong prior for this problem. This approach can serve as a basis for many future non-Euclidean data completion methods and applications.
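    The "untrained network as prior" idea at the heart of this approach can be illustrated in the plain Euclidean setting. Below is a minimal sketch, assuming PyTorch: a small decoder with a fixed random input is fitted only to the observed entries of a synthetic low-rank matrix, and its output is read off at the missing entries. The architecture, sizes, and training schedule are illustrative inventions, not the MDDD itself, which additionally uses non-Euclidean (graph and manifold) building blocks.

        # Minimal Euclidean sketch: an untrained decoder network acts as a
        # prior for matrix completion. All sizes are illustrative.
        import torch

        torch.manual_seed(0)
        m, n, rank = 50, 40, 3

        # Synthetic low-rank ground truth; roughly 30% of entries observed.
        truth = torch.randn(m, rank) @ torch.randn(rank, n)
        mask = (torch.rand(m, n) < 0.3).float()

        # Decoder with a fixed random input; only its weights are trained.
        z = torch.randn(1, 64)
        decoder = torch.nn.Sequential(
            torch.nn.Linear(64, 128), torch.nn.ReLU(),
            torch.nn.Linear(128, m * n),
        )

        opt = torch.optim.Adam(decoder.parameters(), lr=1e-2)
        for step in range(2000):
            pred = decoder(z).reshape(m, n)
            # The loss sees only the observed entries.
            loss = ((pred - truth) * mask).pow(2).sum() / mask.sum()
            opt.zero_grad()
            loss.backward()
            opt.step()

        with torch.no_grad():
            pred = decoder(z).reshape(m, n)
            err = ((pred - truth) * (1 - mask)).pow(2).sum() / (1 - mask).sum()
            print(f"MSE on unobserved entries: {err.item():.4f}")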

  • Program Synthesis for Programmers

    Speaker:
    Hila Peleg, Ph.D. Thesis Seminar
    Date:
    Tuesday, 6.8.2019, 14:30
    Location:
    Taub 601
    Advisor:
    Prof. E. Yahav

    Recent years have seen great progress in automated synthesis techniques that can generate code from some intent expressed by the user, but communicating this intent remains a major challenge. When the expressed intent is coarse-grained (for example, a restriction on the expected type of an expression), the synthesizer often produces a long list of results for the user to choose from, shifting the heavy lifting to the user. An alternative approach is programming by example (PBE), where the user leverages examples to interactively and iteratively refine the intent. Existing program synthesis tools are usually designed around the synthesizer and its internals. However, these tools are intended for users, who are the ones who must specify (and re-specify) the specifications. Synthesis tools are often designed either with no particular group of users in mind, or with the purpose of generating code for users who cannot read or write it. We suggest instead designing synthesis tools specifically for programmers. This allows making assumptions about both the input the user can generate and the output they can consume. Concepts that are part of the programmer's life, such as code review and unit tests, can be leveraged, and the user can assist the synthesizer in making sensible generalizations. But this approach also brings restrictions for the synthesizer, which pose new design challenges: examples, a common specification tool, are not expressive enough for programmers, who can observe the generated program and refine the intent by directly relating to parts of it. Additionally, can users correctly judge when the program is correct? We propose a new Granular Interaction Model (GIM) and performed a controlled user study to assess its effectiveness. In addition, we modeled the interaction of the user with a synthesizer, formalizing the user's refinement of the specification and the corresponding reduction of the candidate program space. This model allowed us to present two conditions for termination of a synthesis session, one hinging only on properties of the available partial specifications, and the other also on the behavior of the user. Finally, we showed conditions for realizability of the user's intent, and limitations of backtracking once it is apparent a session will fail.
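    The refinement loop formalized in this work can be demonstrated with a toy enumerative synthesizer, sketched below in Python. The expression grammar is invented for illustration; each input-output example the user supplies prunes the candidate program space, the session terminates when one program remains, and an empty set signals the intent is unrealizable in this space.

        # Toy sketch of specification refinement in example-driven synthesis:
        # each (input, output) example shrinks the candidate program space.
        # The grammar of candidate programs is invented for illustration.

        def all_candidates():
            """Enumerate tiny unary integer programs over a fixed grammar."""
            for op, f in [("+", lambda x, c: x + c),
                          ("*", lambda x, c: x * c),
                          ("-", lambda x, c: x - c)]:
                for c in range(-5, 6):
                    # Bind f and c as default arguments so each closure
                    # captures its own values.
                    yield (f"x {op} {c}", lambda x, f=f, c=c: f(x, c))

        def refine(candidates, examples):
            """Keep only programs consistent with every example."""
            return [(src, f) for src, f in candidates
                    if all(f(x) == y for x, y in examples)]

        candidates = list(all_candidates())
        print(len(candidates), "candidates before any example")  # 33

        candidates = refine(candidates, [(2, 6)])   # user: f(2) = 6
        print([s for s, _ in candidates])           # ['x + 4', 'x * 3', 'x - -4']

        candidates = refine(candidates, [(0, 0)])   # user adds: f(0) = 0
        print([s for s, _ in candidates])           # ['x * 3']; intent pinned down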

  • Learning for Numerical Geometry

    Speaker:
    Gautam Pai, Ph.D. Thesis Seminar
    Date:
    Wednesday, 14.8.2019, 11:00
    Location:
    Taub 401
    Advisor:
    Prof. Ron Kimmel

    Numerical geometry comprises principled computational methods that combine theoretical insights from geometry with engineering concepts from numerical methods to tackle various problems in geometric data analysis. In contrast, computational methods arising from recent advances in deep learning have a black-box nature, in which essential and meaningful features are learned from training examples, leading to state-of-the-art results. This thesis explores a synergy between these two disparate computational philosophies. In particular, we integrate deep learning into computational methods of numerical geometry and propose neural-network-based alternatives to standard geometric algorithms. First, we demonstrate that we can learn an invariant geometric representation of planar curves using deep metric learning with a binary contrastive loss. Using just positive and negative examples of transformations, we show that a convolutional neural network is able to model an invariant function of a discrete planar curve, and that such invariants show improved numerical properties in comparison to their axiomatic counterparts. Second, we demonstrate a scheme for deep isometric manifold learning that computes distance-preserving maps, generating low-dimensional embeddings for a certain class of high-dimensional manifolds. We use the philosophy of multidimensional scaling (MDS) to train a network with a distance-preserving loss in a manifold-learning setup. In addition to a straightforward out-of-sample extension, the MDS action of the network is shown to have superior generalization abilities. Finally, we tackle shape correspondence using descriptor-dependent kernels in a functional maps framework. We interpret such kernels as operators on functions defined on compact two-dimensional Riemannian manifolds. By aggregating the pairwise information from the descriptor with the intrinsic geometry of the surface encoded in the heat kernel, we construct a hybrid kernel we call the bilateral operator. By forcing the correspondence map to commute with the bilateral operator, we show that we can maximally exploit the information in a given set of pointwise descriptors within the functional maps framework.
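    The distance-preserving (MDS-style) network of the second part is easy to illustrate. The sketch below, assuming PyTorch, trains a small MLP so that pairwise distances between its embeddings match given target distances (a stress loss); out-of-sample extension is then just an application of the trained map. The data, architecture, and the use of Euclidean rather than geodesic target distances are simplifications for illustration.

        # MDS-style training sketch: learn a map whose embedding preserves
        # given pairwise distances. Data and sizes are illustrative.
        import torch
        import torch.nn.functional as F

        torch.manual_seed(0)
        n, d_in, d_out = 200, 10, 2

        # Synthetic data: a 1-D curve embedded in 10 dimensions, so a
        # faithful low-dimensional embedding exists. Targets here are plain
        # Euclidean pairwise distances (condensed, diagonal excluded).
        t = torch.rand(n, 1) * 6.28
        X = torch.cat([torch.cos(k * t) for k in range(1, d_in + 1)], dim=1)
        D = F.pdist(X)

        net = torch.nn.Sequential(
            torch.nn.Linear(d_in, 64), torch.nn.ReLU(),
            torch.nn.Linear(64, d_out),
        )
        opt = torch.optim.Adam(net.parameters(), lr=1e-3)
        for step in range(3000):
            Y = net(X)
            stress = (F.pdist(Y) - D).pow(2).mean()  # MDS stress loss
            opt.zero_grad()
            stress.backward()
            opt.step()

        # Out-of-sample extension is immediate: apply the trained map.
        t_new = torch.rand(5, 1) * 6.28
        X_new = torch.cat([torch.cos(k * t_new) for k in range(1, d_in + 1)], dim=1)
        with torch.no_grad():
            print(net(X_new))  # embeddings of unseen samples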

  • Online Linear Models for Edge Computing

    Speaker:
    Hadar Sivan, M.Sc. Thesis Seminar
    Date:
    Wednesday, 11.9.2019, 11:30
    Location:
    Taub 601
    Advisor:
    Prof. A. Schuster

    Maintaining an accurate trained model over an infinite data stream is challenging due to concept drifts that render a learned model inaccurate. Updating the model periodically can be expensive, so traditional approaches for computationally limited devices involve a variation of online or incremental learning, which tends to be less robust. The advent of heterogeneous architectures and Internet-connected devices gives rise to a new opportunity: a weak processor can call upon a stronger processor or a cloud server to perform a complete batch training pass once a concept drift is detected, trading power or network bandwidth for increased accuracy. We capitalize on this opportunity in two steps. We first develop a computationally efficient bound on the change in any linear model with a convex, differentiable loss. We then propose a sliding-window-based algorithm that uses a small number of batch model computations to maintain an accurate model of the data stream. The algorithm uses the bound to continuously evaluate the difference between the parameters of the existing model and those of a hypothetical optimal model, triggering computation only as needed. Empirical evaluation on real and synthetic datasets shows that our proposed algorithm adapts well to concept drifts and provides a better tradeoff between the number of model computations and model accuracy than classic concept drift detectors. When predicting changes in electricity prices, for example, we achieve 6% better accuracy than the popular EDDM, using only 20 model computations.
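    The trigger-only-when-needed pattern can be sketched as follows in Python. Here the gradient norm of the window loss at the current weights stands in for the talk's actual bound (for a strongly convex loss, a small gradient certifies that the current model is near the window optimum); the window size, threshold, and drift scenario are all illustrative.

        # Sketch of "compute only when needed": monitor a cheap drift signal
        # and trigger the expensive batch refit only when it fires.
        import numpy as np
        from collections import deque

        rng = np.random.default_rng(0)
        WINDOW, THRESHOLD = 200, 0.5     # illustrative values

        def batch_refit(window):
            """The expensive step: exact least squares over the window."""
            X = np.array([x for x, _ in window])
            Y = np.array([y for _, y in window])
            return np.linalg.lstsq(X, Y, rcond=None)[0]

        window = deque(maxlen=WINDOW)
        w = np.zeros(2)                  # current linear model, y ~ w @ x
        refits = 0
        true_w = np.array([1.0, -2.0])

        for step in range(5000):
            if step == 2500:             # abrupt concept drift
                true_w = np.array([-3.0, 0.5])
            x = rng.normal(size=2)
            window.append((x, true_w @ x + 0.1 * rng.normal()))

            if len(window) == WINDOW:
                X = np.array([x for x, _ in window])
                Y = np.array([y for _, y in window])
                # Gradient of the mean squared loss at the current weights.
                grad = 2 * X.T @ (X @ w - Y) / WINDOW
                if np.linalg.norm(grad) > THRESHOLD:
                    w = batch_refit(window)   # model drifted too far; refit
                    refits += 1

        print(f"final weights {w.round(2)}, batch refits: {refits}")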