Wednesday, 30.12.2009, 14:00
Modern software employs dynamic memory allocation to support its memory needs. Allocation and de-allocation of memory create fragmentation: holes between allocated objects in memory may be too small to satisfy future allocations. Fragmentation complicates memory allocation. Furthermore, it creates a space overhead that can make a program crash even though enough memory for the allocation is available. Compaction defragments the memory by compacting all obj...
[ Full version ]
Ari Frank (CS and Engineering, University of California, San Diego)
Wednesday, 30.12.2009, 13:30
Tandem mass spectrometry (MS/MS) has emerged as the tool of choice for
high-throughput proteomics analysis. In a typical MS/MS experiment, a protein
mixture sample is digested to peptides and is chromatographically separated.
Homogenous sets of peptide molecules are then selected by the mass spectrometer
and fragmented via collision induced dissociation. The outcome of this process
is recorded as a mass spectrum, which is a list of fragment masses and their
corresponding int...
[ Full version ]
Ariel Gabizon (Faculty of Mathematics and Computer Science The Weizmann Institute of Science)
Wednesday, 30.12.2009, 12:30
Let F be the finite field of q elements. An (n,k)-affine extractor is a
coloring of F^n with 2 colors, such that every affine subspace of
dimension at least k in F^n is colored in a balanced way. Roughly
speaking, the problem of explicitly constructing affine extractors gets
harder as the field size q gets smaller and easier as k gets larger.
In this work, we construct affine extractors whenever q = \Omega(
(n/k)^2), provided that F has characteristic \Omega(n/k).
For ...
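One standard way to make "colored in a balanced way" precise (the exact formalization used in the talk may differ) is to require, for some small bias $\epsilon$, that
$|\Pr_{x \in A}[f(x) = 1] - 1/2| \le \epsilon$ for every affine subspace $A \subseteq F^n$ with $\dim(A) \ge k$,
where $f : F^n \to \{0,1\}$ is the 2-coloring.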
[ Full version ]
Guillermo Sapiro (University of Minnesota)
Wednesday, 30.12.2009, 10:30
We present Video SnapCut, a robust video object cutout system that significantly advances the state of the art. In our system, segmentation is achieved by the collaboration of a set of local classifiers, each adaptively integrating multiple local image features. We show how this segmentation paradigm naturally supports local user edits and propagates them across time. The object cutout system is completed with a novel coherent video matting technique. A comprehensive evaluation a...
[ Full version ]
Prof. Kenichi Kanatani (Computer Science, Okayama University, Okayama, Japan)
Tuesday, 29.12.2009, 11:30
Fitting an algebraic equation to observed data is one of the first
steps of many computer vision applications. For example, we fit lines
and curves to points in 2D, and planes and surfaces in 3D. Computing
the fundamental matrix or the homography matrix can also be
viewed as fitting in a high-dimensional space.
A naive approach is least squares, also known as ``algebraic
distance minimization'', minimizing the sum of squares of terms that
should b...
[ Full version ]
Peter Komjath (Eotvos University, Budapest)
Monday, 28.12.2009, 14:30
If D is a countable set of positive reals, let X_n(D) be the following graph: the vertices
are the points of n-dimensional Euclidean space, and two points are joined if their distance
is in D. We determine the list-chromatic number of these graphs as much as possible...
[ Full version ]
Shai Kels (School of mathematical sciences, Applied math, Tel Aviv Univ.)
Sunday, 27.12.2009, 13:00
We introduce a new geometric definition of affine combinations of two
compact subsets of R^n.
The 4-point interpolatory subdivision scheme for points is adapted to
sets, by first expressing the insertion rule in terms of repeated binary
averages, and then replacing these affine
combinations of pairs of points by the new affine combinations of sets.
This subdivision scheme is applied to the reconstruction of
multidimensional
objects from cross-sections....
[ Full version ]
Muli Ben-Yehuda (IBM Research - Haifa)
Monday, 21.12.2009, 18:30
Server virtualization has been widely adopted by the market, and the number of servers running virtual machines is increasing daily. As machine virtualization gains popularity, the hypervisor itself, along with its management stack, becomes a basic and required part of the system. The next natural evolution phase in the virtualization abstraction chain is to view the hypervisor as part of the user workload, and to be able to run multiple hypervisors inside virtual machines, each w...
[ Full version ]
Yaki Setty (Computational Biology Group, Computational Science Laboratory Microsoft Research, Cambridge, UK)
Wednesday, 16.12.2009, 13:30
Biological systems are complex, involving feedbacks across numerous processes, mechanisms and objects. Experimental analysis, by its nature, rarely captures the behavior of the complete biological system as it divides the system into static interactions with reduced spatial dimensionality and minimal dynamics. To bridge the gap between experimental findings and the underlying system behavior, we formalize biological findings into mathematically and algorithmically rigorous specifi...
[ Full version ]
Dmitry Pechyony (NEC Labs and former CS department Ph.D. graduate)
Thursday, 10.12.2009, 14:30
In learning with privileged information (Vapnik, 2006) the labeled training examples have two views, primary and secondary, and the test examples have only a primary view. The goal of the learner is to leverage the information from these two views in order to build an accurate classifier in the primary view.
The SVM+ algorithm, introduced by Vapnik, is a major tool for learning with privileged information. Recently, Vapnik, Vashist and Pavlovich (2009) showed that by utilizi...
[ Full version ]
Reuven Bakalash (Lucid Technology Ltd.)
Tuesday, 08.12.2009, 11:30
Comparative study of real-time graphics architectures. The graphics industry is undergoing dramatic changes. A variety of new graphics pipeline topologies are now emerging, converging between GPUs and multicore-based graphics processors, both leaning on high parallelism and increasing flexibility. The GPUs, based on a hardware graphics pipeline, are becoming more flexible through increased programmability, precision and performance, with improved load balance by hundreds of unified shad...
[ Full version ]
Monday, 07.12.2009, 18:30
Aviv Sharon is a contributor to the game, mainly in the graphics area. In addition to the presentation of the game, we hope to establish a reasonably fast IRC chat with a group of the leading developers of 0AD, on
#0ad on QuakeNet. The purpose of the chat is to relay questions to the leading developers.
You are all invited to lurk on the channel, even if you miss the talk...
...
[ Full version ]
Artiom Myaskouvskey (CS, Technion)
Monday, 07.12.2009, 14:30
We describe a new part-based detection algorithm.
The proposed algorithm uses an empirical model,
based on data extracted from the test image, to bound
the probability that a model candidate arises from a non-category-related image.
The decision is adaptive and does not rely on parameters optimized for possibly unrelated non-category images.
Finally, the decision process provides a bound on the number of false alarms in the image.
We explain the a-contrario method, deriv...
[ Full version ]
Monday, 07.12.2009, 14:30
Hall's famous marriage theorem from 1935 states that in a bipartite
graph with sides M and W, there is a matching of M into W if and only if
every subset A of M has at least |A| neighbors in W. A generalization of
this theorem for r-partite hypergraphs was proved by Aharoni and Haxell in
2000, using topological methods. In 2005 Aharoni, Berger and Meshulam proved
a stronger version of this theorem, using a new function on graphs, called
the Gamma function. This function ...
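As a small illustration of the marriage condition itself (a hypothetical Python sketch, unrelated to the topological methods used in the proofs above), Hall's condition can be checked by brute force for tiny bipartite graphs:

```python
from itertools import combinations

def hall_condition_holds(neighbors):
    """neighbors: dict mapping each vertex of M to its set of neighbors in W.
    Returns True iff every subset A of M has at least |A| neighbors in W."""
    M = list(neighbors)
    for r in range(1, len(M) + 1):
        for A in combinations(M, r):
            covered = set().union(*(neighbors[m] for m in A))
            if len(covered) < len(A):
                return False
    return True

# A matching of M into W exists in the first graph but not in the second.
print(hall_condition_holds({"m1": {"w1", "w2"}, "m2": {"w1"}}))  # True
print(hall_condition_holds({"m1": {"w1"}, "m2": {"w1"}}))        # False
```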
[ Full version ]
Wednesday, 25.11.2009, 14:00
The well known learning models in Computational Learning Theory are either adversarial, meaning that the examples are arbitrarily selected by the teacher, or i.i.d., meaning that the teacher generates the examples independently and identically according to a certain distribution. However, it is also quite natural to study learning models in which the teacher generates the examples according to a stochastic process.
A particularly simple and natural time-driven process is the rand...
[ Full version ]
Prahladh Harsha (TIFR, Mumbai, India)
Wednesday, 25.11.2009, 12:30
In this talk, I will present a new invariance principle for polytopes
(intersections of halfspaces) and some applications of the same.
Invariance Principle: Let X be randomly chosen from {−1,1}^n, and let Y
be randomly chosen from a spherical Gaussian on R^n. For any polytope P
formed by the intersection of k halfspaces,
|Pr[X∈P]−Pr[Y∈P]|≤polylog(k)·∆,
where ∆ is a parameter that is small for “regular” polytopes. A polytope
is said to be regul...
[ Full version ]
Amir Kolaman (EE & CE Ben Gurion)
Tuesday, 24.11.2009, 11:30
Full reference image Quality Index (QI) is an important tool for a cheap and
fast assessment of image degradation by different algorithms. A short survey
and comparison of recent gray-scale and color image QI is presented. We
choose to divide image degradation into two types: luminance changes and
saturation changes. From the comparison we see that, in the case of
luminance changes, gray scale QI better correlates with Human Visual System
(HVS) test results. On the other hand ...
[ Full version ]
Tali Treibitz (Electrical Engineering, Technion)
Wednesday, 11.11.2009, 11:30
Images taken through a medium may suffer from poor visibility and loss of
contrast. Light passing through undergoes absorption and scattering.
Wavelength dependent attenuation causes changes in color and brightness. In
addition, light that is scattered back from the medium into the camera
(backscatter) veils the object, degrading visibility and contrast. Low
signal to noise ratio imposes resolution limits, even if there is no blur.
Moreover, refraction between the medium an...
[ Full version ]
Yair Rivenson (Electrical & Computer Engineering, Ben-Gurion University)
Wednesday, 04.11.2009, 13:30
Compressive sensing is a relatively new theory that is based on
the fact that many natural signals (or images) can be sparsely
represented in some basis. The theory of compressive imaging is
applied to directly capture a compressed version of an object's
image. Here, we give an overview of different CI techniques, and
focus on two that were developed in our group. While implementing
conventional CS for imaging we have encountered some practical
implement...
[ Full version ]
Micha Sharir (Tel-Aviv University)
Wednesday, 04.11.2009, 13:30
Almost a year ago, Larry Guth and Nets Hawk Katz obtained the
tight upper bound $O(n^{3/2})$ on the number of joints in a set of $n$
lines in 3-space, where a joint is a point incident to at least three
non-coplanar lines, thus closing the lid on a problem that has been
open for nearly 20 years. While this in itself is a significant
development, the groundbreaking aspect of their work is the proof
technique, which uses fairly simple tools from algebraic geometry, a
tot...
[ Full version ]
Dr. Davide Migliore (CS, Politecnico di Milano)
Tuesday, 27.10.2009, 11:30
One of the principal goals of Computer Vision is to build artificial systems that are able to perceive and understand the world, obtaining information from images acquired by one or more cameras. The same objective, extended to generic sensors, can be found in Robotics, where autonomous robots require a complete knowledge of the environment to provide services and perform tasks as planning, obstacles avoidance or objects grasping. During this talk I will present a novel approach f...
[ Full version ]
David Khosid (Technion and Sandvine)
Monday, 21.09.2009, 18:30
In general, I will be sharing my discovery with you: when debugging modern software, basic GDB techniques are insufficient, but new techniques can be created from the nearly 160 commands available in GDB. "Modern software" refers to multi-threading, use of the STL and other libraries, IPC, signals and exception mechanisms. In this lecture, I will explain techniques for debugging large, modern software written in C++.
The presentation will be accompanied ...
[ Full version ]
Hadar Benyamini and Assaf Friedler (The Institute of Chemistry, The Hebrew University of Jerusalem)
Monday, 14.09.2009, 13:00
Biology Auditorium, Technion
The ASPP family proteins have a key role in apoptosis regulation. ASPP1
and ASPP2 promote, while iASPP inhibits, p53-mediated apoptosis. ASPP
stands for Apoptosis Stimulating Protein of p53, as well as
representing the domain constitution: Ankyrin repeats, SH3 and
Proline-rich containing Protein. The ankyrin repeats and SH3 domains
(Ank-SH3) mediate the interactions of the ASPP family members with major
apoptosis regulators, including p53, Bcl-2 and NFκB....
[ Full version ]
Wednesday, 09.09.2009, 12:00
This work uses static data and control flow code analysis
to aid in proving properties of systems with aspects.
Some new categories of aspect advices (code segments) and
introduced methods are defined, beyond the existing ones.
The categories are chosen for their effectiveness,
in that they both provide real aid in the verification of
properties and are automatically detectable using data
and control flow. The theoretical part of the thesis
investigates which linear t...
[ Full version ]
Monday, 07.09.2009, 18:30
Why do we need a cross-platform toolkit?
The qmake build system
Strings, list + iterators in Qt4
signals+slots
Basic GUI building
The purpose of the lecture is not to teach Qt4, but to explain what is
available in Qt4 and motivate you to sit on your behind and learn it.
...
[ Full version ]
Nikolay Skarbnik (Department of Electrical Engineering, Technion)
Tuesday, 01.09.2009, 11:30
A signal's phase is a non-trivial quantity. It is therefore often ignored in favor of the signal's magnitude. However, phase conveys more information regarding signal structure than magnitude does, especially in the case of images. It is therefore essential to use phase information in various signal/image processing schemes as well as in computer vision. This is true for the global phase and, even more so, for loca...
[ Full version ]
Yoni Rabkin (Free Software Foundation)
Monday, 31.08.2009, 18:30
We explain the "what", "who" and "how" of the Free Software Foundation
GPL Compliance Lab. We go through five sample questions based on real
questions the lab received and discuss their answers. We finish with a
few words about GPL violations.
The lecture is intended for the general audience interested in free
software licensing and in particular the licenses published by the
FSF.
...
[ Full version ]
Yaron Lipman (Princeton University, USA)
Thursday, 20.08.2009, 14:00
We present a method for automatic surface comparison and alignment based on
principles from conformal geometry and optimal mass transportation.
One application of the method is efficient calculation of point
correspondences between surfaces. Another
application is a novel distance definition between disc-type surfaces which
forms a tractable alternative to the Gromov-Hausdorff distance.
Joint work with I. Daubechies and T. Funkhouser.
...
[ Full version ]
Prajakta Nimbhorkar (The Institute of Mathematical Sciences)
Wednesday, 19.08.2009, 15:00
Graph isomorphism is an important problem and has received a lot
of attention over the years. Its complexity is still open. There have
been polynomial-time algorithms for several subclasses of graphs,
including planar graphs. For isomorphism of some of these graph classes,
the focus has been on determining the parallel complexity. The goal has
been to show the isomorphism problem for various graph classes to be
complete for some natural complexity class.
For pla...
[ Full version ]
Monday, 17.08.2009, 18:30
This lecture will focus on Linux kernel sockets. Specifically, we will deal with the implementation of
the following sockets in the Linux kernel:
UDP socket.
Raw sockets.
Unix Domain sockets.
Netlink sockets.
We will describe the mechanism of sending and receiving packets with these types of kernel sockets, demonstrating it with short user space applications. We will also deal with Control Messages (aka "ancillary data") and demonstrate...
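As a point of reference for the user-space side only (a minimal hypothetical sketch in Python, not taken from the lecture materials), sending and receiving a UDP datagram looks roughly like this; each call below ends up exercising the kernel's UDP send and receive paths:

```python
import socket

def receive_one(port=9999):
    # Bind a UDP socket and wait for a single datagram (kernel receive path).
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.bind(("127.0.0.1", port))
        data, addr = sock.recvfrom(1024)
        print("got", data, "from", addr)

def send_one(port=9999):
    # Send a single datagram; no connection setup is needed for UDP.
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.sendto(b"hello", ("127.0.0.1", port))
```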
[ Full version ]
Rony Zatzarinni (EE Technion)
Tuesday, 11.08.2009, 11:30
Relief Analysis and Extraction. We present an approach for extracting reliefs and details from relief surfaces.
We consider a relief surface as a surface composed of two components: a base surface and a
height function which is defined over this base. However, since the base surface is unknown,
the decoupling of these components is a challenge. We show how to estimate a robust
height function over the base, without explicitly extracting the base surface.
This height func...
[ Full version ]
Niv Sabath (Biology and Biochemistry, University of Houston, TX )
Monday, 27.07.2009, 10:30
2nd floor Miklat, Biology Build.
As far as protein-coding genes are concerned,
there is a non-zero probability that at least one of
the five possible overlapping sequences of any
gene will contain an open-reading frame (ORF)
of a length that may be suitable for coding a
functional protein. It is, however, very difficult to
determine whether or not such an ORF is functional.
We developed a new method that employs
evolutionary principles to identify functional
overlapping genes based on the signat...
[ Full version ]
Sunday, 26.07.2009, 18:30
This lecture will deal with mesh networking, kernel sockets,
netlink/rtnetlink sockets and transport layer sockets.
...
[ Full version ]
Wednesday, 15.07.2009, 10:00
Tracking humans, understanding their actions and interpreting them are crucial to a great variety of applications. Tracking is used in automated surveillance, human-computer interface applications and in security applications. During the last decade extended research has been conducted on this subject. Analysis of human interactions is a complicated and challenging task for several reasons. First, the large number of body parts makes it hard to detect each part separately. Second,...
[ Full version ]
Monday, 13.07.2009, 18:30
This lecture is an overview of developing applications for Google's
Android. We start by introducing Android and its components, we look
at the anatomy of an Android application, we show the basic components
of Android application development including UI design and finally we
say some things about the development environment. The lecture is for
people with no prior knowledge of the Android system as it is only an
overview.
...
[ Full version ]
Guy Shinar (Uri Alon's Group, Molecular Cell Biology & Physics of Complex Systems, Weizmann Institute of Science)
Thursday, 09.07.2009, 13:30
Cell-to-cell and temporal variations in the concentrations of biomolecular components are inevitable. These variations in turn propagate along networks of chemical reactions to impart changes in the concentrations of still other species, such as phosphorylated transcription factors, that influence biological activity. Because excessive variations in the levels of certain active molecules might sometimes be deleterious to cell function, regulation systems have evolved that act to m...
[ Full version ]
Tuesday, 30.06.2009, 00:00
In this talk we will discuss the SLIC project, developed at the University of Arizona and the IBM Almaden Research Center. SLIC (Semantically Linked Instructional Content) aims to assist students and scholars in efficiently browsing and seeking segments of interest in educational videos of lectures and talks. In particular, it focuses on lectures that use slides, where the content of the slides file gives valuable hints as to how to break the video into meaningful parts (segments), and ho...
[ Full version ]
Monday, 29.06.2009, 18:30
KSM is a Linux driver that allows dynamically sharing identical memory
pages between one or more processes. Unlike traditional page sharing,
which is done when the memory is allocated, KSM does it dynamically
after the memory has been created. Memory is periodically scanned;
identical pages are identified and merged. The sharing is unnoticeable
to the processes that use this memory (the shared pages are marked
read-only, and on a write do_wp_page() takes care of creating a new
...
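A loose user-space analogy of the merge step (a hypothetical Python sketch; the real driver works on physical pages, marks merged pages read-only and relies on copy-on-write faults):

```python
def deduplicate_pages(pages):
    """pages: list of equal-sized bytes objects (the 'memory pages').
    Identical pages are replaced by a single shared object, mimicking
    the periodic scan-and-merge step of a KSM-like mechanism."""
    seen = {}
    return [seen.setdefault(page, page) for page in pages]

pages = [b"A" * 4096, b"B" * 4096, b"A" * 4096]
merged = deduplicate_pages(pages)
print(merged[0] is merged[2])  # True: the identical pages now share one object
```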
[ Full version ]
Udi Wieder (Microsoft Research)
Wednesday, 24.06.2009, 15:00
Say we place m balls sequentially into n bins, where each ball is placed by
randomly selecting d bins and placing it in the least loaded of them. What is
the load of the maximum bin when m>>n? It is well known that when d=1 the
maximum load is m/n + \tilde{O}(sqrt(m/n)), whereas when d>=2 the maximum load is
m/n + log log n. Thus when more than one bin is sampled, the gap between maximum
and average does not increase with the number of balls. We investigate a larger
famil...
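A quick simulation (a hypothetical sketch, not code from the talk) makes the gap between d=1 and d=2 easy to observe:

```python
import random

def max_load_gap(m, n, d, seed=0):
    """Throw m balls into n bins; each ball samples d bins uniformly at
    random and goes into the least loaded of them. Returns the gap
    between the maximum load and the average load m/n."""
    rng = random.Random(seed)
    loads = [0] * n
    for _ in range(m):
        choices = [rng.randrange(n) for _ in range(d)]
        best = min(choices, key=lambda b: loads[b])
        loads[best] += 1
    return max(loads) - m / n

# With d=1 the gap grows with m; with d=2 it stays roughly log log n.
print(max_load_gap(m=100_000, n=1_000, d=1))
print(max_load_gap(m=100_000, n=1_000, d=2))
```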
[ Full version ]
Prof. Peter Stone (CS, The University of Texas at Austin, USA)
Sunday, 21.06.2009, 11:30
One goal of Artificial Intelligence is to enable the creation of robust, fully autonomous agents that can coexist with us in the real world. Such agents will need to be able to learn, both in order to correct and circumvent their inevitable imperfections, and to keep up with a dynamically changing world. They will also need to be able to interact with one another, whether they share common goals, they pursue independent goals, or their goals are in direct conflict. This talk wi...
[ Full version ]
Michael Viderman (CS, Technion)
Wednesday, 17.06.2009, 15:00
Locally testable codes (LTCs) are error-correcting codes for which
membership of a given word in the code can be tested by examining
the word in very few locations. Most known constructions of locally
testable codes are linear codes, and give error-correcting codes
whose duals have (superlinearly) {\em many} small weight
codewords. Examining this feature appears to be one of the promising
approaches to proving limitation results for (i.e., upper bounds on
the rate of) LT...
[ Full version ]
Tuesday, 16.06.2009, 11:30
Compression of facial images using sparse and redundant representations and the K-SVD algorithm...
[ Full version ]
Monday, 15.06.2009, 18:30
Arduino
is an open-source hardware platform. Since its launch it has become much more than just a hobbyist playground. This presentation will introduce the Arduino platform, go over some of the messy details and explore its potential for the casual hacker. No electrical engineering background is needed; some basic C knowledge is assumed.
...
[ Full version ]
Partha Mukhopadhyay (CS, Technion)
Wednesday, 10.06.2009, 15:00
Using ideas from automata theory we design a new efficient
(deterministic) identity test for the noncommutative
polynomial identity testing problem.
More precisely, given as input a noncommutative
circuit C computing a polynomial of degree d in the noncommutative ring
F{x_1,x_2,...,x_n} with at most t monomials,
where the variables x_i are noncommuting, we give a deterministic
polynomial identity test that checks if C=0 and runs in time
polynomial in d, n, size(C), an...
[ Full version ]
Troy Lee (Columbia University)
Sunday, 07.06.2009, 14:30
In the affine rank minimization problem, the goal is to minimize the
rank of a matrix subject to a set of affine constraints. This is in
general an NP-hard problem. We study a convex relaxation of this problem where
the rank objective function is replaced by the gamma_2 norm. The
gamma_2 norm can be viewed as a weighted version of the trace norm and can be
expressed as a semidefinite program.
We show that, given certain conditions on the constraint matrices, if the...
[ Full version ]
Arnab Bhattacharyya, (MIT)
Wednesday, 03.06.2009, 15:00
Let f_{1},f_{2}, f_{3} : F_2^n -> {0,1} be three Boolean functions. We
say a triple (x,y,x+y) is a triangle in the function triple (f_{1},
f_{2}, f_{3}) if f_{1}(x)=f_{2}(y)=f_{3}(x+y)=1. (f_{1}, f_{2}, f_{3})
is said to be triangle-free if there is no triangle among the three
functions. The distance between a function triple and triangle-free
functions is the minimum fraction of the function values one needs to
modify in order to make the function triple triangle-fre...
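For intuition only, triangle-freeness can be checked by brute force for tiny n (a hypothetical Python sketch, unrelated to the testing algorithm studied in the talk):

```python
from itertools import product

def has_triangle(f1, f2, f3, n):
    """f1, f2, f3 map n-bit tuples to 0/1. Returns True iff some pair
    (x, y) satisfies f1(x) = f2(y) = f3(x + y) = 1, where x + y is
    coordinate-wise XOR, i.e. addition over F_2^n."""
    points = list(product((0, 1), repeat=n))
    for x in points:
        if not f1(x):
            continue
        for y in points:
            if f2(y) and f3(tuple(a ^ b for a, b in zip(x, y))):
                return True
    return False

# Indicator functions of single points in F_2^2: (0,1) + (1,0) = (1,1).
print(has_triangle(lambda x: x == (0, 1),
                   lambda y: y == (1, 0),
                   lambda z: z == (1, 1), n=2))  # True
```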
[ Full version ]
Monday, 01.06.2009, 18:30
In this talk I describe several techniques that we developed to support the
generation of high quality code for the Cell Broadband Engine, addressing
some
of its key challenges and advantages.
The architecture of the Cell Broadband Engine developed jointly by Sony,
Toshiba, and IBM, represents a new direction in processor design.
In addition to a PowerPC-compatible Power Processor Element (PPE) the Cell
architecture features an array of eight Synergistic Processor
Elements...
[ Full version ]
Monday, 01.06.2009, 10:30
Geometric modeling deals with representing real objects in a virtual world. A popular geometric representation is the polygonal model. For many applications the need arises to improve the model while preserving its intrinsic geometric property, that it still describes the same "real" object. In this talk, we present a few applications which, although quite different, use similar geometric concepts and mathematical machinery, the most noticeable being the notion of conformality...
[ Full version ]
Monday, 25.05.2009, 18:30
OpenCL is the first cross-architecture, cross-OS, open standard for
parallel computing on heterogeneous systems. It targets GPUs, CPUs,
and other processing devices (like DSPs and accelerators), providing
a uniform programming model of the system. OpenCL is designed to
support a wide range of usages, from GPGPU GFX usages (like Physics)
to High Performance Computing. This introduction will present OpenCL
Programming Model (platform, memory, compilation, etc.) & the OpenCL...
[ Full version ]
Daniel Reem (Mathematics, Technion)
Sunday, 24.05.2009, 13:00
Taub 401 (note special room)
Voronoi diagrams appear in many areas in science and technology and have
diverse applications. Roughly speaking, they are a certain decomposition of
a given space into cells, induced by a distance function and by a tuple of
subsets called the generators or the sites. Voronoi diagrams have been the
subject of extensive research during the last 35 years, and many algorithms
for computing them have been published. However, these algorithms are for
specific cases. They impose v...
[ Full version ]
Stas Goferman (EE, Technion)
Thursday, 21.05.2009, 11:30
Collages have been a common form of artistic expression since their first appearance in China around 200 BC. Recently, with the advance of digital cameras and digital image editing tools, collages have gained popularity also as a summarization tool. We propose an approach for automating collage construction, which is based on assembling exact cutouts of salient regions in a puzzle-like manner.
We show that this approach produces collages that are informative, compact, and eye-ple...
[ Full version ]
Monday, 18.05.2009, 18:30
gdb is one of the more powerful tools that you have as a programmer in
the UNIX environment. This talk is an introduction to gdb and how to make it work for you.
This is the second part of a two-meeting series.
In the first meeting we covered everything up to the section titled 'Where Have My Source Files Gone?' in the slides. In the second meeting we'll go over the rest (writing gdb macros, debugging multi-threaded programs and debugging multi-process prog...
[ Full version ]
Eli Ben-Sasson (Computer Science, Technion)
Wednesday, 13.05.2009, 15:00
An affine disperser over a field F for sources of dimension d is a
function f: F^n -> F that is nonconstant (i.e., takes more than one
value) on inputs coming from a d-dimensional affine subspace of F^n.
Affine dispersers have been considered in the context of deterministic
extraction of randomness from structured sources of imperfect
randomness, and in particular, generalize the notion of dispersers for
bit-fixing sources. Previously, explicit constructions ...
[ Full version ]
Srinivasa Narasimhan (Robotics Institute Carnegie Mellon University)
Tuesday, 12.05.2009, 11:30
Light sources and cameras are optical duals: sources emit light rays while the cameras capture them. This talk will argue that light sources can serve as better cameras in many applications, advancing the state of the art in computer vision. (a) By moving a light source instead of a camera, we show how to reconstruct highly intricate shapes like wreaths, corals and tree branches from hundreds of 'views'. (b) We leverage the 'illumination dithering' in the micromirror array of DLP ...
[ Full version ]
Tamir Tuller (School of Computer Science & Department of Molecular Microbiology and Biotechnology, Tel-Aviv University)
Thursday, 07.05.2009, 13:30
Translation Efficiency (TE) is a basic process of favoring codons with higher levels of tRNAs. In this talk, I will survey three recent results that are related to studying TE from a computational/systems biological aspect:
a) TE in humans is efficient: It is believed that in many unicellular organisms codon bias has evolved to optimize TE. Previous studies, however, have left the question of TE in humans an intriguingly open one. We perform the first large scale t...
[ Full version ]
Shachar Lovett (Weizmann Institute of Science)
Wednesday, 06.05.2009, 15:00
The widely held belief that BQP strictly contains BPP raises fundamental
questions: Upcoming generations of quantum computers might already be
too large to be simulated classically. Is it possible to experimentally
test that these systems perform as they should, if we cannot efficiently
compute predictions for their behavior? Vazirani has asked [Vaz07]: If
computing predictions for Quantum Mechanics requires exponential resources,
is Quantum Mechanics a falsifiable the...
[ Full version ]
Elad Eban (Hebrew University)
Wednesday, 22.04.2009, 15:30
The widely held belief that BQP strictly contains BPP raises fundamental
questions: Upcoming generations of quantum computers might already be
too large to be simulated classically. Is it possible to experimentally
test that these systems perform as they should, if we cannot efficiently
compute predictions for their behavior? Vazirani has asked [Vaz07]: If
computing predictions for Quantum Mechanics requires exponential resources,
is Quantum Mechanics a falsifiable the...
[ Full version ]
Gabriel Valiente (Technical University of Catalonia, the University of the Balearic Islands, and the Centre for Genomic Regulation, Barcelona)
Wednesday, 22.04.2009, 13:30
Bioinformatics Forum: It is well known that the string edit distance and the alignment of strings
coincide, while the alignment of trees differs from the tree edit distance. In this talk, we recall
various constraints on directed acyclic graphs that allow for a unique (up to isomorphism)
representation, called the path multiplicity representation, and present a new method for the
alignment of trees and directed acyclic graphs that exploits the path multiplicity representat...
[ Full version ]
Eugene V. Koonin (National Center for Biotechnology Information, NLM, NIH, USA)
Thursday, 02.04.2009, 13:30
Comparative genomics and systems biology reveal several surprising universals of genome evolution such as the distribution of evolution rates across genes, the distribution of paralogous gene family size, and connections between expression and evolution rate. The existence of these universals calls for simple, general models of evolutionary processes akin to those used in statistical physics (eg, birth and death processes), and at least in some cases, such models seem to explain t...
[ Full version ]
Tamir Hazan (School of Engineering and Computer Science, The Hebrew University of Jerusalem)
Tuesday, 31.03.2009, 11:30
We derive a one-parameter local message-passing algorithm, called "norm-product", which covers both the tasks of computing approximate marginal probabilities and maximum a posteriori (MAP) assignment for general graphical models. A parameter $\epsilon$ controls a perturbation term of a "fractional entropy approximation" $\tilde H$ which includes Bethe, Tree-reweighted (TRW) and convex entropy approximations. When $\tilde H$ is the Bethe approximation, the settings $\epsilon=0$ and...
[ Full version ]
Orna Agmon Ben-Yehuda (CS, Technion)
Monday, 30.03.2009, 18:30
OpenMP is a standard for shared-memory parallelization of Fortran and C/C++. GCC 4.2 implements this standard in the GOMP library. In the previous lecture we reviewed some of the basic constructs. In this talk we will discuss how to approach the task of using OpenMP in legacy code, and address some common implementation dilemmas the programmer is bound to hit.
...
[ Full version ]
Prahladh Harsha (UT Austin & Technion)
Wednesday, 25.03.2009, 15:00
In a recent breakthrough, Moshkovitz and Raz [MR] proved that NP can be
verified using two queries, sub-constant error, and nearly linear
proof length. The main technical component of their result is a new
composition theorem for certain specific 2-query PCPs. All earlier
composition theorems suffered from incurring an additional query per
composition.
We abstract their proof and prove a generic 2-query PCP composition
theorem with low error. More formally, we de...
[ Full version ]
Leon Peshkin (Systems Biology, Harvard Medical School, MA, USA)
Wednesday, 25.03.2009, 14:30
Shelter room, Biology Department, Technion
Fluorescent microscopy imaging has become one of the main tools of biological research. Most wet labs collect Gigabytes of images every day. The resulting data is almost exclusively annotated and analyzed by hand.
Such data contains a wealth of information potentially relevant to a variety of research questions. Yet, due to the labor-intensive nature of data analysis and huge volume, the precious data is discarded without a chance to
realize its full scientific value. A typical ...
[ Full version ]
Daniel Freedman (HP Labs)
Tuesday, 24.03.2009, 11:30
The Mean Shift procedure is a well established clustering technique that is widely used in imaging applications such as image and video segmentation, denoising, object tracking, texture classification, and others. However, the Mean Shift procedure has relatively high time complexity which is superlinear in the number of data points. In this paper we present a novel fast Mean Shift procedure which is based on the random sampling of the Kernel Density Estimate (KDE). We show theoret...
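For reference, a single Mean Shift update with a Gaussian kernel over a random subsample of the data (a schematic hypothetical sketch; the talk's fast procedure and its theoretical guarantees go beyond this):

```python
import math
import random

def mean_shift_step(x, sample, bandwidth):
    """One Mean Shift update of point x toward a mode of the KDE built
    from `sample` (a random subset of the data) with a Gaussian kernel."""
    weights = [math.exp(-sum((a - b) ** 2 for a, b in zip(x, s))
                        / (2 * bandwidth ** 2)) for s in sample]
    total = sum(weights)
    return tuple(sum(w * s[i] for w, s in zip(weights, sample)) / total
                 for i in range(len(x)))

data = [(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(1000)]
sample = random.sample(data, 100)          # random sampling of the KDE
x = (2.0, 2.0)
for _ in range(20):
    x = mean_shift_step(x, sample, bandwidth=0.8)
print(x)  # moves toward the densest region of the sample
```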
[ Full version ]
Boris N. Kholodenko (Systems Biology Institute, University College Dublin, Ireland)
Thursday, 19.03.2009, 14:30
Extracellular information received by plasma membrane receptors is encoded into complex temporal and spatial patterns of phosphorylation and topological relocation of signaling proteins. Integration of this information by protein kinase cascades creates the spatio-temporal code that confers signaling specificity and leads to important decisions that determine cell’s fate. Aberrant processing of signalling information is a leading cause of many human diseases that range from deve...
[ Full version ]
Klim Efremenko, Weizmann Institute
Wednesday, 18.03.2009, 18:30
Locally Decodable Codes (LDC) allow one to decode any particular
symbol of the input message by making a constant number of queries
to a codeword, even if a constant fraction of the codeword is
damaged. In a recent work [Yekhanin08], Yekhanin constructs a
$3$-query LDC with sub-exponential length of size
$\exp(\exp(O(\frac{\log n}{\log\log n})))$. However, this
construction requires a conjecture that there are infinitely many
Mersenne primes. In this paper we give the ...
[ Full version ]
Wednesday, 18.03.2009, 14:00
Translation validation is a technique for formally establishing the semantic
equivalence of a source and a target of a code generator/compiler. In this
work we present a translation validation tool for the Real-Time Workshop code
generator that receives as input Simulink models and generates optimized C
code.
The lecture will begin with a demonstration of the Simulink models format and
automatically generated C code. It will then focus on the verification
condition that is ...
[ Full version ]
Sagi Snir (Institute of Evolution, Haifa University)
Thursday, 12.03.2009, 13:30
Bioinformatics Forum: The reconstruction of evolutionary trees (also known as “phylogenies”) is central to many problems in Biology. With the massive amounts of molecular data being produced, attention has increasingly turned to phylogenetic analyses of larger datasets, containing several hundreds to several thousand nucleotide sequences. Accordingly, a new program, “Assembling the Tree of Life”, has set the goal of producing a highly accurate estimate of the evolutionary...
[ Full version ]
Sunday, 01.03.2009, 11:30
The direct product code encodes the truth table of a function f:U-->R by
a function f^k:U^k-->R^k, which lists the values of f on all k-tuples of
inputs. We study (approximate) proximity testing to this code in the
"list decoding" regime.
We give a 3-query tester for distance 1-exp(-k) (which is impossible
with 2 queries). We also give a 2-query tester for distance 1-1/poly(k)
for a derandomized version of this code (of polynomial rate).
We then...
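For concreteness, the encoding itself is simple to write down (a hypothetical sketch; the testers above query only a few positions of this table):

```python
from itertools import product

def direct_product(f, U, k):
    """Return the k-wise direct product encoding of f: a table mapping
    every k-tuple (u_1, ..., u_k) in U^k to (f(u_1), ..., f(u_k))."""
    return {xs: tuple(f(x) for x in xs) for xs in product(U, repeat=k)}

# Example: f is squaring over a tiny universe, with k = 2.
table = direct_product(lambda x: x * x, U=range(3), k=2)
print(table[(1, 2)])  # (1, 4)
```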
[ Full version ]
Shahar Dag (CS, Technion)
Thursday, 19.02.2009, 13:00
SSDL Laboratory, Taub Build., Floor 2
LINUX EVENT: We will discuss ways to promote the use of Linux among students and to improve the SSDL service.
Prof. Yossi Gil will give a talk titled: "How to avoid in Linux programming".
...
[ Full version ]
Tuesday, 17.02.2009, 11:30
The common method of reconstructing a turbulence-degraded scene is through the creation of an artificial reference image. The reference image is usually obtained by averaging the video over time. Using optical flow from that reference image to the input images gives rise to applications such as super-resolution, tracking and so forth.
However, this technique suffers from several drawbacks: the resulting artificial reference frame is blurred, so the calculated optical-flow fields are not prec...
[ Full version ]
Monday, 16.02.2009, 18:30
This lecture is a sequel to the Linux Kernel Networking lecture, Advanced Linux Kernel Networking - Neighboring Subsystem; IPSec and IPv6 in the Linux Kernel (Advanced Linux Kernel Networking).
- Short history of Linux Wireless : fullmac and softmac.
- Wireless Modes of operation: infrastructure mode, independent mode (IBSS), and more.
- The Linux Wireless stack (mac80211).
- MLME (the management layer).
- MLME operations: scanning, association....
[ Full version ]
Enav Weinreb (CS, Technion)
Sunday, 15.02.2009, 12:00
We consider the following question: given a two-argument boolean
function $f$, represented as an $N\times N$ binary matrix, how
hard is it to determine the (deterministic) communication complexity
of $f$?
We address two aspects of this question. On the computational side,
we prove that, under appropriate cryptographic assumptions
(such as the intractability of factoring), the deterministic
communication complexity of $f$ is hard to approximate to within
some...
[ Full version ]
Thursday, 12.02.2009, 11:30
We address the open engineering problem of blind separation of time/position varying mixtures.
We show that there exists a large family of such mixtures that are separable without having prior
information about the sources or the mixing system. Unlike studies concerned with instantaneous
or convolutive mixtures, we relax the very restrictive, widely used constraint of stationarity in
time and position, and deal with the more practical and difficult problem of a mixing syste...
[ Full version ]
Shay Kels (Mathematics, Tel Aviv University)
Sunday, 08.02.2009, 13:00
In this talk we present an algorithm that applies segment Voronoi diagrams
and planar arrangements to the computation of the metric average of two
simple polygons. The algorithm is relevant to approximation of set-valued
functions. We describe the implementation of the algorithm and present a
collection of computational examples. Based on the computational framework
of the algorithm, the connectedness of the metric average of two simple
polygons is studied. Finally, we exten...
[ Full version ]
Ishai Haviv (Tel Aviv University)
Sunday, 08.02.2009, 12:00
We show a connection between the semi-definite relaxation of unique games and their behavior under parallel repetition.
Specifically, denoting by val(G) the value of a two-prover unique game G, and by sdpval(G) the value of a natural
semi-definite program to approximate val(G), we prove that for every large enough $\ell$, if sdpval(G) is at least
$1-\delta$, then the value of the $\ell$-times parallel repeated version of G is at least $sdpval(G)^{s \ell}$
where $s=O(\log(k...
[ Full version ]
Fima Koreban (EE, Technion)
Tuesday, 03.02.2009, 11:30
Stray light reflected by lens surfaces creates flare, which affects the image. A pronounced form of this flare is aperture ghosting, where bright spots that resemble the shape of the lens aperture are overlaid on the image. This might disrupt image analysis. It occurs when a bright narrow source (usually the Sun) is in the vicinity of the field of view, though often the source may be outside the actual viewed field. This paper analyzes the geometry of this phenomenon. It theoretic...
[ Full version ]
Monday, 02.02.2009, 18:30
I'll give an overview of the Slashdot editorial process, and demonstrate the back-end tools (all running on Linux, mostly written in Perl, with an increasingly Ajax-based user interface) that we use to organize hundreds of submissions each day to select the handful that we post to the page. I'll also explain some of the software-based methods we use to identify and avoid abusive users, and answer any questions that I can. ...
[ Full version ]
Amit Ben-David (Industrial Engineering, Technion)
Sunday, 01.02.2009, 13:00
View-Dependent Texture Projection Mapping (VDTPM) is a technique used to
render a photorealistic novel view of a scene based on a number of aerial
photographs of the scene and an authentic 3D model of the scene.
Unlike traditional texture mapping, where the texture is "pasted" on the 3D
polygons, here the texture is placed on the object using texture projection.
Texture projection is a technique which uses the camera model of the
photograph to create a projective ...
[ Full version ]
Shlomo Moran (Computer Science, Technion)
Thursday, 29.01.2009, 14:30
Distance based reconstruction methods of phylogenetic trees are split into two independent parts: first, inter-species distances are inferred using some stochastic model of sequence evolution; then the inferred distances are used to construct a tree. Most research in this area focuses on the second task. This talk concentrates on the task of inter-species distance estimation, and provides a general characterization of distance functions for stochastic substitution models. Then it ...
[ Full version ]
Oded Regev (Tel Aviv University)
Sunday, 25.01.2009, 12:00
We consider one-round games between a classical verifier and two provers who share entanglement.
We show that when the constraints enforced by the verifier are `unique' constraints (i.e., permutations),
the value of the game can be well approximated by a semidefinite program. Essentially the only
algorithm known previously was for the special case of binary answers, as follows from the work
of Tsirelson in 1980. Among other things, our result implies that the variant of th...
[ Full version ]
Zvi Devir (Computer Science, Technion)
Monday, 19.01.2009, 18:30
Microsoft's End User License Agreement (EULA) allows customers to return unused copies of the
Windows operating system to the manufacturer, for a refund or credit. Not surprisingly,
the laptop manufacturers are not too happy with that, and they place obstacles before customers who
would like to get a refund.
In this lecture I will talk about the first successful Windows Vista refund case in Israel.
...
[ Full version ]
Osnat Tal (Computer Science, Technion)
Sunday, 18.01.2009, 13:00
Heilbronn's classic triangle problem is to maximize the area of the
smallest of the $\binom{n}{3}$ triangles determined by $n$ points in the
unit square. One aspect of the problem is finding the optimal locations
of the points. Another issue is finding lower and upper bounds on the
smallest triangle's area in the optimal configuration(s) of points.
With respect to the first aspect, mathematicians and computer scientists
have worked in the last decades to find good c...
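The quantity being optimized is easy to state in code (a hypothetical sketch for evaluating a given configuration, not for finding an optimal one):

```python
from itertools import combinations

def smallest_triangle_area(points):
    """points: (x, y) pairs in the unit square. Returns the area of the
    smallest of the C(n,3) triangles determined by the points."""
    def area(p, q, r):
        return abs((q[0] - p[0]) * (r[1] - p[1])
                   - (r[0] - p[0]) * (q[1] - p[1])) / 2.0
    return min(area(p, q, r) for p, q, r in combinations(points, 3))

# A collinear triple forces the minimum area down to zero.
print(smallest_triangle_area([(0, 0), (0.5, 0.5), (1, 1), (0, 1)]))  # 0.0
```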
[ Full version ]
Tamar Ziegler (Mathematics, Technion)
Sunday, 18.01.2009, 12:00
The Gowers uniformity norms U_k measure a certain kind of pseudorandomness.
For example, a function f on a finite (large) dimensional vector space V over a
finite field F has small U_2 norm if and only if all its Fourier coefficients are
small, i.e., it has no significant correlation with linear phase functions.
The content of the inverse conjecture for the Gowers norms is that a similar relation
exists between the U_k norms and polynomials of degree smaller th...
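For reference, the U_2 norm mentioned above has an explicit formula (standard definition; the talk's normalization may differ slightly): for f : V -> C,
$\|f\|_{U^2}^4 = E_{x,h_1,h_2 \in V}\, f(x)\,\overline{f(x+h_1)}\,\overline{f(x+h_2)}\,f(x+h_1+h_2)$,
and for bounded f this is small exactly when all Fourier coefficients of f are small, matching the U_2 statement above.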
[ Full version ]
Shai Bagon (The Weizmann Institute of Science)
Tuesday, 13.01.2009, 11:30
There is a huge diversity of definitions of "visually meaningful" image segments, ranging from simple uniformly colored segments, textured segments, through symmetric patterns, and up to complex semantically meaningful objects.
This diversity has led to a wide range of different approaches for image segmentation.
We present a single unified framework for addressing this problem - "Segmentation by Composition".
We define a good image segment as one which can be easily...
[ Full version ]
Sunday, 11.01.2009, 12:00
Computing the Fourier transform is a basic building block used in
numerous applications. For data intensive applications, even the O(N log
N) running time of the Fast Fourier Transform (FFT) algorithm may be too
slow, and sub-linear running time is necessary. Clearly, outputting the
entire Fourier transform in sub-linear time is infeasible, nevertheless,
in many applications it suffices to find only the \tau-significant
Fourier transform coefficients, that is, the Four...
[ Full version ]
Elhanan Borenstein (Stanford University & Santa Fe Institute)
Thursday, 08.01.2009, 13:30
The topology of metabolic networks may provide important insights not only into the metabolic capacity of species, but also into the habitats in which they evolved. In this talk I will present several analyses of metabolic networks and show how various ecological insights can be obtained from genomic-based data.
I will first introduce various environmental and genetic factors that affect the structure of metabolic networks. I will then present the first large-scale co...
[ Full version ]
Evgeniy Bart (Computational Vision Lab, California Institute of Technology)
Tuesday, 06.01.2009, 11:30
Organizing images is crucial for dealing efficiently with large image collections. In this talk, I will explore approaches to such an organization and its benefits. I introduce a non-parametric Bayesian model called TAX (similar to NCRP), which can organize images into a tree-shaped taxonomy
in an unsupervised manner.
The main conclusions are:
(a) images can be organized automatically, in a completely unsupervised manner;
(b) this organization is intuitively appealin...
[ Full version ]
Sunday, 04.01.2009, 18:30
Building hardware to run a Linux distro used to be quite tricky, involving plenty of trial and error with components while filing bug reports in various places. In the last year, hardware has reached the point where it is easier to build a Linux system that just works.
Cathy Malmrose, a Linux hardware builder from Berkeley, California, US will share what she has learned from various OEMs and will give a broad view of how and where to get Linux-ready hardware. She will ...
[ Full version ]
Sunday, 04.01.2009, 12:00
We consider private data analysis in the setting in which a trusted and trustworthy curator,
having obtained a large data set containing private information, releases to the public a
"sanitization'' of the data set that simultaneously protects the privacy of the individual
contributors of data and offers utility to the data analyst. We focus on the case where the
process is non-interactive; once the sanitization has been released the original data and the
curator play no...
[ Full version ]
Adi Shamir (The Weizmann Institute of Science)
Thursday, 01.01.2009, 14:30
In this talk I will introduce a new kind of attack (called Cube Attack)
on cryptographic schemes which can be represented by an (unknown)
low degree polynomial with tweakable public variables such as a plaintext
or IV and fixed secret variables such as a key. Its complexity is
exponential in the degree but only polynomial in the key size, and it was
successfully applied to several concrete cryptographic schemes.
The talk will be self contained, requiring no prior knowledg...
[ Full version ]