Events and Talks at the Henry and Marilyn Taub Faculty of Computer Science
Vadim Indelman - Georgia Tech
Tuesday, 19.03.2013, 11:30
Room 1061, Meyer Building, Faculty of Electrical Engineering
This talk will focus on efficient methods for single- and multi-robot
localization and structure-from-motion (SfM) related problems such as
mobile vision, augmented reality, and 3D reconstruction. High-rate
performance and high accuracy are challenging, in particular when
operating in large-scale environments, over long time periods, and in
the presence of loop closure observations. This challenge is further
compounded in multi-robot configurations, where communication and
computation budgets are limited and consistent information fusion
must be enforced. In this talk, I will describe approaches that
address these challenges.
First, I will present an incremental and computationally efficient
method for bundle adjustment that substantially reduces computational
cost compared to state-of-the-art bundle adjustment techniques. The
method, incremental light bundle adjustment (iLBA), incorporates two
key components. First, the observed 3D points are algebraically
eliminated, yielding a cost function formulated in terms of multi-view
constraints rather than projection equations, thereby reducing the
number of variables in the optimization. Although only the pose
variables (or navigation states) are optimized, the observed 3D
points, or any subset of them, can be reconstructed from the optimized
poses if required.
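To make the first component concrete, here is a minimal sketch in Python (NumPy) of the classical two-view epipolar constraint, one instance of the family of multi-view constraints mentioned above: the residual involves only the relative camera pose and the image observations, so the 3D point itself never appears as an optimization variable. The poses, point, and helper functions below are hypothetical, and iLBA's actual constraint formulation may differ.

```python
# Sketch: eliminating a 3D point via a two-view (epipolar) constraint.
# For calibrated views, q2^T E q1 = 0 with E = [t]_x R couples the two
# camera poses and the image observations without involving the point.
import numpy as np

def skew(t):
    """Skew-symmetric matrix such that skew(t) @ v == np.cross(t, v)."""
    return np.array([[0.0, -t[2], t[1]],
                     [t[2], 0.0, -t[0]],
                     [-t[1], t[0], 0.0]])

def rot_z(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

# Hypothetical relative pose of camera 2 with respect to camera 1.
R = rot_z(0.1)                 # small rotation about the optical axis
t = np.array([1.0, 0.2, 0.0])  # baseline between the cameras

# An arbitrary 3D point expressed in camera-1 coordinates.
X = np.array([0.5, -0.3, 4.0])

# Normalized (calibrated) image observations: q ~ [x/z, y/z, 1].
q1 = X / X[2]
x2 = R @ X + t                 # the same point in camera-2 coordinates
q2 = x2 / x2[2]

# The residual depends only on (R, t) and the observations; the 3D
# point X has been algebraically eliminated from the expression.
E = skew(t) @ R
residual = q2 @ E @ q1
print(f"epipolar residual: {residual:.2e}")  # ~0 up to round-off
```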
The second component is the recently developed incremental smoothing
approach, which uses graphical models to adaptively identify the
variables that need to be recomputed at each step. The described
method will be demonstrated in SfM and robot navigation scenarios.
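As a minimal sketch of what incremental smoothing looks like in practice, the following assumes the iSAM2 implementation available in GTSAM's Python bindings (the abstract does not name a specific library); the keys, poses, and noise parameters are made up for illustration. Each call to update() re-eliminates only the portion of the underlying graphical model affected by the new factors, rather than re-solving the whole problem.

```python
# Sketch: incremental smoothing over a small 2D pose chain (GTSAM assumed).
import numpy as np
import gtsam

isam = gtsam.ISAM2()
prior_noise = gtsam.noiseModel.Diagonal.Sigmas(np.array([0.1, 0.1, 0.05]))
odom_noise = gtsam.noiseModel.Diagonal.Sigmas(np.array([0.2, 0.2, 0.1]))

# Step 0: anchor the first pose with a prior.
graph = gtsam.NonlinearFactorGraph()
values = gtsam.Values()
graph.add(gtsam.PriorFactorPose2(0, gtsam.Pose2(0, 0, 0), prior_noise))
values.insert(0, gtsam.Pose2(0, 0, 0))
isam.update(graph, values)

# Steps 1..4: add one odometry factor at a time; only the variables
# affected by each new factor are recomputed inside update().
for k in range(1, 5):
    graph = gtsam.NonlinearFactorGraph()
    values = gtsam.Values()
    graph.add(gtsam.BetweenFactorPose2(k - 1, k, gtsam.Pose2(1, 0, 0),
                                       odom_noise))
    values.insert(k, gtsam.Pose2(float(k), 0, 0))  # initial guess
    isam.update(graph, values)

estimate = isam.calculateEstimate()
print(estimate.atPose2(4))  # smoothed estimate of the latest pose
```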
Next, I will give an overview of an approach for maintaining high-rate
performance in the presence of loop closure observations as well,
since these cannot, in the general case, be guaranteed to be processed
at a sufficiently high rate. The approach parallelizes computations by
partitioning the underlying graphical structure of the problem at hand.
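The partitioning idea can be illustrated with a toy example: conditioned on a separator variable, the two sides of a partitioned estimation problem share no variables and can be processed concurrently. This is only a sketch of the principle, using made-up 1-D odometry over a pose chain; the actual approach partitions the full graphical structure of the problem and is not reproduced here.

```python
# Toy sketch: split a 1-D pose chain at a separator and solve the two
# resulting subproblems concurrently.
import numpy as np
from concurrent.futures import ThreadPoolExecutor

# Chain p0..p5 with noisy relative measurements m[i] ~ p[i+1] - p[i].
m = np.array([1.0, 0.9, 1.1, 1.0, 0.8])  # made-up odometry
sep = 3                                   # partition the chain at p3
p_sep = m[:sep].sum()                     # current separator estimate

def solve_left():
    # Subproblem over p0..p3, anchored at p0 = 0.
    return np.concatenate(([0.0], np.cumsum(m[:sep])))

def solve_right():
    # Subproblem over p3..p5, anchored at the separator estimate.
    return p_sep + np.concatenate(([0.0], np.cumsum(m[sep:])))

# Given the separator, the subproblems are independent and can run
# in parallel.
with ThreadPoolExecutor(max_workers=2) as pool:
    left, right = pool.submit(solve_left), pool.submit(solve_right)
    estimate = np.concatenate((left.result(), right.result()[1:]))
print(estimate)
```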
The second part of the talk will focus on distributed multi-agent
localization and navigation. I will present approaches that exploit
commonly observed 3D points both to perform cooperative localization
and to extend the sensing horizon of the robots in the group. Special
consideration will be given to consistent information fusion, while
minimizing both the communication between the robots and the
computational burden of incorporating the information obtained from
other robots. Two methods will be described: sharing the estimated
distributions of observed 3D points, and sharing the actual (image)
observations of these points, thereby extending the iLBA approach.
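As a sketch of what consistent fusion can look like when the cross-correlation between robots' estimates is unknown, the following implements covariance intersection for two estimates of a commonly observed 3D point. The talk's fusion scheme is not necessarily covariance intersection, and all numbers below are made up; the point is that the fused estimate avoids double counting shared information for any unknown correlation.

```python
# Sketch: covariance intersection (CI) for fusing two estimates of the
# same quantity without knowing their cross-correlation.
import numpy as np
from scipy.optimize import minimize_scalar

def covariance_intersection(x1, P1, x2, P2):
    """Fuse (x1, P1) and (x2, P2); consistent for any unknown correlation."""
    I1, I2 = np.linalg.inv(P1), np.linalg.inv(P2)

    def fused_trace(w):
        # Trace of the fused covariance as a function of the weight w.
        return np.trace(np.linalg.inv(w * I1 + (1 - w) * I2))

    w = minimize_scalar(fused_trace, bounds=(0.0, 1.0), method="bounded").x
    P = np.linalg.inv(w * I1 + (1 - w) * I2)
    x = P @ (w * I1 @ x1 + (1 - w) * I2 @ x2)
    return x, P

# Two robots' estimates of the same 3D point (hypothetical values).
x_a, P_a = np.array([1.0, 2.0, 5.0]), np.diag([0.04, 0.04, 0.25])
x_b, P_b = np.array([1.1, 1.9, 4.8]), np.diag([0.09, 0.01, 0.16])
x_f, P_f = covariance_intersection(x_a, P_a, x_b, P_b)
print(x_f, np.diag(P_f))
```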