Events and Lectures at the Henry and Marilyn Taub Faculty of Computer Science
Tuesday, 10.05.2022, 11:00
This talk is about low-level fundamental visual motion cues that can help autonomous vehicles navigate in unknown structured and unstructured environments. Following bio-inspired and behavior-based observations and motivations, the talk focuses on relevant concepts and recent results obtained from simulated and real data.
Some of the visual cues, e.g., the "visual looming" cue, are environment, scale, and rotation independent, and are measured in time units. Obtaining the visual cues requires only a single camera and does not involve reconstruction of the 3D scene. Also, the ego-motion of the observer need not be known. If available, visual looming results can also be obtained using depth information, such as range from LiDAR.
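As a rough illustration of how a cue measured in time units can be obtained from range data, the short Python sketch below estimates looming under the common relative rate-of-approach definition, L = -(dR/dt)/R, whose inverse approximates time-to-contact. This is an assumption-based sketch, not necessarily the formulation used in the talk; the function name and numbers are hypothetical.

def looming_from_range(r_prev: float, r_curr: float, dt: float) -> float:
    """Estimate the looming cue from two consecutive range measurements.

    r_prev, r_curr: range to an object at two consecutive time steps (meters),
                    e.g., from LiDAR as mentioned in the abstract
    dt: time between the measurements (seconds)
    Returns looming in 1/seconds; positive values mean the object is approaching.
    """
    r_dot = (r_curr - r_prev) / dt    # finite-difference range rate
    return -r_dot / r_curr            # relative rate of approach

# Example: an obstacle whose range drops from 10.0 m to 9.5 m over 0.1 s
L = looming_from_range(10.0, 9.5, 0.1)   # about 0.53 1/s
time_to_contact = 1.0 / L                # about 1.9 s if the relative motion persists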
We share some analytical results for 6-DoF relative motion, showing how to estimate looming from raw data using optical flow. We also share simulation results showing how to obtain vision-based threat zones that can help agents avoid collisions by affecting the heading and speed of the observer. Multiple autonomous agents are simulated to show emergent collective behaviors.
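To make the optical-flow connection concrete, the minimal sketch below uses the textbook relation that, for motion along the optical axis toward a fronto-parallel surface, the divergence of the image motion field is roughly twice the looming (the inverse of time-to-contact). It is only an illustrative assumption built on OpenCV's Farneback dense flow, not the analytical 6-DoF results presented in the talk.

import cv2
import numpy as np

def looming_from_flow(frame_prev: np.ndarray, frame_curr: np.ndarray, dt: float) -> float:
    """Approximate looming from the mean divergence of dense optical flow.

    frame_prev, frame_curr: consecutive grayscale frames (uint8 arrays)
    dt: time between frames (seconds)
    Assumes approximately axial relative motion; parameters below are illustrative.
    """
    flow = cv2.calcOpticalFlowFarneback(
        frame_prev, frame_curr, None,
        pyr_scale=0.5, levels=3, winsize=15,
        iterations=3, poly_n=5, poly_sigma=1.2, flags=0)
    u = flow[..., 0] / dt                  # horizontal pixel velocity
    v = flow[..., 1] / dt                  # vertical pixel velocity
    du_dx = np.gradient(u, axis=1)         # partial derivative of u w.r.t. x
    dv_dy = np.gradient(v, axis=0)         # partial derivative of v w.r.t. y
    divergence = du_dx + dv_dy
    return 0.5 * float(np.mean(divergence))  # looming ~ divergence / 2 for axial motion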
Together with another visual cue, the angular velocity (motion field), the visual looming can be used to "map" space in terms of time, using spheres and cylinders that expand and shrink, depending on specific relative motions.
We conclude the talk by sharing some of the most recent results, specifically a visual constancy invariant and a cue that can be used to navigate in tunnels.
Bio:
Daniel Raviv received his B.Sc. and M.Sc. degrees from the Technion, and his Ph.D. from Case Western Reserve University in Cleveland, Ohio. He is a professor at Florida Atlantic University (FAU), where he is the Director of the Innovation and Entrepreneurship Lab. In the past, he served as the Assistant Provost for Innovation. Dr. Raviv taught at Johns Hopkins University, the Technion, and the University of Maryland, and was a visiting researcher at the National Institute of Standards and Technology (NIST) as part of a group that developed a vision-based driverless vehicle for the US Army (HUMVEE; 65 mph).
His related research work includes exploration of visual invariants that exist only during motion and can be used for real-time closed-loop control systems of cars and drones. He is also interested in teaching and learning innovative thinking, and how to teach innovatively. He is the author of five books: three on learning innovative thinking and two on teaching in visual, intuitive, and engaging ways.