Tom Avrech, M.Sc. Thesis Seminar
Sunday, 19.12.2021, 11:30
Advisors: Prof. E. Rivlin and Dr. Chaim Baskin
Autonomous scene exposure and exploration in localization- and communication-denied areas -- useful for finding targets in unknown scenes, especially when direct maneuvering of the vehicle is impossible -- remains a challenging problem in autonomous navigation.
In this work, we propose a novel deep-learning-based navigation approach that solves this problem, and we demonstrate its ability in an even more demanding setup, i.e., when computational power is limited.
Our method works directly on the RGB camera input, without requiring any expensive sensors, and produces two coordinates, which we call the "Goto pixel" and the "Lookat pixel", delineating the movement and perception directions, respectively.
These flying-instruction pixels are optimized to expose the largest possible amount of currently unexplored area.
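As a rough illustration of the interface described above, the sketch below shows one plausible way such a network's output could be decoded: two predicted heatmaps over the image plane, with the "Goto" and "Lookat" pixels taken as their argmax locations. The heatmap decoding, the function name, and the toy inputs are all assumptions for illustration; the abstract does not specify the network's architecture or output format.

```python
import numpy as np

def navigation_pixels(goto_heatmap, lookat_heatmap):
    """Decode the "Goto" and "Lookat" pixels as the argmax locations of two
    predicted heatmaps (hypothetical decoding; the actual output format of
    the method is not described in the abstract)."""
    goto = np.unravel_index(np.argmax(goto_heatmap), goto_heatmap.shape)
    lookat = np.unravel_index(np.argmax(lookat_heatmap), lookat_heatmap.shape)
    return goto, lookat

# Toy 4x4 "heatmaps" with a single peak each, standing in for network output.
g = np.zeros((4, 4)); g[1, 2] = 1.0
l = np.zeros((4, 4)); l[3, 0] = 1.0
print(navigation_pixels(g, l))  # ((1, 2), (3, 0))
```

In practice the two coordinates would then be converted into a movement direction and a gaze direction for the vehicle.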
In addition, we propose a way to generate a navigation-oriented dataset, enabling efficient training of our method using RGB and depth images.
Tests conducted in a simulator show promising results in terms of the amount of area unveiled and the distances to targets.