Yuri Feldman, PhD seminar talk
Tuesday, 15.3.2022, 11:30
Advisor: Assoc. Prof. V. Indelman
Semantic perception is the process of acquiring and maintaining knowledge of a robot's (or, more generally, an embodied agent's) environment beyond geometric structure: capturing meaning, such as classes and other high-level properties of visible scene elements, as opposed to pure geometry. Semantic perception is key to enabling autonomous robots to operate in diverse, low-structured, and dynamic environments and alongside humans. In the past decade, semantic information has become increasingly available for robotics applications, thanks to advances in the processing of streams of raw data such as images and text, primarily with machine-learning-based methods, as well as a persistent increase in compute power. However, while well-established methods exist for estimating and maintaining the geometric structure of a partially observable environment from local measurements (Simultaneous Localization and Mapping, SLAM), these do not readily adapt to treating semantics.
We formulate the Semantic SLAM problem and develop novel approaches to address it. We show that Viewpoint-Dependent models can be used to tackle multiple challenges in treating semantic measurements: in particular, to account for correlation among time steps and for dataset shift, to facilitate data association, and to enable semi-supervised learning of representations, eliminating the need for discrete inference over semantics.
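To make the idea of a viewpoint-dependent semantic measurement model concrete, here is a minimal toy sketch, not the speaker's actual formulation: classifier scores are modeled with a Gaussian likelihood whose mean depends on both the object class and the relative viewing angle, and scores from several viewpoints are fused in a Bayesian class posterior. The class names, likelihood parameters, and score/angle values are all invented for illustration.

```python
import math

CLASSES = ["chair", "table"]  # hypothetical class set

def likelihood(score, cls, angle):
    """p(score | class, relative viewpoint angle), Gaussian in the score.
    The mean for 'chair' peaks at angle 0 -- an arbitrary modeling assumption."""
    mean = 0.9 * math.exp(-angle ** 2) if cls == "chair" else 0.5
    sigma = 0.1
    return math.exp(-0.5 * ((score - mean) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def update(belief, score, angle):
    """One Bayesian update of the class posterior from a single measurement."""
    post = {c: belief[c] * likelihood(score, c, angle) for c in CLASSES}
    z = sum(post.values())
    return {c: p / z for c, p in post.items()}

belief = {c: 1.0 / len(CLASSES) for c in CLASSES}  # uniform prior
# Fuse classifier scores observed from several relative viewpoints (angles in rad);
# because the likelihood is viewpoint-dependent, a low score from an unfavorable
# angle does not strongly penalize the true class.
for score, angle in [(0.85, 0.1), (0.4, 1.2), (0.8, 0.2)]:
    belief = update(belief, score, angle)
print(belief)
```

The same structure is what lets such models capture correlation across time steps: consecutive measurements of one object share the (smoothly varying) relative viewpoint, rather than being treated as independent draws from a single per-class score distribution.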