Events and Talks at the Henry and Marilyn Taub Faculty of Computer Science
John Noonan (M.Sc. Thesis Seminar)
Sunday, 14.02.2021, 12:00
Intelligent systems that can explore indoor buildings frequently and regularly benefit personnel operating remotely in security, manufacturing, or warehouse pack-and-ship settings. In this talk, I will present a new minimalistic approach to indoor exploration: minimal sensing, minimal prior map knowledge, and minimal underlying geometry needed to build a full visual scene representation. Our research combines the classical and deep learning worlds, harnessing the strengths of each: a single camera and a floorplan support both indoor localization and the construction of a full visual scene representation of the explored building, with a small robotic vehicle carrying out the exploration. We introduce a novel neural scene representation that scales to full indoor buildings for view synthesis, describing it with a space of local neural rendering functions across the building, which allows infusing meta-knowledge into the learning. Knowledge of performing neural rendering from various vantage points in the scene is shared by conditioning on similar building structure, accelerating learning for the full building. We demonstrate learning such a neural scene representation for view synthesis in around 15 minutes on a single commodity GPU, and rendering in real time at 64 Hz, enabling immersive visual experiences.
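The idea of a space of local rendering functions that share knowledge can be illustrated with a minimal sketch. Here we assume (hypothetically; the talk's actual representation is a learned neural model) a uniform 2D grid partition of the building, where each cell's "rendering function" is parameterized by a component shared across the whole building plus a small per-cell offset:

```python
import numpy as np

rng = np.random.default_rng(0)

CELL = 2.0      # metres per local cell (assumed grid partition, for illustration)
GRID_W = 10     # number of cells along the x axis

# Shared knowledge reused across the building, plus small per-cell refinements.
SHARED = rng.normal(size=(8,))
LOCAL = rng.normal(size=(GRID_W * GRID_W, 8)) * 0.1

def cell_index(xy):
    """Assign a 2D building position to the index of its local function."""
    i, j = (np.asarray(xy, dtype=float) // CELL).astype(int)
    return int(i * GRID_W + j)

def local_weights(xy):
    """Parameters of the local rendering function at position xy:
    a building-wide shared component plus the cell-specific offset,
    so nearby vantage points reuse most of their learned knowledge."""
    return SHARED + LOCAL[cell_index(xy)]
```

Because most parameters live in the shared component, learning in one cell benefits all others, which is one way the accelerated full-building training described above can be understood.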
Indoor exploration also requires accurate global positioning. We formulate a core methodology for integrating a floorplan with a monocular camera, forming the basis of our positioning systems, which resolve global position, orientation, and scale. We also present a theoretical analysis of planar criteria for the uniqueness of global localization solutions. We develop multiple algorithms for the necessary components of indoor localization: extracting planes from scale-ambiguous monocular 3D point clouds, associating extracted planes with floorplan walls, recovering the scale factor from wall-plane pairs, and integrating soft vehicle and floorplan constraints into an optimization that refines global poses.
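The scale-recovery step can be sketched as a one-parameter least-squares fit. This is a simplified, hypothetical formulation (the talk's actual algorithm may differ): given distances between paired wall planes measured in the scale-ambiguous monocular reconstruction, and the corresponding metric distances from the floorplan, the scale factor s minimizing the squared residual has a closed form:

```python
import numpy as np

def recover_scale(mono_dists, floorplan_dists):
    """Closed-form least-squares scale s minimizing ||s * d_mono - d_floor||^2.

    mono_dists: distances between paired wall planes in the scale-ambiguous
    monocular reconstruction (arbitrary units).
    floorplan_dists: the corresponding metric distances from the floorplan.
    Returns s = (d_mono . d_floor) / (d_mono . d_mono).
    """
    d = np.asarray(mono_dists, dtype=float)
    D = np.asarray(floorplan_dists, dtype=float)
    return float(d @ D / (d @ d))

# Example: if monocular units are exactly half a metre, s comes out as 2.0.
s = recover_scale([1.5, 2.0, 4.1], [3.0, 4.0, 8.2])  # -> 2.0
```

With more wall-plane pairs, outlier-prone associations would typically be handled with a robust variant (e.g. RANSAC over pairs) rather than this plain least squares.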
We introduce multiple modular global positioning systems, spanning both optimization-based and probabilistic approaches, and evaluate them on custom synthetic, simulated, and real-world datasets collected with a small robotic vehicle we designed and built.