Pixel Club: Shape Reconstruction: From Axiomatic Coded Light to Learning Stereo

Speaker:
Ron Slossberg (CS, Technion)
Date:
Wednesday, 5.4.2017, 11:00
Place:
Room 337 Taub Bld.

1. Freehand Laser Scanning Using Mobile Phone
3D scanners are growing in popularity as many new applications and products become commodities. These applications are often tethered to a computer and/or require expensive, specialized hardware. In this chapter of the thesis we demonstrate that good 3D reconstruction is achievable on a mobile device. We describe a novel approach to mobile phone scanning that uses a smartphone together with a cheap laser pointer fitted with a cylindrical lens, which produces a line pattern; the pointer is attached to the phone by a 3D-printed adapter. Non-linear multi-scale line filtering detects the center of the projected laser beam in each frame with sub-pixel accuracy. The line location, coupled with the phone's position and orientation in 3D space, estimated from publicly available SLAM libraries and marker tracking, permits 3D reconstruction of a point cloud of the observed objects. Color and texture are extracted for every point along the scanned line by projecting the reconstructed points back onto previous keyframed images. We validate the proposed method by comparing its reconstruction error to ground truth obtained from an industrial laser scanner.
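The sub-pixel line detection step can be illustrated with a much simpler stand-in: fit a parabola through the three intensity samples around each column's brightest pixel and take its vertex as the refined line position. This is only a minimal sketch of the general idea; the talk's non-linear multi-scale filtering is not reproduced here, and the function name is hypothetical.

```python
import numpy as np

def subpixel_line_centers(frame):
    """Estimate the laser line's row in each column of a grayscale
    frame with sub-pixel accuracy, via a parabolic fit around the
    per-column intensity peak (a toy stand-in for the talk's
    non-linear multi-scale line filtering)."""
    rows, cols = frame.shape
    centers = np.empty(cols)
    for c in range(cols):
        col = frame[:, c]
        p = int(np.argmax(col))  # integer peak row
        offset = 0.0
        if 0 < p < rows - 1:
            denom = col[p - 1] - 2.0 * col[p] + col[p + 1]
            if denom != 0:
                # Vertex of the parabola through the three samples
                # around the peak; offset lies in (-0.5, 0.5).
                offset = 0.5 * (col[p - 1] - col[p + 1]) / denom
        centers[c] = p + offset
    return centers
```

With the line center known per column and the camera pose from SLAM, each center can then be triangulated against the known laser plane to yield a 3D point.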

2. Deep Stereo Matching with Dense CRF Priors
Stereo reconstruction from rectified images has recently been revisited within the context of deep learning. Using a deep Convolutional Neural Network to obtain patch-wise matching cost volumes has resulted in state-of-the-art stereo reconstruction on classic datasets like Middlebury and KITTI. By introducing this cost into a classical stereo pipeline, the final results are improved dramatically over non-learning based cost models. However, these pipelines typically include hand-engineered post-processing steps to effectively regularize and clean the result. Here, we show that it is possible to take a more holistic approach by training a fully end-to-end network which directly includes regularization in the form of a densely connected CRF that acts as a prior on inter-pixel interactions. We demonstrate that our approach, applied to both synthetic and real-world datasets, outperforms an alternative end-to-end network and compares favorably to less holistic approaches.
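The cost-volume idea underlying this line of work can be sketched with a classical baseline: build a per-pixel matching cost for every candidate disparity and pick the minimum (winner-take-all). This is an illustrative stand-in only; the talk replaces the hand-crafted cost with learned CNN patch costs and the winner-take-all step with a densely connected CRF trained end-to-end, and the function name here is hypothetical.

```python
import numpy as np

def wta_disparity(left, right, max_disp):
    """Winner-take-all disparity from an absolute-difference cost
    volume over rectified grayscale images (a classical baseline,
    not the learned cost described in the talk)."""
    h, w = left.shape
    # cost[d, y, x] = matching cost of assigning disparity d to (y, x).
    cost = np.full((max_disp + 1, h, w), np.inf)
    for d in range(max_disp + 1):
        # Compare each left pixel with the right pixel shifted d columns;
        # columns with no valid correspondence keep infinite cost.
        cost[d, :, d:] = np.abs(left[:, d:] - right[:, :w - d])
    return cost.argmin(axis=0)
```

Classical pipelines aggregate this raw cost over patches and regularize it afterwards; the end-to-end network described above folds that regularization into training instead.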

* Supervised by Professor Ron Kimmel
