3D sensing and mapping using mobile color and depth sensors
Pahwa, Ramanpreet Singh
Description
- Title
- 3D sensing and mapping using mobile color and depth sensors
- Author(s)
- Pahwa, Ramanpreet Singh
- Issue Date
- 2017-06-20
- Director of Research
- Do, Minh N.
- Doctoral Committee Chair(s)
- Do, Minh N.
- Committee Member(s)
- Hoiem, Derek
- Hasegawa-Johnson, Mark A.
- Lu, Jiangbo
- Department of Study
- Electrical and Computer Engineering
- Discipline
- Electrical and Computer Engineering
- Degree Granting Institution
- University of Illinois at Urbana-Champaign
- Degree Name
- Ph.D.
- Degree Level
- Dissertation
- Keyword(s)
- Depth cameras
- Calibration
- Object proposals
- Image stitching
- Cylindrical image
- Abstract
- An important recent development in visual information acquisition is the emergence of low-cost depth sensors that measure the scalar distance between the camera and the objects in the scene. These sensors project infrared light, invisible to humans, to measure the scene distance at each pixel. They have already had a significant impact on fields such as computer vision, gaming, augmented reality, and robotic vision. However, like every new technology, depth cameras suffer from severe limitations: low resolution, significant noise, lens distortion, and an inability to work outdoors. Depth cameras must be calibrated accurately before they can be used alongside color cameras for tasks such as 3D reconstruction, action recognition, scene sensing, and augmented and virtual reality. This thesis investigates novel methods to measure and correct for these distortions and to use the denoised measurements in various vision applications. In particular, we tackle the following three problems.
First, we propose a novel algorithm that takes a few depth images and uses them to simultaneously denoise and calibrate time-of-flight depth cameras. Our formulation rests on two key elements: we first use depth planarization in 3D to denoise the depth at each corner pixel, and we then use these improved depth measurements, along with the corner pixel locations, to estimate the calibration parameters with a non-linear estimation algorithm. We demonstrate that our framework estimates the intrinsic and extrinsic calibration parameters more accurately, using fewer images and corners than traditional camera calibration requires. We evaluate our approach both on a synthetic dataset, where ground truth is available, and on real data from a photon mixing device (PMD) camera. In both cases, our framework outperforms traditional calibration techniques without a significant increase in computational complexity.
Second, we combine the depth information provided by such cameras with color information to generate 3D object proposals in a given scene. We take a generic 2D object proposal technique as input and perform depth-based filtering to create a heatmap of each frame by exploiting the scene geometry; we further use these heatmaps to remove any supporting planes present in the scene. We then fuse the per-frame heatmaps in 3D using the camera pose to build a 3D point cloud of the scene, assigning each point a ranking based on its importance. Finally, we perform density-based clustering on the top-ranked points to compute precise 3D bounding boxes that have a high probability of containing an object of interest.
Third, we integrate depth sensors and the external geometry of the scene to robustly stitch images captured in a cylindrical tunnel where the camera moves forward in a spiral fashion. We use structure-from-motion (SfM) to estimate the camera pose between adjacent frames, exploit the scene geometry to identify outliers among matching points, and apply bundle adjustment (BA) to refine the pose. Depth sensors attached to the color camera provide the camera's translation. Finally, we create an immersive 3D display in the Unity 3D rendering engine that presents the stitched scenes in a cylindrical projection, through which the user can fly using keyboard and mouse controls.
In future work, we intend to improve bundle adjustment for automatic stitching of tunnel-like scenes by exploiting the known geometry of the scene to make it more robust to outliers.
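To make the first contribution concrete, here is a minimal sketch of the plane-based denoising step, assuming a simple pinhole model: checkerboard corners are known to lie on one physical plane, so a plane is fit to their noisy 3D back-projections and each corner depth is replaced by its value along the pixel ray on that plane. All function names and parameters here are illustrative assumptions, not the thesis implementation.

```python
# Sketch: plane-based depth denoising for checkerboard corners (assumed pinhole model).
import numpy as np

def backproject(pixels, depths, fx, fy, cx, cy):
    """Back-project (u, v) pixel coordinates with depth into 3D camera coordinates."""
    x = (pixels[:, 0] - cx) * depths / fx
    y = (pixels[:, 1] - cy) * depths / fy
    return np.stack([x, y, depths], axis=1)

def fit_plane(points):
    """Least-squares plane through 3D points via SVD; returns (normal, offset)."""
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid)
    normal = vt[-1]                          # direction of least variance
    return normal, normal @ centroid

def denoise_depths(pixels, depths, fx, fy, cx, cy):
    """Snap each noisy corner depth onto the fitted checkerboard plane."""
    pts = backproject(pixels, depths, fx, fy, cx, cy)
    n, d = fit_plane(pts)
    # Ray through (u, v) is t * [(u-cx)/fx, (v-cy)/fy, 1]; solve n.(t*ray) = d for t.
    rays = np.stack([(pixels[:, 0] - cx) / fx,
                     (pixels[:, 1] - cy) / fy,
                     np.ones(len(pixels))], axis=1)
    return d / (rays @ n)                    # denoised depth (ray z-component is 1)
```

The calibration parameters would then be re-estimated from the snapped corners with a non-linear solver such as Levenberg-Marquardt; that step is omitted here.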
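The second contribution's fusion-and-clustering stage can be sketched in the same spirit: per-frame proposal heatmaps are lifted into a world-frame point cloud using depth and camera pose, the top-ranked points are kept, and DBSCAN groups them into 3D boxes. The score threshold, eps, min_samples, and the (heatmap, depth, pose) input layout are illustrative assumptions.

```python
# Sketch: fuse per-frame proposal heatmaps into 3D and cluster into bounding boxes.
import numpy as np
from sklearn.cluster import DBSCAN

def lift_frame(heatmap, depth, K, pose):
    """Back-project one frame's heatmap into world coordinates with its scores."""
    h, w = depth.shape
    v, u = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    z = depth.ravel()
    ok = z > 0                                   # drop pixels with no depth reading
    pix = np.stack([u.ravel()[ok] * z[ok], v.ravel()[ok] * z[ok], z[ok]])
    cam = np.linalg.inv(K) @ pix                 # 3 x N points in the camera frame
    world = pose[:3, :3] @ cam + pose[:3, 3:4]   # camera-to-world rigid transform
    return world.T, heatmap.ravel()[ok]

def propose_boxes(frames, K, score_thresh=0.8, eps=0.1, min_samples=50):
    """Fuse all frames, keep top-ranked points, cluster into axis-aligned 3D boxes."""
    clouds = [lift_frame(hm, d, K, T) for hm, d, T in frames]
    pts = np.vstack([p for p, _ in clouds])
    scores = np.concatenate([s for _, s in clouds])
    top = pts[scores > score_thresh]             # high-confidence points only
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(top)
    return [(top[labels == c].min(axis=0), top[labels == c].max(axis=0))
            for c in np.unique(labels) if c != -1]   # -1 is DBSCAN's noise label
```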
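For the third contribution, a hedged sketch of the cylindrical projection applied before stitching: each output column corresponds to an angle on the viewing cylinder, and the source image is sampled by inverse-mapping cylinder coordinates back to planar pixels. The focal length f (in pixels) and nearest-neighbor sampling are simplifying assumptions; the SfM pose estimation and depth-based translation used in the thesis are omitted here.

```python
# Sketch: warp a planar image onto a cylinder of focal length f (in pixels).
import numpy as np

def warp_cylindrical(img, f):
    """Resample img into cylindrical coordinates via inverse mapping."""
    h, w = img.shape[:2]
    cx, cy = w / 2.0, h / 2.0
    out = np.zeros_like(img)
    ys, xs = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    theta = (xs - cx) / f                    # cylinder angle per output column;
                                             # keep f near the image width so |theta| < pi/2
    x_src = np.tan(theta) * f + cx           # inverse map back to source pixels
    y_src = (ys - cy) / np.cos(theta) + cy
    ok = (x_src >= 0) & (x_src < w) & (y_src >= 0) & (y_src < h)
    out[ys[ok], xs[ok]] = img[y_src[ok].astype(int), x_src[ok].astype(int)]
    return out
```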
- Graduation Semester
- 2017-08
- Type of Resource
- text
- Permalink
- http://hdl.handle.net/2142/98125
- Copyright and License Information
- Copyright 2017 Ramanpreet Singh Pahwa
Owning Collections
Dissertations and Theses - Electrical and Computer Engineering
Graduate Dissertations and Theses at Illinois (PRIMARY)