GPS-LiDAR sensor fusion aided by 3D city models for UAVs
Shetty, Akshay Prabhakar
Description
- Title
- GPS-LiDAR sensor fusion aided by 3D city models for UAVs
- Author(s)
- Shetty, Akshay Prabhakar
- Issue Date
- 2017-04-27
- Director of Research (if dissertation) or Advisor (if thesis)
- Gao, Grace Xingxin
- Department of Study
- Aerospace Engineering
- Discipline
- Aerospace Engineering
- Degree Granting Institution
- University of Illinois at Urbana-Champaign
- Degree Name
- M.S.
- Degree Level
- Thesis
- Keyword(s)
- Global positioning system (GPS)
- Light detection and ranging (LiDAR)
- Unmanned aerial vehicles
- Sensor fusion
- Outdoor navigation
- Light detection and ranging (LiDAR) covariance
- Iterative closest point
- Abstract
- Recently, there has been an increase in outdoor applications for small-scale Unmanned Aerial Vehicles (UAVs), such as 3D modelling, filming, surveillance, and search and rescue. To perform these tasks safely and reliably, a continuous and accurate estimate of the UAV's position is needed. Global Positioning System (GPS) receivers are commonly used for this purpose. However, navigating in urban areas using only GPS is challenging, since satellite signals may be reflected or blocked by buildings, resulting in multipath errors or non-line-of-sight (NLOS) situations. In such cases, additional on-board sensors are desirable to improve global positioning of the UAV. Light Detection and Ranging (LiDAR), one such sensor, provides a real-time point cloud of its surroundings. In a dense urban environment, unlike in an open-sky environment, LiDAR detects a large number of features of surrounding structures, such as buildings. This characteristic of LiDAR complements GPS, which is accurate in open-sky environments but may suffer large errors in urban areas. To fuse GPS and LiDAR measurements, Kalman filtering and its variations are commonly used. However, it is important, yet challenging, to accurately characterize the error covariance of the sensor measurements.

In this thesis, we propose a GPS-LiDAR fusion technique with a novel method for efficiently modelling the error covariance in position measurements based on LiDAR point clouds. For GPS measurements, we eliminate NLOS satellites and model the covariance based on the measurement signal-to-noise ratio (SNR) values. We use the LiDAR point clouds in two ways: to estimate incremental motion by matching consecutive point clouds, and to estimate global pose by matching against a 3D city model. We aim to characterize the error covariance matrices in both cases as a function of the distribution of features in the LiDAR point cloud.

To estimate the incremental motion between two consecutive LiDAR point clouds, we use the Iterative Closest Point (ICP) algorithm. We perform simulations in different environments to showcase the dependence of ICP on features in the point cloud. While navigating in urban areas, we expect the LiDAR to detect structured objects, such as buildings, which are primarily composed of surfaces and edges. Thus, we develop an efficient way of modelling the error covariance in the estimated incremental position based on each surface and edge feature point in the point cloud. A surface point helps to estimate motion of the LiDAR perpendicular to the surface, while an edge point helps to estimate motion of the LiDAR perpendicular to the edge. We treat each feature point independently and combine their individual error covariances to obtain a total error covariance ellipsoid for the estimated incremental position.

For our 3D city model, we use elevation data of the State of Illinois available online and combine it with building information extracted from OpenStreetMap, a crowd-sourced mapping platform. We again use the ICP algorithm to match the LiDAR point cloud with our 3D city model, which provides us with an estimate of the UAV's global pose. Additionally, we use the 3D city model to determine and eliminate NLOS GPS satellites. We use the remaining pseudorange measurements from the on-board GPS receiver and a stationary reference receiver to create a vector of double-difference measurements, and we create a covariance matrix for this vector based on the SNR of the individual pseudorange measurements.

Finally, all of the above measurements and error covariance matrices are provided as inputs to an Unscented Kalman Filter (UKF), whose states include the globally referenced pose of the UAV. Before implementation, we perform an observability analysis for our filter. To validate our algorithm, we conduct UAV experiments in GPS-challenged urban environments on the University of Illinois at Urbana-Champaign campus. We observe that our model for the covariance ellipsoid from on-board LiDAR point clouds accurately represents the position errors and improves the filter output, and we demonstrate a clear improvement in the UAV's global pose estimates using the proposed sensor fusion technique.

(Illustrative code sketches of the per-feature LiDAR covariance model, the GPS double-difference construction, and the UKF fusion step follow the description fields below.)
- Graduation Semester
- 2017-05
- Type of Resource
- text
- Permalink
- https://hdl.handle.net/2142/97501
- Copyright and License Information
- Copyright 2017 Akshay Shetty
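
The abstract describes building the incremental-position error covariance from individual surface and edge feature points, where a surface point constrains motion along its normal and an edge point constrains motion perpendicular to the edge. The sketch below shows one plausible way to accumulate such per-feature contributions in information form; the per-point noise value sigma, the function name, and the exact combination rule are illustrative assumptions, not the thesis's formulation.

```python
import numpy as np

def icp_position_covariance(surface_normals, edge_directions, sigma=0.03):
    """Sketch: approximate the covariance ellipsoid of the ICP incremental
    position estimate from the geometry of the feature points.

    surface_normals : (Ns, 3) unit normals of surface feature points
    edge_directions : (Ne, 3) unit directions of edge feature points
    sigma           : assumed per-point range noise in metres (placeholder)
    """
    info = np.zeros((3, 3))
    # A surface point informs motion along its normal only.
    for n in surface_normals:
        info += np.outer(n, n) / sigma**2
    # An edge point informs motion in the plane perpendicular to the edge.
    for d in edge_directions:
        info += (np.eye(3) - np.outer(d, d)) / sigma**2
    # Covariance is the inverse of the accumulated information; the
    # pseudo-inverse guards against degenerate geometry (e.g. a single plane).
    return np.linalg.pinv(info)
```

With mostly coplanar surface points, the resulting ellipsoid is elongated parallel to the plane, matching the intuition that ICP is poorly constrained along directions with no opposing features.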
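The abstract also outlines forming double-difference pseudoranges from the on-board and stationary reference receivers and weighting them by SNR. A hedged sketch follows, assuming a generic SNR-based variance model; the constants a and b and the function name are placeholders rather than the thesis's values.

```python
import numpy as np

def double_difference(rho_rover, rho_base, snr_rover, snr_base,
                      ref_idx=0, a=0.1, b=150.0):
    """Sketch: GPS double-difference pseudoranges with an SNR-based covariance.

    rho_*   : (N,) pseudoranges to the N remaining (non-NLOS) satellites
    snr_*   : (N,) corresponding SNR values in dB-Hz
    ref_idx : index of the reference satellite
    """
    N = len(rho_rover)
    # Single differences (rover minus base) cancel common errors.
    sd = rho_rover - rho_base
    # Differencing matrix mapping single to double differences.
    D = np.delete(np.eye(N), ref_idx, axis=0)
    D[:, ref_idx] = -1.0
    dd = D @ sd
    # Per-satellite single-difference variances from an assumed SNR model,
    # then propagated through the differencing operator.
    var = (a + b * 10 ** (-snr_rover / 10)) + (a + b * 10 ** (-snr_base / 10))
    R = D @ np.diag(var) @ D.T
    return dd, R
```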
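Finally, the measurements and their modelled covariances are fed to an Unscented Kalman Filter over the globally referenced pose. The sketch below uses the filterpy library and reduces the state to position only for brevity; the process and measurement models, rates, and numeric values are placeholders, not the thesis's filter configuration.

```python
import numpy as np
from filterpy.kalman import UnscentedKalmanFilter, MerweScaledSigmaPoints

def fx(x, dt):
    # Placeholder process model: position assumed roughly constant over dt.
    return x

def hx(x):
    # Both sensors observe the position states directly in this sketch.
    return x

points = MerweScaledSigmaPoints(n=3, alpha=0.1, beta=2.0, kappa=0.0)
ukf = UnscentedKalmanFilter(dim_x=3, dim_z=3, dt=0.1, fx=fx, hx=hx, points=points)
ukf.x = np.zeros(3)
ukf.P = np.eye(3)

# Illustrative measurements and covariances (placeholders):
z_lidar, R_lidar = np.array([1.0, 2.0, 10.0]), 0.05 * np.eye(3)  # city-model match
z_gps,   R_gps   = np.array([1.2, 1.9, 10.3]), 0.50 * np.eye(3)  # double-difference GPS

# Each cycle: predict, then update with whichever measurement is available,
# passing the covariance modelled for that sensor.
ukf.predict()
ukf.update(z_lidar, R=R_lidar)
ukf.predict()
ukf.update(z_gps, R=R_gps)
print(ukf.x)
```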
Owning Collections
Graduate Dissertations and Theses at Illinois PRIMARY