3D object detection and classification from point cloud data and monocular cameras is an essential task in many fields, such as autonomous navigation and augmented reality. Existing solutions that operate solely on point cloud data typically convert the points into a sparse representation before feeding them into a convolutional neural network; this approach tends to be inaccurate because the conversion inevitably loses resolution. Solutions that operate on RGB images of the environment perform quite well, having benefited from several years of development. The purpose of Sentinel Prime is to develop a robot that runs a sensor fusion network combining 2D image data and 3D LIDAR data to outperform a solely 2D or 3D network. The completed robot will be demonstrated in an indoor environment, where it will be shown to correctly classify several indoor objects.
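As a rough illustration of the fusion idea described in the abstract, the sketch below combines features from an RGB image branch with features from a LIDAR point branch before a shared classification head. The layer sizes, input shapes, class count, and PointNet-style point encoder are illustrative assumptions, not the actual Sentinel Prime network.

```python
import torch
import torch.nn as nn

class FusionClassifier(nn.Module):
    """Hypothetical late-fusion classifier over an RGB crop and a LIDAR point set."""
    def __init__(self, num_classes=10):
        super().__init__()
        # 2D branch: a tiny CNN over an RGB crop (3 x 64 x 64 assumed).
        self.image_branch = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),            # -> (B, 32)
        )
        # 3D branch: a shared pointwise MLP over (x, y, z) coordinates,
        # max-pooled over the point dimension (PointNet-style assumption).
        self.point_mlp = nn.Sequential(
            nn.Linear(3, 64), nn.ReLU(),
            nn.Linear(64, 128), nn.ReLU(),
        )
        # Fusion head: concatenate both feature vectors and classify.
        self.head = nn.Sequential(
            nn.Linear(32 + 128, 64), nn.ReLU(),
            nn.Linear(64, num_classes),
        )

    def forward(self, image, points):
        img_feat = self.image_branch(image)                   # (B, 32)
        pt_feat = self.point_mlp(points).max(dim=1).values    # (B, 128)
        return self.head(torch.cat([img_feat, pt_feat], dim=1))

# Example: a batch of two objects, each with one RGB crop and 1024 LIDAR points.
model = FusionClassifier(num_classes=5)
logits = model(torch.randn(2, 3, 64, 64), torch.randn(2, 1024, 3))
print(logits.shape)  # torch.Size([2, 5])
```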