Abstract
Autonomous agents in any environment require accurate and reliable position and motion estimation to complete their tasks. Many different sensor modalities have been utilized for this purpose, such as GPS, ultra-wideband (UWB), visual simultaneous localization and mapping (SLAM), and light detection and ranging (LiDAR) SLAM. Many traditional positioning systems, however, do not take advantage of recent advances in machine learning. In this work, an omnidirectional camera-based position estimation system relying primarily on a learned model is presented. The positioning system benefits from the wide field of view provided by an omnidirectional camera. Recent developments in self-supervised learning for generating useful features from unlabeled data are also assessed. A novel radial patch pretext task for omnidirectional images is introduced. The resulting implementation is a robot localization and tracking algorithm that can be adapted to a variety of environments, such as warehouses and college campuses. Further experiments with additional sensor types, including 3D LiDAR, 60 GHz wireless, and ultra-wideband localization systems using machine learning, are also explored. A fused learned localization model utilizing multiple sensor modalities is evaluated against individual per-sensor models.
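The abstract names a radial patch pretext task but does not spell out its mechanics here; the sketch below is only one plausible interpretation, not the thesis's actual method. It assumes a fisheye-style omnidirectional image with the optical center at the image center, and every name and parameter (extract_radial_patches, make_pretext_sample, num_patches, patch_size) is hypothetical.

```python
import numpy as np

def extract_radial_patches(image, angle_deg, num_patches=4, patch_size=32):
    """Sample square patches along one radial direction of a fisheye-style
    omnidirectional image (optical center assumed at the image center).
    Patches are returned ordered from the center outward."""
    h, w = image.shape[:2]
    cy, cx = h / 2.0, w / 2.0
    max_radius = min(cy, cx) - patch_size  # keep patches fully inside the image
    theta = np.deg2rad(angle_deg)
    half = patch_size // 2
    patches = []
    for i in range(1, num_patches + 1):
        r = max_radius * i / num_patches
        py = int(cy + r * np.sin(theta))
        px = int(cx + r * np.cos(theta))
        patches.append(image[py - half:py + half, px - half:px + half])
    return patches

def make_pretext_sample(image, rng):
    """Build one self-supervised training example: radial patches are
    shuffled and the model must recover their original center-to-edge
    order, so the permutation serves as the pseudo-label."""
    angle = rng.uniform(0.0, 360.0)
    patches = extract_radial_patches(image, angle)
    order = rng.permutation(len(patches))
    shuffled = [patches[i] for i in order]
    return shuffled, order

# Usage with a synthetic image standing in for an omnidirectional frame
rng = np.random.default_rng(0)
dummy = rng.random((480, 480, 3))
inputs, label = make_pretext_sample(dummy, rng)
print([p.shape for p in inputs], label)
```

Under these assumptions, the pretext label requires no manual annotation: the permutation applied to the patches is known by construction, which is what makes the task self-supervised.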
Library of Congress Subject Headings
Robots--Motion; Robots--Control systems; Machine learning; Sensor networks
Publication Date
5-2020
Document Type
Thesis
Student Type
Graduate
Degree Name
Computer Engineering (MS)
Department, Program, or Center
Computer Engineering (KGCOE)
Advisor
Raymond Ptucha
Advisor/Committee Member
Amlan Ganguly
Advisor/Committee Member
Clark Hochgraf
Recommended Citation
Relyea, Robert, "Improving Omnidirectional Camera-Based Robot Localization Through Self-Supervised Learning" (2020). Thesis. Rochester Institute of Technology. Accessed from
https://repository.rit.edu/theses/10448
Campus
RIT – Main Campus
Plan Codes
CMPE-MS