Abstract
Precise 6D pose estimation of rigid objects from RGB images is a critical but challenging task in robotics and augmented reality. To address this problem, we propose DeepRM, a novel recurrent network architecture for 6D pose refinement. DeepRM uses an initial coarse pose estimate to render a synthetic image of the target object. The rendered image is then matched against the observed image to predict a rigid transform that updates the previous pose estimate. This process is repeated to incrementally refine the estimate at each iteration. LSTM units propagate information through the refinement steps, significantly improving overall performance. In contrast to many two-stage Perspective-n-Point (PnP)-based solutions, DeepRM is trained end-to-end and uses a scalable backbone that can be tuned via a single parameter to trade off accuracy and efficiency. During training, a multi-scale optical flow head is added to predict the optical flow between the observed and synthetic images. Optical flow prediction stabilizes training and encourages the learning of features relevant to pose estimation. Our results demonstrate that DeepRM achieves state-of-the-art performance on two widely used, challenging datasets.
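To make the render-match-refine loop described in the abstract concrete, the following is a minimal PyTorch-style sketch of an iterative pose refiner with an LSTM carrying information across iterations. It is illustrative only: the renderer interface, the simple feature-concatenation "matching", the additive pose update, the feature dimension, and the image sizes are all assumptions, and the multi-scale optical flow head used during training is omitted. It does not reproduce the DeepRM implementation.

```python
import torch
import torch.nn as nn


def apply_update(pose, delta):
    # Simplified additive update on a 6-DoF vector (translation + axis-angle rotation);
    # a full implementation would compose rigid SE(3) transforms instead.
    return pose + delta


class RecurrentPoseRefiner(nn.Module):
    """Minimal sketch of recurrent render-and-compare pose refinement (illustrative only)."""

    def __init__(self, renderer, feat_dim=256):
        super().__init__()
        # `renderer` is an assumed callable: pose [B, 6] -> synthetic RGB image [B, 3, H, W].
        self.renderer = renderer
        # Shared CNN encoder applied to both the observed and the rendered image.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, feat_dim, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # LSTM cell propagates matching information across refinement iterations.
        self.lstm = nn.LSTMCell(2 * feat_dim, feat_dim)
        # Regress a rigid-transform update: 3 translation + 3 rotation parameters.
        self.delta_head = nn.Linear(feat_dim, 6)

    def forward(self, observed, pose, num_iters=4):
        state = None
        for _ in range(num_iters):
            rendered = self.renderer(pose)              # synthesize a view at the current pose
            f_obs = self.encoder(observed)              # features of the observed image
            f_ren = self.encoder(rendered)              # features of the rendered image
            matched = torch.cat([f_obs, f_ren], dim=1)  # crude "matching" by concatenation
            h, c = self.lstm(matched, state)            # carry information between iterations
            state = (h, c)
            delta = self.delta_head(h)                  # predicted pose update
            pose = apply_update(pose, delta)            # refine the current estimate
        return pose


# Usage with a dummy renderer stand-in; a real system would rasterize the object mesh at `pose`.
renderer = lambda pose: torch.rand(pose.shape[0], 3, 64, 64)
model = RecurrentPoseRefiner(renderer)
observed = torch.rand(2, 3, 64, 64)   # observed RGB images
coarse_pose = torch.zeros(2, 6)       # initial coarse pose estimates
refined_pose = model(observed, coarse_pose)
```

In an end-to-end setup like the one the abstract describes, the refiner and encoder would be trained jointly, with the optical flow head attached only during training to stabilize optimization.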
Library of Congress Subject Headings
Deep learning (Machine learning); Computer vision; Gesture recognition (Computer science); Neural networks (Computer science)
Publication Date
5-2022
Document Type
Thesis
Student Type
Graduate
Degree Name
Computer Engineering (MS)
Department, Program, or Center
Computer Engineering (KGCOE)
Advisor
Andreas Savakis
Advisor/Committee Member
Dongfang Liu
Advisor/Committee Member
Clark Hochgraf
Recommended Citation
Avery, Alexander, "DeepRM: Deep Recurrent Matching for 6D Pose Refinement" (2022). Thesis. Rochester Institute of Technology. Accessed from https://repository.rit.edu/theses/11224
Campus
RIT – Main Campus
Plan Codes
CMPE-MS