
In-Plane Rotation-Aware Monocular Depth Estimation using SLAM

EasyChair Preprint no. 2636

13 pages
Date: February 10, 2020

Abstract

Estimating accurate depth from a single RGB image in arbitrary environments is a challenging task in computer vision. Recent learning-based methods using deep Convolutional Neural Networks (CNNs) produce plausible results, but they perform poorly on scenes captured under pure camera rotation, such as in-plane rolling. Such motion perturbs learning-based methods because the gravity direction acts as a strong prior in CNN depth estimation (i.e., the top region of an image tends to have a large depth, whereas the bottom region tends to have a small depth). To overcome this crucial weakness of CNN-based depth estimation, we propose a simple but effective refinement method that incorporates in-plane roll alignment using camera poses from monocular Simultaneous Localization and Mapping (SLAM). For the experiments, we used public datasets and also created our own dataset consisting mostly of in-plane roll camera movements. Evaluation results on these datasets demonstrate the effectiveness of our approach.
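The core idea described in the abstract is to cancel the camera's in-plane roll before the CNN sees the image: the roll angle is read from the SLAM pose, the image is rotated so gravity points down, depth is predicted, and the depth map is rotated back. The following is a minimal sketch of that alignment step, not the authors' code; it assumes OpenCV's y-down image convention, a world-from-camera rotation matrix R_wc from SLAM whose world frame was initialized with an upright camera, and a hypothetical depth_net callable standing in for the CNN.

import numpy as np
import cv2

def roll_from_pose(R_wc):
    # Project the world up-vector into the camera frame (x right,
    # y down, z forward) and measure its angle against the image's
    # upward direction (0, -1, 0). The choice of (0, -1, 0) as world
    # "up" assumes the SLAM world frame shares the y-down convention
    # and was initialized from an upright first frame.
    up_cam = R_wc.T @ np.array([0.0, -1.0, 0.0])
    return np.degrees(np.arctan2(up_cam[0], -up_cam[1]))

def rotation_aware_depth(image, R_wc, depth_net):
    # Rotate the image so that gravity points down, run the CNN on
    # the aligned image, then rotate the predicted depth map back
    # to the original orientation.
    roll = roll_from_pose(R_wc)
    h, w = image.shape[:2]
    center = (w / 2.0, h / 2.0)
    fwd = cv2.getRotationMatrix2D(center, -roll, 1.0)
    aligned = cv2.warpAffine(image, fwd, (w, h))
    depth = depth_net(aligned)  # hypothetical CNN depth estimator
    back = cv2.getRotationMatrix2D(center, roll, 1.0)
    return cv2.warpAffine(depth, back, (w, h))

Because the CNN only ever sees roughly gravity-aligned inputs, its learned top-far/bottom-near prior remains valid even while the physical camera rolls.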

Keyphrases: Convolutional Neural Network, Monocular Depth Estimation, Simultaneous Localization and Mapping

BibTeX entry
BibTeX does not have a suitable entry type for preprints; the following is a workaround that produces the correct reference:
@Booklet{EasyChair:2636,
  author = {Yuki Saito and Ryo Hachiuma and Masahiro Yamaguchi and Hideo Saito},
  title = {In-Plane Rotation-Aware Monocular Depth Estimation using SLAM},
  howpublished = {EasyChair Preprint no. 2636},
  year = {EasyChair, 2020}}