Single Image Depth Estimation with Normal Guided Scale Invariant Deep Convolutional Fields

Date

2017-11-13

Authors

Yan, Han
Yu, Xin
Zhang, Yu
Zhang, Shunli
Zhao, Xiaolin
Zhang, Li

Publisher

Institute of Electrical and Electronics Engineers (IEEE Inc)

Abstract

Estimating scene depth from a single image can be widely applied to understanding 3D environments, because images captured by consumer-level cameras are readily available. Previous works exploit Conditional Random Fields (CRFs) to estimate image depth, where neighboring pixels (superpixels) with similar appearance are constrained to share the same depth. However, depth may vary significantly on slanted surfaces, leading to severe estimation errors. To eliminate such errors, we propose a superpixel-based normal-guided scale-invariant deep convolutional field that encourages neighboring superpixels with similar appearance to lie on the same 3D plane of the scene. To this end, a depth-normal multitask CNN is introduced to produce superpixel-wise depth and surface normal predictions simultaneously. To correct the errors in the coarsely estimated superpixel-wise depth, we develop a normal-guided scale-invariant CRF (NGSI-CRF). NGSI-CRF consists of a scale-invariant unary potential, which measures both the relative depth between superpixels and the absolute depth of each superpixel, and a normal-guided pairwise potential, which constrains the spatial relationships between superpixels in accordance with the 3D layout of the scene. In other words, the normal-guided pairwise potential is designed to smooth the depth prediction without degrading its 3D structure. The superpixel-wise depth maps estimated by NGSI-CRF are fed into a pixel-wise refinement module to produce a smooth, fine-grained depth prediction. Furthermore, we derive a closed-form solution for the maximum a posteriori (MAP) inference of NGSI-CRF, so the proposed network can be efficiently trained in an end-to-end manner. We conduct experiments on several datasets, including NYU-D2, KITTI, and Make3D. As demonstrated in the experimental results, our method achieves superior performance in both indoor and outdoor scenes.
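As a rough illustration of the formulation the abstract describes, the energy of such a superpixel CRF can be written in the following generic form. The symbols here (d_i for the depth of superpixel i, z_i for its CNN-predicted depth, n_i for its predicted normal, w_ij for an appearance-based affinity, and N for the set of neighboring superpixel pairs) are placeholders rather than the paper's exact notation:

E(\mathbf{d}) = \sum_{i} \psi_{u}\left(d_i, z_i\right) + \sum_{(i,j) \in \mathcal{N}} w_{ij}\, \psi_{p}\left(d_i, d_j, \mathbf{n}_i, \mathbf{n}_j\right)

In this sketch, the scale-invariant unary potential \psi_u penalizes deviations of d_i from the CNN estimate z_i in terms of both relative and absolute depth, while the normal-guided pairwise potential \psi_p encourages neighboring superpixels with similar appearance (large w_ij) to lie on the common 3D plane implied by their normals. If both potentials are quadratic in d, the MAP estimate is the solution of a linear system, which is what makes a closed-form inference step and end-to-end training feasible.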

Source

IEEE Transactions on Circuits and Systems for Video Technology

Type

Journal article

Restricted until

2037-12-31