Stacked Learning to Search for Scene Labeling
Date
2017
Authors
Cheng, Feiyang
He, Xuming
Zhang, Hong
Publisher
Institute of Electrical and Electronics Engineers (IEEE)
Abstract
Search-based structured prediction methods have recently shown promising success in both computer vision and natural language processing. However, most existing search-based approaches lead to a complex multi-stage learning process, which is ill-suited for scene labeling problems with a high-dimensional output space. In this paper, a stacked learning to search method is proposed to address scene labeling tasks. We design a simplified search process consisting of a sequence of ranking functions, which are learned with a stacked learning strategy to prevent over-fitting. Our method is able to encode rich prior knowledge by incorporating a variety of local and global scene features. In addition, we estimate a labeling confidence map to further improve the search efficiency in two ways: first, it constrains the search space more effectively by pruning out low-quality solutions based on confidence scores; second, we employ the confidence map as an additional ranking feature, which improves prediction performance and thus reduces the number of search steps. Our approach is evaluated on both semantic segmentation and geometric labeling tasks, including the Stanford Background, SIFT Flow, Geometric Context, and NYUv2 RGB-D datasets. The competitive results demonstrate that our stacked learning to search method provides an effective alternative paradigm for scene labeling.
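As a rough illustration of the search process the abstract describes, the sketch below implements one greedy, confidence-pruned pass over segment labels, assuming a superpixel representation with per-segment class scores. The function names, the top-3 candidate heuristic, the pruning threshold `tau`, and the stand-in ranker are all illustrative assumptions, not the paper's actual learned ranking functions or pruning rule.

```python
import numpy as np

def greedy_label_search(unary, confidence, rank_score, max_steps=50, tau=0.9):
    """Greedy search over per-segment labelings, a rough analogue of one
    learning-to-search pass. `unary`: (n_segments, n_classes) class scores;
    `confidence`: (n_segments,) labeling confidence in [0, 1]; `rank_score`:
    callable scoring a full labeling (stands in for a learned ranker).
    Segments with confidence >= tau are frozen, i.e. pruned from the
    search space (an assumed pruning rule, not the paper's exact one)."""
    labels = unary.argmax(axis=1)              # initial labeling from unaries
    active = np.flatnonzero(confidence < tau)  # only low-confidence segments move
    best = rank_score(labels)
    for _ in range(max_steps):
        improved = False
        for i in active:
            for c in np.argsort(-unary[i])[:3]:  # try a few top classes per segment
                if c == labels[i]:
                    continue
                cand = labels.copy()
                cand[i] = c
                score = rank_score(cand)
                if score > best:                 # keep the best-ranked move
                    best, labels, improved = score, cand, True
        if not improved:
            break                                # local optimum under the ranker
    return labels

# Toy usage: a stand-in ranker combining unary scores with a simple
# smoothness reward between consecutive segments.
rng = np.random.default_rng(0)
unary = rng.random((6, 4))
conf = rng.random(6)
ranker = lambda y: unary[np.arange(len(y)), y].sum() + 0.5 * (y[:-1] == y[1:]).sum()
print(greedy_label_search(unary, conf, ranker))
```

In the paper's setting the ranker at each stage would be trained on held-out predictions of the previous stage (the stacked learning strategy), rather than the hand-written score used in this toy example.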
Source
IEEE Transactions on Image Processing
Type
Journal article
Restricted until
2099-12-31