Joint discriminative and generative learning for person re-identification


Authors

Zheng, Zhedong
Yang, Xiaodong
Yu, Zhiding
Zheng, Liang
Yang, Yi
Kautz, Jan

Publisher

IEEE

Abstract

Person re-identification (re-id) remains challenging due to significant intra-class variations across different cameras. Recently, there has been a growing interest in using generative models to augment training data and enhance the invariance to input changes. The generative pipelines in existing methods, however, stay relatively separate from the discriminative re-id learning stages. Accordingly, re-id models are often trained in a straightforward manner on the generated data. In this paper, we seek to improve learned re-id embeddings by better leveraging the generated data. To this end, we propose a joint learning framework that couples re-id learning and data generation end-to-end. Our model involves a generative module that separately encodes each person into an appearance code and a structure code, and a discriminative module that shares the appearance encoder with the generative module. By switching the appearance or structure codes, the generative module is able to generate high-quality cross-id composed images, which are fed back online to the appearance encoder and used to improve the discriminative module. The proposed joint learning framework renders significant improvement over the baseline without using generated data, leading to state-of-the-art performance on several benchmark datasets.
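The code-swapping idea in the abstract can be sketched in a few lines. This is a minimal toy illustration, not the paper's implementation: the linear maps `W_a`, `W_s`, `W_g` stand in for the appearance encoder, structure encoder, and decoder, which in the actual model are deep networks trained with adversarial and re-id losses.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins (hypothetical) for the paper's modules: the appearance
# encoder maps an image to an identity-related appearance code, the
# structure encoder maps an image to a pose/structure code, and the
# decoder composes any (appearance, structure) pair into an image.
W_a = rng.standard_normal((128, 64))   # appearance encoder weights
W_s = rng.standard_normal((128, 32))   # structure encoder weights
W_g = rng.standard_normal((96, 128))   # decoder weights

def encode_appearance(img):
    return img @ W_a                    # appearance code a = E_a(img)

def encode_structure(img):
    return img @ W_s                    # structure code s = E_s(img)

def generate(a, s):
    return np.concatenate([a, s]) @ W_g # composed image G(a, s)

img_i = rng.standard_normal(128)       # image of person i
img_j = rng.standard_normal(128)       # image of person j

# Cross-id composition: person i's appearance rendered with person j's
# structure. In the joint framework, this generated image is fed back
# online through the shared appearance encoder as extra training data
# labeled with person i's identity.
x_cross = generate(encode_appearance(img_i), encode_structure(img_j))
a_cross = encode_appearance(x_cross)
```

Swapping the roles (person j's appearance with person i's structure) yields the complementary composed image, which is how the full framework covers both directions of the cross-id pairing.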

Source

Proceedings of the 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition

Restricted until

2099-12-31