Distribution-Matching Embedding for Visual Domain Adaptation

Authors

Baktashmotlagh, Mahsa
Harandi, M
Salzmann, M

Publisher

Journal of Machine Learning Research

Abstract

Domain-invariant representations are key to addressing the domain shift problem, where the training and test examples follow different distributions. Existing techniques that have attempted to match the distributions of the source and target domains typically compare these distributions in the original feature space. This space, however, may not be directly suitable for such a comparison, since some of the features may have been distorted by the domain shift, or may be domain specific. In this paper, we introduce a Distribution-Matching Embedding approach: an unsupervised domain adaptation method that overcomes this issue by mapping the data to a latent space where the distance between the empirical distributions of the source and target examples is minimized. In other words, we seek to extract the information that is invariant across the source and target data. In particular, we study two different distances to compare the source and target distributions: the Maximum Mean Discrepancy and the Hellinger distance. Furthermore, we show that our approach allows us to learn either a linear embedding, or a nonlinear one. We demonstrate the benefits of our approach on the tasks of visual object recognition, text categorization, and WiFi localization.
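As a minimal illustration of the first of the two distances mentioned above, the following sketch computes a biased empirical estimate of the squared Maximum Mean Discrepancy between two samples using a Gaussian RBF kernel. This is not the paper's full method (which learns an embedding that minimizes such a distance); the kernel bandwidth, sample sizes, and the simulated "domain shift" are arbitrary choices for this example.

```python
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    """Gaussian RBF kernel matrix: k(x, y) = exp(-gamma * ||x - y||^2)."""
    sq_dists = (np.sum(X**2, axis=1)[:, None]
                + np.sum(Y**2, axis=1)[None, :]
                - 2.0 * X @ Y.T)
    return np.exp(-gamma * sq_dists)

def mmd2(X, Y, gamma=1.0):
    """Biased empirical estimate of the squared MMD between samples X and Y."""
    return (rbf_kernel(X, X, gamma).mean()
            + rbf_kernel(Y, Y, gamma).mean()
            - 2.0 * rbf_kernel(X, Y, gamma).mean())

rng = np.random.default_rng(0)
source = rng.normal(0.0, 1.0, size=(200, 5))        # "source domain" sample
target_same = rng.normal(0.0, 1.0, size=(200, 5))   # same distribution
target_shift = rng.normal(2.0, 1.0, size=(200, 5))  # simulated domain shift

print(mmd2(source, target_same))   # small: distributions match
print(mmd2(source, target_shift))  # larger: distributions differ
```

A domain-adaptation method in the spirit of the paper would then search for a projection of the data under which this quantity, evaluated on the projected source and target samples, is small.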

Source

Journal of Machine Learning Research

Access Statement

Open Access
