Domain adaptation by mixture of alignments of second- or higher-order scatter tensors
Koniusz, Piotr; Tas, Yusuf; Porikli, Fatih
Description

In this paper, we propose an approach to domain adaptation, dubbed Second- or Higher-order Transfer of Knowledge (So-HoT), based on a mixture of alignments of second- or higher-order scatter statistics between the source and target domains. The human ability to learn from few labeled samples is a recurring motivation in the domain adaptation literature. Towards this end, we investigate the supervised target scenario, for which only a few labeled target training samples per category exist. Specifically, we utilize two CNN streams: the source and target networks, fused at the classifier level. Features from the fully connected layer fc7 of each network are used to compute second- or even higher-order scatter tensors, one per network stream per class. As the source and target distributions are somewhat different despite being related, we align the scatters of the two network streams of the same class (within-class scatters) to a desired degree with our bespoke loss, while maintaining good separation of the between-class scatters. We train the entire network in an end-to-end fashion. We provide evaluations on the standard Office benchmark (visual domains) and on RGB-D combined with Caltech256 (depth-to-RGB transfer). We attain state-of-the-art results.
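To make the alignment idea concrete, the following is a minimal sketch, not the authors' released code, of a second-order within-class scatter alignment term written in PyTorch: per-class scatter (covariance-like) matrices are computed from fc7-style features of the source and target streams, and their squared Frobenius distance is penalized. The names `scatter`, `scatter_alignment_loss`, and the weight `alpha` are illustrative assumptions, not identifiers from the paper.

```python
# Minimal sketch (assumed PyTorch formulation, not the authors' code):
# align per-class second-order scatter matrices of two feature streams.
import torch


def scatter(features: torch.Tensor) -> torch.Tensor:
    """Second-order scatter (covariance-like) matrix of an (N, D) feature batch."""
    centered = features - features.mean(dim=0, keepdim=True)
    return centered.t() @ centered / max(features.shape[0] - 1, 1)


def scatter_alignment_loss(src_feats, src_labels, tgt_feats, tgt_labels, alpha=1.0):
    """Sum over classes of squared Frobenius distances between the
    within-class scatters of the source and target streams.

    src_feats / tgt_feats: (N, D) fc7-style activations of the two CNN streams.
    alpha: illustrative weight controlling how strongly the scatters are aligned.
    """
    loss = src_feats.new_zeros(())
    for c in torch.unique(torch.cat([src_labels, tgt_labels])):
        s = src_feats[src_labels == c]
        t = tgt_feats[tgt_labels == c]
        if s.shape[0] < 2 or t.shape[0] < 2:
            continue  # need at least two samples per stream to form a scatter matrix
        diff = scatter(s) - scatter(t)
        loss = loss + torch.linalg.norm(diff, ord="fro") ** 2
    return alpha * loss
```

In training, a term of this kind would be added to the usual classification losses of the fused source and target streams so the two-stream network can be optimized end-to-end; higher-order variants would replace the covariance-like matrices with higher-order outer-product tensors.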
DC Field | Value
---|---
dc.contributor.author | Koniusz, Piotr
dc.contributor.author | Tas, Yusuf
dc.contributor.author | Porikli, Fatih
dc.contributor.editor | O'Conner, Lisa
dc.coverage.spatial | Honolulu USA
dc.date.accessioned | 2020-09-14T03:34:32Z
dc.date.created | July 21-26 2017
dc.identifier.isbn | 9781538604571
dc.identifier.uri | http://hdl.handle.net/1885/210115
dc.description.abstract | In this paper, we propose an approach to domain adaptation, dubbed Second- or Higher-order Transfer of Knowledge (So-HoT), based on a mixture of alignments of second- or higher-order scatter statistics between the source and target domains. The human ability to learn from few labeled samples is a recurring motivation in the domain adaptation literature. Towards this end, we investigate the supervised target scenario, for which only a few labeled target training samples per category exist. Specifically, we utilize two CNN streams: the source and target networks, fused at the classifier level. Features from the fully connected layer fc7 of each network are used to compute second- or even higher-order scatter tensors, one per network stream per class. As the source and target distributions are somewhat different despite being related, we align the scatters of the two network streams of the same class (within-class scatters) to a desired degree with our bespoke loss, while maintaining good separation of the between-class scatters. We train the entire network in an end-to-end fashion. We provide evaluations on the standard Office benchmark (visual domains) and on RGB-D combined with Caltech256 (depth-to-RGB transfer). We attain state-of-the-art results.
dc.format.mimetype | application/pdf
dc.language.iso | en_AU
dc.publisher | IEEE
dc.relation.ispartof | 30th IEEE Conference on Computer Vision and Pattern Recognition Workshops, CVPRW 2017
dc.rights | © 2017 IEEE
dc.source | Proceedings of the 30th IEEE Conference on Computer Vision and Pattern Recognition Workshops, CVPRW 2017
dc.title | Domain adaptation by mixture of alignments of second- or higher-order scatter tensors
dc.type | Conference paper
local.description.notes | Imported from ARIES
local.description.refereed | Yes
dc.date.issued | 2017
local.identifier.absfor | 080104 - Computer Vision
local.identifier.ariespublication | a383154xPUB9047
local.publisher.url | https://www.ieee.org/
local.type.status | Published Version
local.contributor.affiliation | Koniusz, Piotr, College of Engineering and Computer Science, ANU
local.contributor.affiliation | Tas, Yusuf, College of Engineering and Computer Science, ANU
local.contributor.affiliation | Porikli, Fatih, College of Engineering and Computer Science, ANU
local.description.embargo | 2037-12-31
local.bibliographicCitation.startpage | 7139
local.bibliographicCitation.lastpage | 7148
local.identifier.doi | 10.1109/CVPR.2017.755
local.identifier.absseo | 899999 - Information and Communication Services not elsewhere classified
dc.date.updated | 2020-06-23T00:53:03Z
local.identifier.scopusID | 2-s2.0-85041907871
Collections: ANU Research Publications
Download
File | Description | Size | Format | Access
---|---|---|---|---
01_Koniusz_Domain_adaptation_by_mixture_2017.pdf | | 605.74 kB | Adobe PDF | Request a copy
Items in Open Research are protected by copyright, with all rights reserved, unless otherwise indicated.