Multivariate prototype representation for domain-generalized incremental learning

dc.contributor.author: Peng, Can
dc.contributor.author: Koniusz, Piotr
dc.contributor.author: Guo, Kaiyu
dc.contributor.author: Lovell, Brian C.
dc.contributor.author: Moghadam, Peyman
dc.date.accessioned: 2025-05-23T15:26:16Z
dc.date.available: 2025-05-23T15:26:16Z
dc.date.issued: 2024
dc.description.abstract: Deep learning models often suffer from catastrophic forgetting when fine-tuned with samples of new classes. This issue becomes even more challenging when there is a domain shift between training and testing data. In this paper, we address the critical yet less explored Domain-Generalized Class-Incremental Learning (DGCIL) task. We propose a DGCIL approach designed to memorize old classes, adapt to new classes, and reliably classify objects from unseen domains. Specifically, our loss formulation maintains classification boundaries while suppressing domain-specific information for each class. Without storing old exemplars, we employ knowledge distillation and estimate the drift of old class prototypes as incremental training progresses. Our prototype representations are based on multivariate Normal distributions, with means and covariances continually adapted to reflect evolving model features, providing effective representations for old classes. We then sample pseudo-features for these old classes from the adapted Normal distributions using Cholesky decomposition. Unlike previous pseudo-feature sampling strategies that rely solely on average mean prototypes, our method captures richer semantic variations. Experiments on several benchmarks demonstrate the superior performance of our method compared to the state of the art.
dc.description.sponsorship: We thank Dr. Qianhui Men for her help, discussion, and support. This work was partially funded by CSIRO's Reinvent Science and CSIRO's Data61 Science Digital. The authors gratefully acknowledge continued support from the CSIRO's Data61 Embodied AI Cluster.
dc.description.status: Peer-reviewed
dc.identifier.issn: 1077-3142
dc.identifier.other: ORCID:/0000-0002-6340-5289/work/184098505
dc.identifier.scopus: 85208473124
dc.identifier.uri: http://www.scopus.com/inward/record.url?scp=85208473124&partnerID=8YFLogxK
dc.identifier.uri: https://hdl.handle.net/1885/733752580
dc.language.iso: en
dc.rights: Publisher Copyright: © 2024 The Authors
dc.source: Computer Vision and Image Understanding
dc.subject: Domain generalization
dc.subject: Incremental learning
dc.title: Multivariate prototype representation for domain-generalized incremental learning
dc.type: Journal article
dspace.entity.type: Publication
local.contributor.affiliation: Peng, Can; CSIRO
local.contributor.affiliation: Koniusz, Piotr; School of Computing, ANU College of Systems and Society, The Australian National University
local.contributor.affiliation: Guo, Kaiyu; University of Queensland
local.contributor.affiliation: Lovell, Brian C.; University of Queensland
local.contributor.affiliation: Moghadam, Peyman; CSIRO
local.identifier.citationvolume: 249
local.identifier.doi: 10.1016/j.cviu.2024.104215
local.identifier.pure: 829d84bc-0de1-47ac-8dc9-7be6689eab1f
local.identifier.url: https://www.scopus.com/pages/publications/85208473124
local.type.status: Published
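The abstract mentions sampling pseudo-features for old classes from adapted multivariate Normal distributions via Cholesky decomposition. As a minimal illustrative sketch of that standard sampling step (function and variable names are our own, not taken from the paper's code):

```python
import numpy as np

def sample_pseudo_features(mean, cov, n_samples, rng=None):
    """Draw pseudo-feature vectors from N(mean, cov) via Cholesky decomposition.

    Illustrative sketch only; `mean` and `cov` stand in for a class
    prototype's adapted mean and covariance, which the paper updates as
    incremental training progresses.
    """
    rng = np.random.default_rng() if rng is None else rng
    # Cholesky factor L satisfies cov = L @ L.T (cov must be positive definite).
    L = np.linalg.cholesky(cov)
    # Standard-normal draws, one row per pseudo-feature.
    z = rng.standard_normal((n_samples, mean.shape[0]))
    # Affine transform maps N(0, I) samples to N(mean, cov).
    return mean + z @ L.T

# Toy example: a 2-D prototype with anisotropic covariance.
mean = np.array([1.0, -2.0])
cov = np.array([[2.0, 0.5], [0.5, 1.0]])
feats = sample_pseudo_features(mean, cov, n_samples=10_000,
                               rng=np.random.default_rng(0))
```

Because each class keeps a full covariance rather than only a mean prototype, the sampled pseudo-features spread along the class's principal directions of variation, which is the "richer semantic variations" point made in the abstract.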