Multivariate prototype representation for domain-generalized incremental learning
| dc.contributor.author | Peng, Can | en |
| dc.contributor.author | Koniusz, Piotr | en |
| dc.contributor.author | Guo, Kaiyu | en |
| dc.contributor.author | Lovell, Brian C. | en |
| dc.contributor.author | Moghadam, Peyman | en |
| dc.date.accessioned | 2025-05-23T15:26:16Z | |
| dc.date.available | 2025-05-23T15:26:16Z | |
| dc.date.issued | 2024 | en |
| dc.description.abstract | Deep learning models often suffer from catastrophic forgetting when fine-tuned with samples of new classes. This issue becomes even more challenging when there is a domain shift between training and testing data. In this paper, we address the critical yet less explored Domain-Generalized Class-Incremental Learning (DGCIL) task. We propose a DGCIL approach designed to memorize old classes, adapt to new classes, and reliably classify objects from unseen domains. Specifically, our loss formulation maintains classification boundaries while suppressing domain-specific information for each class. Without storing old exemplars, we employ knowledge distillation and estimate the drift of old class prototypes as incremental training progresses. Our prototype representations are based on multivariate Normal distributions, with means and covariances continually adapted to reflect evolving model features, providing effective representations for old classes. We then sample pseudo-features for these old classes from the adapted Normal distributions using Cholesky decomposition. Unlike previous pseudo-feature sampling strategies that rely solely on average mean prototypes, our method captures richer semantic variations. Experiments on several benchmarks demonstrate the superior performance of our method compared to the state of the art. | en |
| dc.description.sponsorship | We thank Dr. Qianhui Men for her help, discussion, and support. This work was partially funded by CSIRO's Reinvent Science and CSIRO's Data61 Science Digital. The authors gratefully acknowledge continued support from the CSIRO's Data61 Embodied AI Cluster. | en |
| dc.description.status | Peer-reviewed | en |
| dc.identifier.issn | 1077-3142 | en |
| dc.identifier.other | ORCID:/0000-0002-6340-5289/work/184098505 | en |
| dc.identifier.scopus | 85208473124 | en |
| dc.identifier.uri | http://www.scopus.com/inward/record.url?scp=85208473124&partnerID=8YFLogxK | en |
| dc.identifier.uri | https://hdl.handle.net/1885/733752580 | |
| dc.language.iso | en | en |
| dc.rights | Publisher Copyright: © 2024 The Authors | en |
| dc.source | Computer Vision and Image Understanding | en |
| dc.subject | Domain generalization | en |
| dc.subject | Incremental learning | en |
| dc.title | Multivariate prototype representation for domain-generalized incremental learning | en |
| dc.type | Journal article | en |
| dspace.entity.type | Publication | en |
| local.contributor.affiliation | Peng, Can; CSIRO | en |
| local.contributor.affiliation | Koniusz, Piotr; School of Computing, ANU College of Systems and Society, The Australian National University | en |
| local.contributor.affiliation | Guo, Kaiyu; University of Queensland | en |
| local.contributor.affiliation | Lovell, Brian C.; University of Queensland | en |
| local.contributor.affiliation | Moghadam, Peyman; CSIRO | en |
| local.identifier.citationvolume | 249 | en |
| local.identifier.doi | 10.1016/j.cviu.2024.104215 | en |
| local.identifier.pure | 829d84bc-0de1-47ac-8dc9-7be6689eab1f | en |
| local.identifier.url | https://www.scopus.com/pages/publications/85208473124 | en |
| local.type.status | Published | en |
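The abstract describes drawing pseudo-features for old classes from multivariate Normal prototypes using Cholesky decomposition. As an illustration only (not the authors' code; function name and dimensions are invented for the example), the standard construction draws z ~ N(0, I) and maps it through the Cholesky factor L of the covariance, so that mu + Lz ~ N(mu, Sigma):

```python
# Illustrative sketch (not the paper's implementation): sampling
# pseudo-features for one old-class prototype modelled as a
# multivariate Normal N(mean, cov), via Cholesky decomposition.
import numpy as np

def sample_pseudo_features(mean, cov, n_samples, rng=None):
    """Draw rows x = mean + L z with z ~ N(0, I) and cov = L L^T."""
    rng = np.random.default_rng() if rng is None else rng
    L = np.linalg.cholesky(cov)                     # lower-triangular factor
    z = rng.standard_normal((n_samples, mean.shape[0]))
    return mean + z @ L.T                           # each row ~ N(mean, cov)

# Example with a hypothetical 4-D prototype and a positive-definite covariance.
rng = np.random.default_rng(0)
mean = np.zeros(4)
A = rng.standard_normal((4, 4))
cov = A @ A.T + 4 * np.eye(4)                       # symmetric positive-definite
feats = sample_pseudo_features(mean, cov, 10000, rng)
```

Because each sample carries the full covariance structure rather than only the class mean, such pseudo-features capture the richer semantic variation the abstract contrasts with mean-only prototype replay.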