Adversarially Robust Few-shot Learning via Parameter Co-distillation of Similarity and Class Concept Learners

Authors

Dong, Junhao
Koniusz, Piotr
Chen, Junxi
Xie, Xiaohua
Ong, Yew Soon

Publisher

IEEE Computer Society

Abstract

Few-shot learning (FSL) facilitates a variety of computer vision tasks yet remains vulnerable to adversarial attacks. Existing adversarially robust FSL methods rely on either visual similarity learning or class concept learning. Our analysis reveals that these two learning paradigms are complementary, exhibiting distinct robustness due to their different decision boundary types (concept clustering by visual similarity labels vs. classification by class labels). To bridge this gap, we propose a novel framework unifying adversarially robust similarity learning and class concept learning. Specifically, we distill parameters from both network branches into a 'unified embedding model' during robust optimization and redistribute them to the individual network branches periodically. To capture generalizable robustness across diverse branches, we initialize adversaries in each episode with cross-branch class-wise 'global adversarial perturbations' instead of less informative random initialization. We also propose a branch robustness harmonization to modulate the optimization of the similarity and class concept learners via their relative adversarial robustness. Extensive experiments demonstrate the state-of-the-art performance of our method in diverse few-shot scenarios.
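The parameter co-distillation step described above can be illustrated with a minimal sketch. The function names, the plain weighted-average merging rule, and the per-parameter dictionary representation are assumptions for illustration only, not the paper's exact method:

```python
# Hypothetical sketch of parameter co-distillation: the two branches
# (similarity learner and class-concept learner) are merged into a
# "unified embedding model", whose parameters are then periodically
# redistributed back to both branches. Parameters are modeled here as
# simple name->float dicts; real models would use tensors.

def co_distill(similarity_params, concept_params, alpha=0.5):
    """Merge the two branches' parameters into a unified model
    (assumed here to be a convex combination weighted by alpha)."""
    return {k: alpha * similarity_params[k] + (1 - alpha) * concept_params[k]
            for k in similarity_params}

def redistribute(unified_params):
    """Copy the unified parameters back into both branches."""
    return dict(unified_params), dict(unified_params)

# Toy usage: two branches with diverging weights.
sim = {"w": 1.0, "b": 0.0}
cls = {"w": 3.0, "b": 2.0}
unified = co_distill(sim, cls)      # {"w": 2.0, "b": 1.0}
sim, cls = redistribute(unified)    # both branches now share the unified weights
```

In this toy form, periodic redistribution keeps the two branches from drifting apart while letting each continue to optimize its own robust objective between synchronization points.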

Book Title

Proceedings - 2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2024
