Transductive learning for zero-shot object detection

dc.contributor.author: Rahman, Shafin
dc.contributor.author: Khan, Salman Hameed
dc.contributor.author: Barnes, Nick
dc.contributor.editor: Lee, Kyoung Mu
dc.contributor.editor: Forsyth, David
dc.contributor.editor: Pollefeys, Marc
dc.contributor.editor: Tang, Xiaoou
dc.coverage.spatial: Seoul, South Korea
dc.date.accessioned: 2023-07-24T23:42:22Z
dc.date.created: Oct 27 - Nov 2, 2019
dc.date.issued: 2019
dc.date.updated: 2022-05-29T08:16:33Z
dc.description.abstract: Zero-shot object detection (ZSD) is a relatively unexplored research problem compared to the conventional zero-shot recognition task. ZSD aims to detect previously unseen objects during inference. Existing ZSD works suffer from two critical issues: (a) a large domain shift between the source (seen) and target (unseen) domains, since the two distributions are highly mismatched; and (b) a learned model that is biased against unseen classes, so that in the generalized ZSD setting, where both seen and unseen objects co-occur during inference, the model tends to misclassify unseen objects as seen categories. This raises an important question: how effectively can a transductive setting address the aforementioned problems? To the best of our knowledge, we are the first to propose a transductive zero-shot object detection approach that convincingly reduces the domain shift and the model bias against unseen classes. Our approach is based on a self-learning mechanism that uses a novel hybrid pseudo-labeling technique. It progressively updates the learned model parameters by associating unlabeled data samples with their corresponding classes, while ensuring that knowledge previously acquired on the source domain is not forgotten. We report significant 'relative' improvements of 34.9% and 77.1% in mAP and recall rates over the previous best inductive models on the MSCOCO dataset.
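The self-learning mechanism described in the abstract relies on pseudo-labeling unlabeled target-domain samples. As an illustration only, the sketch below shows the generic confidence-thresholded pseudo-labeling step that such self-training builds on; it is not the paper's actual hybrid pseudo-labeling method, and the function name and threshold value are assumptions for this example.

```python
import numpy as np

def assign_pseudo_labels(scores, threshold=0.7):
    """Generic pseudo-labeling step (illustrative, not the paper's hybrid scheme).

    scores: (n_detections, n_classes) array of class probabilities for
    unlabeled detections. Each detection whose top class score exceeds
    `threshold` receives that class as its pseudo-label; low-confidence
    detections get -1 and are ignored when the model is fine-tuned.
    """
    top_score = scores.max(axis=1)          # confidence of best class
    labels = scores.argmax(axis=1)          # best class index
    labels[top_score < threshold] = -1      # ignore uncertain detections
    return labels

# Usage: three detections over two classes; the middle one is too
# uncertain (max score 0.6 < 0.7) and is excluded from the next
# self-training round.
scores = np.array([[0.9, 0.1],
                   [0.4, 0.6],
                   [0.2, 0.8]])
print(assign_pseudo_labels(scores))  # -> [ 0 -1  1]
```

In a transductive loop, the confident pseudo-labels would be fed back as training targets and the threshold or label assignment updated between rounds, which is where a hybrid scheme like the paper's would refine this basic step.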
dc.description.sponsorship: This work was supported in part by NHMRC Project grant #1082358
dc.format.mimetype: application/pdf
dc.identifier.isbn: 9781728148038
dc.identifier.uri: http://hdl.handle.net/1885/294523
dc.language.iso: en_AU
dc.publisher: IEEE, Institute of Electrical and Electronics Engineers
dc.relation: http://purl.org/au-research/grants/nhmrc/1082358
dc.relation.ispartofseries: 2019 IEEE/CVF International Conference on Computer Vision, ICCV 2019
dc.rights: © 2019 IEEE
dc.source: Proceedings of the 2019 IEEE/CVF International Conference on Computer Vision, ICCV 2019
dc.title: Transductive learning for zero-shot object detection
dc.type: Conference paper
local.bibliographicCitation.lastpage: 6090
local.bibliographicCitation.startpage: 6081
local.contributor.affiliation: Rahman, Shafin, College of Engineering and Computer Science, ANU
local.contributor.affiliation: Khan, Salman, Academic Portfolio, ANU
local.contributor.affiliation: Barnes, Nick, College of Engineering and Computer Science, ANU
local.contributor.authoruid: Rahman, Shafin, u5929575
local.contributor.authoruid: Khan, Salman, u1029115
local.contributor.authoruid: Barnes, Nick, u4591576
local.description.embargo: 2099-12-31
local.description.notes: Imported from ARIES
local.description.refereed: Yes
local.identifier.absfor: 460300 - Computer vision and multimedia computation
local.identifier.ariespublication: a383154xPUB11590
local.identifier.doi: 10.1109/ICCV.2019.00618
local.identifier.scopusID: 2-s2.0-85081924703
local.identifier.thomsonID: WOS:000548549201020
local.publisher.url: https://www.ieee.org/
local.type.status: Published Version

Downloads

Original bundle

Name: Transductive_Learning_for_Zero-Shot_Object_Detection.pdf
Size: 603.46 KB
Format: Adobe Portable Document Format