Model-Free Multiple Object Tracking with Shared Proposals

dc.contributor.author: Zhu, Gao
dc.contributor.author: Porikli, Fatih
dc.contributor.author: Li, Hongdong
dc.contributor.editor: Lai, S-H
dc.contributor.editor: Lepetit, V
dc.contributor.editor: Nishino, K
dc.contributor.editor: Sato, Y
dc.coverage.spatial: Taipei, Taiwan
dc.date.accessioned: 2021-08-10T03:50:09Z
dc.date.created: November 20-24 2016
dc.date.issued: 2017
dc.date.updated: 2020-11-23T10:50:12Z
dc.description.abstract: Most previous methods for tracking multiple objects follow the conventional "tracking by detection" scheme and focus on improving the performance of category-specific object detectors as well as the between-frame tracklet association. These methods are therefore highly sensitive to the performance of the object detectors, which limits their application scenarios. In this work, we overcome this issue with a novel model-free framework that incorporates generic, category-independent object proposals without the need to pretrain any object detectors. In each frame, our method generates a small number of target object proposals that are shared by multiple objects regardless of their category. This significantly improves search efficiency compared with the traditional dense sampling approach. To further increase the discriminative power of our tracker among targets, we treat all other object proposals as negative samples, i.e., as "distractors", and update them in an online fashion. For a comprehensive evaluation, we test on the PETS benchmark datasets as well as a new MOOT benchmark dataset that contains more challenging videos. Results show that our method achieves superior performance in terms of both computational speed and tracking accuracy metrics.
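The abstract describes a proposal-sharing tracking loop: a small, category-independent proposal set is generated per frame, every tracked target scores the same pool, and the proposals a target does not claim serve as "distractor" negatives for its online appearance update. The sketch below only illustrates that structure under toy assumptions; it is not the authors' implementation, and the names in it (`extract_feature`, `Target`, the linear scoring model, the greedy assignment) are hypothetical placeholders.

```python
import numpy as np


def extract_feature(frame, box):
    """Toy appearance feature: mean colour of the proposal region.
    (Hypothetical stand-in for whatever descriptor a real tracker uses.)"""
    x, y, w, h = box
    patch = frame[y:y + h, x:x + w]
    return patch.reshape(-1, frame.shape[2]).mean(axis=0)


class Target:
    """One tracked object with a simple online linear appearance model."""

    def __init__(self, feat, lr=0.1):
        self.w = feat / (np.linalg.norm(feat) + 1e-8)
        self.lr = lr
        self.box = None

    def score(self, feat):
        return float(self.w @ feat)

    def update(self, pos_feat, neg_feats):
        # Move the model towards the claimed proposal and away from the
        # mean of the distractor proposals (a crude online update rule).
        neg_mean = np.mean(neg_feats, axis=0) if len(neg_feats) else 0.0
        self.w += self.lr * (pos_feat - neg_mean)
        self.w /= np.linalg.norm(self.w) + 1e-8


def track_frame(frame, targets, proposals):
    """One step of the shared-proposal loop: all targets score the same
    proposal set, each greedily claims its best remaining proposal, and the
    unclaimed proposals act as negative ('distractor') samples."""
    feats = [extract_feature(frame, b) for b in proposals]
    taken, assignment = set(), {}
    for t_idx, target in enumerate(targets):
        scores = [(-np.inf if i in taken else target.score(f))
                  for i, f in enumerate(feats)]
        best = int(np.argmax(scores))
        taken.add(best)
        assignment[t_idx] = best
    for t_idx, target in enumerate(targets):
        pos = feats[assignment[t_idx]]
        negs = [f for i, f in enumerate(feats) if i != assignment[t_idx]]
        target.update(pos, negs)
        target.box = proposals[assignment[t_idx]]
    return assignment


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    frame = rng.random((120, 160, 3))          # dummy frame
    proposals = [(10, 10, 20, 20), (60, 40, 20, 20), (100, 80, 20, 20)]
    targets = [Target(extract_feature(frame, proposals[0])),
               Target(extract_feature(frame, proposals[1]))]
    print(track_frame(frame, targets, proposals))
```

The point of the sketch is only the shared structure: one proposal pool scored by every target instead of per-target dense sampling, with unassigned proposals recycled as online negatives. The paper's actual proposal generation and appearance models are more sophisticated.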
dc.description.sponsorship: This work was supported under the Australian Research Council's Discovery Projects funding scheme (projects DP150104645 and DP120103896), the Linkage Projects funding scheme (LP100100588), and the ARC Centre of Excellence for Robotic Vision (CE140100016).
dc.format.mimetype: application/pdf
dc.identifier.isbn: 9783319541808
dc.identifier.issn: 0302-9743
dc.identifier.uri: http://hdl.handle.net/1885/243859
dc.language.iso: en_AU
dc.publisher: Springer International Publishing AG
dc.relation: http://purl.org/au-research/grants/arc/DP150104645
dc.relation: http://purl.org/au-research/grants/arc/DP120103896
dc.relation: http://purl.org/au-research/grants/arc/LP100100588
dc.relation: http://purl.org/au-research/grants/arc/CE140100016
dc.relation.ispartofseries: 13th Asian Conference on Computer Vision, ACCV 2016
dc.relation.ispartofseries: Lecture Notes in Computer Science
dc.rights: © Springer International Publishing AG 2017
dc.title: Model-Free Multiple Object Tracking with Shared Proposals
dc.type: Conference paper
local.bibliographicCitation.lastpage: 304
local.bibliographicCitation.startpage: 288
local.contributor.affiliation: Zhu, Gao, College of Engineering and Computer Science, ANU
local.contributor.affiliation: Porikli, Fatih, College of Engineering and Computer Science, ANU
local.contributor.affiliation: Li, Hongdong, College of Engineering and Computer Science, ANU
local.contributor.authoremail: u5405232@anu.edu.au
local.contributor.authoruid: Zhu, Gao, u5155914
local.contributor.authoruid: Porikli, Fatih, u5405232
local.contributor.authoruid: Li, Hongdong, u4056952
local.description.embargo: 2099-12-31
local.description.notes: Imported from ARIES
local.description.refereed: Yes
local.identifier.absfor: 080104 - Computer Vision
local.identifier.ariespublication: u5357342xPUB98
local.identifier.doi: 10.1007/978-3-319-54184-6_18
local.identifier.essn: 1611-3349
local.identifier.scopusID: 2-s2.0-85016169517
local.identifier.uidSubmittedBy: u5357342
local.publisher.url: https://link.springer.com/
local.type.status: Published Version

Downloads

Original bundle
Name: 01_Zhu_Model-Free_Multiple_Object_2017.pdf
Size: 519.26 KB
Format: Adobe Portable Document Format