
Cross-view and multi-view gait recognitions based on view transformation model using multi-layer perceptron

Kusakunniran, Worapan; Wu, Qiang; Zhang, Jian; Li, Hongdong

Description

Gait has been shown to be an efficient biometric feature for human identification at a distance. However, the performance of gait recognition can be affected by view variation, which makes cross-view gait recognition difficult. A novel method is proposed to solve this difficulty by using a view transformation model (VTM). The VTM is constructed through regression, adopting a multi-layer perceptron (MLP) as the regression tool: it estimates the gait feature at one view from a well-selected region of interest (ROI) on the gait feature at another view. Trained VTMs can therefore normalize gait features from different views into the same view before gait similarity is measured. Moreover, this paper proposes a new multi-view gait recognition method which estimates the gait feature at one view using selected gait features from several other views. Extensive experimental results demonstrate that the proposed method significantly outperforms other baseline methods in the literature for both cross-view and multi-view gait recognition. In particular, average accuracies of 99%, 98% and 93% are achieved for multi-view gait recognition using 5, 4 and 3 cameras, respectively.
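The core idea described above, a view transformation model learned by MLP regression that maps gait features observed at one view to estimated features at another view, can be sketched as follows. This is not the authors' code: the synthetic data, the feature dimensions, and the single-hidden-layer design are illustrative assumptions standing in for the paper's GEI-based features and ROI selection.

```python
# Sketch of a VTM as MLP regression: learn a mapping from view-A gait
# features to view-B gait features, so both can be compared in one view.
# All sizes and data here are synthetic placeholders, not the paper's setup.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for gait features: n samples, d-dimensional vectors.
n, d_in, d_hid, d_out = 200, 32, 16, 32
X = rng.normal(size=(n, d_in))             # features at view A (e.g. an ROI)
W_true = rng.normal(size=(d_in, d_out)) / np.sqrt(d_in)
Y = np.tanh(X @ W_true)                    # features at view B (unknown mapping)

# One-hidden-layer MLP: Y_hat = tanh(X W1 + b1) W2 + b2
W1 = rng.normal(size=(d_in, d_hid)) * 0.1
b1 = np.zeros(d_hid)
W2 = rng.normal(size=(d_hid, d_out)) * 0.1
b2 = np.zeros(d_out)

lr = 0.05
for step in range(2000):
    H = np.tanh(X @ W1 + b1)               # hidden activations
    Y_hat = H @ W2 + b2                    # regressed view-B features
    err = Y_hat - Y
    # Backpropagate the mean-squared regression error.
    gW2 = H.T @ err / n
    gb2 = err.mean(axis=0)
    dH = (err @ W2.T) * (1 - H**2)         # tanh'(z) = 1 - tanh(z)^2
    gW1 = X.T @ dH / n
    gb1 = dH.mean(axis=0)
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

mse = np.mean((np.tanh(X @ W1 + b1) @ W2 + b2 - Y) ** 2)
print(f"training MSE of the VTM sketch: {mse:.4f}")
```

After training, features from view A would be pushed through the fitted MLP and matched against view-B gallery features with an ordinary similarity measure, which is the normalization step the abstract describes.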

dc.contributor.author: Kusakunniran, Worapan
dc.contributor.author: Wu, Qiang
dc.contributor.author: Zhang, Jian
dc.contributor.author: Li, Hongdong
dc.date.accessioned: 2015-12-10T23:27:11Z
dc.identifier.issn: 0167-8655
dc.identifier.uri: http://hdl.handle.net/1885/68107
dc.description.abstract: Gait has been shown to be an efficient biometric feature for human identification at a distance. However, the performance of gait recognition can be affected by view variation, which makes cross-view gait recognition difficult. A novel method is proposed to solve this difficulty by using a view transformation model (VTM). The VTM is constructed through regression, adopting a multi-layer perceptron (MLP) as the regression tool: it estimates the gait feature at one view from a well-selected region of interest (ROI) on the gait feature at another view. Trained VTMs can therefore normalize gait features from different views into the same view before gait similarity is measured. Moreover, this paper proposes a new multi-view gait recognition method which estimates the gait feature at one view using selected gait features from several other views. Extensive experimental results demonstrate that the proposed method significantly outperforms other baseline methods in the literature for both cross-view and multi-view gait recognition. In particular, average accuracies of 99%, 98% and 93% are achieved for multi-view gait recognition using 5, 4 and 3 cameras, respectively.
dc.publisher: Elsevier
dc.source: Pattern Recognition Letters
dc.subject: Keywords: Cross-view; Gait recognition; Multi-layer perceptron; Multi-view; View transformation model; Biometrics; Cameras; Gait analysis
dc.title: Cross-view and multi-view gait recognitions based on view transformation model using multi-layer perceptron
dc.type: Journal article
local.description.notes: Imported from ARIES
local.identifier.citationvolume: 33
dc.date.issued: 2012
local.identifier.absfor: 080104 - Computer Vision
local.identifier.ariespublication: f2965xPUB1622
local.type.status: Published Version
local.contributor.affiliation: Kusakunniran, Worapan, University of New South Wales
local.contributor.affiliation: Wu, Qiang, University of Technology Sydney
local.contributor.affiliation: Zhang, Jian, University of New South Wales
local.contributor.affiliation: Li, Hongdong, College of Engineering and Computer Science, ANU
local.description.embargo: 2037-12-31
local.bibliographicCitation.issue: 7
local.bibliographicCitation.startpage: 882
local.bibliographicCitation.lastpage: 889
local.identifier.doi: 10.1016/j.patrec.2011.04.014
local.identifier.absseo: 970108 - Expanding Knowledge in the Information and Computing Sciences
dc.date.updated: 2016-02-24T08:15:44Z
local.identifier.scopusID: 2-s2.0-84858442392
local.identifier.thomsonID: 000302973700010
Collections: ANU Research Publications

Download

File: 01_Kusakunniran_Cross-view_and_multi-view_gait_2012.pdf
Size: 763.25 kB
Format: Adobe PDF
Access: Request a copy


Items in Open Research are protected by copyright, with all rights reserved, unless otherwise indicated.
