
Video anomaly detection and localization by local motion based joint video representation and OCELM

Wang, Siqi; Zhu, En; Yin, Jianping; Porikli, Fatih


dc.contributor.author: Wang, Siqi
dc.contributor.author: Zhu, En
dc.contributor.author: Yin, Jianping
dc.contributor.author: Porikli, Fatih
dc.date.accessioned: 2018-01-11T01:11:23Z
dc.identifier.issn: 0925-2312
dc.identifier.uri: http://hdl.handle.net/1885/139161
dc.description.abstract: Nowadays, human-based video analysis is becoming increasingly exhausting due to the ubiquitous use of surveillance cameras and the explosive growth of video data. This paper proposes a novel approach to detect and localize video anomalies automatically. For video feature extraction, video volumes are jointly represented by two novel local motion based video descriptors, SL-HOF and ULGP-OF. The SL-HOF descriptor captures the spatial distribution of 3D local regions' motion within the spatio-temporal cuboids extracted from video, which implicitly reflects the structural information of the foreground and depicts foreground motion more precisely than the standard HOF descriptor. To locate the video foreground more accurately, we propose a new Robust PCA based foreground localization scheme. The ULGP-OF descriptor, which seamlessly combines the classic 2D texture descriptor LGP with optical flow, is proposed to describe the motion statistics of local region texture in the areas located by the foreground localization scheme. Both SL-HOF and ULGP-OF are shown to be more discriminative than existing video descriptors for anomaly detection. To model the features of normal video events, we introduce the newly-emergent one-class Extreme Learning Machine (OCELM) as the data description algorithm. With a tremendous reduction in training time, OCELM yields comparable or better performance than existing algorithms such as the classic OCSVM, which makes model updating easier and our approach more applicable to fast learning from rapidly generated surveillance data. The proposed approach is tested on the UCSD Ped1, UCSD Ped2 and UMN datasets, and experimental results show that it achieves state-of-the-art results in both the video anomaly detection and localization tasks.
dc.description.sponsorship: This work was supported by the National Natural Science Foundation of China (Project nos. 60970034, 61170287, 61232016).
dc.format.mimetype: application/pdf
dc.publisher: Elsevier
dc.rights: © 2017 Elsevier B.V.
dc.source: Neurocomputing
dc.subject: Video anomaly detection and localization
dc.subject: Local motion based descriptors
dc.subject: Extreme learning machine
dc.title: Video anomaly detection and localization by local motion based joint video representation and OCELM
dc.type: Journal article
local.identifier.citationvolume: 277
dc.date.issued: 2018
local.publisher.url: https://www.elsevier.com/
local.type.status: Accepted Version
local.contributor.affiliation: Porikli, F., College of Engineering and Computer Science, The Australian National University
local.bibliographicCitation.startpage: 161
local.bibliographicCitation.lastpage: 175
local.identifier.doi: 10.1016/j.neucom.2016.08.156
dcterms.accessRights: Open Access
dc.provenance: http://www.sherpa.ac.uk/romeo/issn/0925-2312/... "Author's post-print on open access repository after an embargo period of between 12 months and 48 months" from SHERPA/RoMEO site (as at 11/01/18).
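
The data description step named in the abstract above, the one-class Extreme Learning Machine (OCELM), can be summarised as: project descriptors through a fixed random hidden layer, solve a closed-form ridge regression that maps every normal training sample onto a constant target, and flag test samples whose output deviates from that target by more than a threshold chosen from the training deviations. The following is a minimal NumPy sketch of that idea only; the class name, hyper-parameters and toy data are illustrative assumptions, the SL-HOF/ULGP-OF feature extraction and Robust PCA foreground localization are not implemented here, and the authors' exact OCELM variant may differ.

import numpy as np

class OneClassELM:
    """Minimal one-class ELM: regress normal samples onto the target 1 and
    score anomalies by the absolute deviation of the network output from 1."""

    def __init__(self, n_hidden=200, reg=1.0, reject_rate=0.05, seed=0):
        self.n_hidden = n_hidden        # number of random hidden neurons
        self.reg = reg                  # ridge regularisation strength
        self.reject_rate = reject_rate  # fraction of training data allowed outside the boundary
        self.rng = np.random.default_rng(seed)

    def _hidden(self, X):
        # Fixed random projection followed by a bounded activation (standard ELM hidden layer).
        return np.tanh(X @ self.W + self.b)

    def fit(self, X):
        n, d = X.shape
        # Scale the random weights so activations stay in a sensible range.
        self.W = self.rng.normal(size=(d, self.n_hidden)) / np.sqrt(d)
        self.b = self.rng.normal(size=self.n_hidden)
        H = self._hidden(X)
        t = np.ones(n)  # one-class target: every normal sample should map to 1
        # Closed-form output weights: beta = (H^T H + reg * I)^-1 H^T t
        A = H.T @ H + self.reg * np.eye(self.n_hidden)
        self.beta = np.linalg.solve(A, H.T @ t)
        # Threshold so that roughly `reject_rate` of the training samples fall outside.
        train_err = np.abs(H @ self.beta - 1.0)
        self.threshold = np.quantile(train_err, 1.0 - self.reject_rate)
        return self

    def anomaly_score(self, X):
        return np.abs(self._hidden(X) @ self.beta - 1.0)

    def predict(self, X):
        # True means the sample is flagged as anomalous.
        return self.anomaly_score(X) > self.threshold

if __name__ == "__main__":
    # Toy stand-in for descriptor vectors: "normal" samples around one mode,
    # anomalies shifted far away. In the paper's setting the rows would be
    # SL-HOF/ULGP-OF descriptors of video volumes from normal footage.
    rng = np.random.default_rng(1)
    normal = rng.normal(0.0, 1.0, size=(2000, 64))
    anomalous = rng.normal(6.0, 1.0, size=(50, 64))
    model = OneClassELM().fit(normal)
    print("fraction of anomalies flagged:", model.predict(anomalous).mean())
    print("false alarm rate on normal data:", model.predict(normal).mean())

Because the output weights have a closed-form solution, retraining on newly collected normal footage costs only one matrix solve, which is the property the abstract cites as making frequent model updates practical for rapidly generated surveillance data.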
Collections: ANU Research Publications

Download

File: 1-s2.0-S092523121731411X-main.pdf (1.87 MB, Adobe PDF)


Items in Open Research are protected by copyright, with all rights reserved, unless otherwise indicated.
