Second-order Temporal Pooling for Action Recognition


Authors

Cherian, Anoop
Gould, Stephen


Publisher

Springer

Abstract

Deep learning models for video-based action recognition usually generate features for short clips (consisting of a few frames); such clip-level features are aggregated to video-level representations by computing statistics on these features. Typically, zeroth-order (max) or first-order (average) statistics are used. In this paper, we explore the benefits of using second-order statistics. Specifically, we propose a novel end-to-end learnable feature aggregation scheme, dubbed temporal correlation pooling, that generates an action descriptor for a video sequence by capturing the similarities between the temporal evolution of clip-level CNN features computed across the video. Such a descriptor, while being computationally cheap, also naturally encodes the co-activations of multiple CNN features, thereby providing a richer characterization of actions than their first-order counterparts. We also propose higher-order extensions of this scheme by computing correlations after embedding the CNN features in a reproducing kernel Hilbert space. We provide experiments on benchmark datasets such as HMDB-51 and UCF-101, fine-grained datasets such as MPII Cooking Activities and JHMDB, as well as the recent Kinetics-600. Our results demonstrate the advantages of higher-order pooling schemes, which, when combined with hand-crafted features (as is standard practice), achieve state-of-the-art accuracy.
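To make the core idea concrete, a minimal sketch of second-order temporal pooling follows. It is an illustrative simplification, not the paper's end-to-end learnable scheme: given a matrix of clip-level CNN features (one row per clip), it computes the correlation matrix across feature dimensions over time and vectorizes its upper triangle into a video-level descriptor. The function name, the centering step, and the `eps` regularizer are assumptions for this sketch.

```python
import numpy as np

def temporal_correlation_pool(features, eps=1e-8):
    """Second-order temporal pooling sketch.

    features: (T, D) array of clip-level CNN features for one video
              (T clips, D feature dimensions).
    Returns a video-level descriptor of length D*(D+1)/2.
    """
    # Center each feature dimension over time.
    X = features - features.mean(axis=0, keepdims=True)
    # Second-order statistic: covariance of features across clips.
    C = X.T @ X / max(features.shape[0] - 1, 1)      # (D, D)
    # Normalize to correlations, capturing co-activation similarities.
    d = np.sqrt(np.diag(C)) + eps
    C = C / np.outer(d, d)
    # The matrix is symmetric, so the upper triangle suffices.
    iu = np.triu_indices_from(C)
    return C[iu]
```

In contrast to first-order (average) pooling, which returns a length-D vector, this descriptor encodes pairwise co-activations of feature dimensions; the paper's kernelized extensions would replace the inner product above with a kernel evaluation in a reproducing kernel Hilbert space.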

Citation

Cherian, A., Gould, S. Second-order Temporal Pooling for Action Recognition. Int J Comput Vis 127, 340–362 (2019). https://doi.org/10.1007/s11263-018-1111-5

Source

International Journal of Computer Vision

Access Statement

Open Access

Downloads

File

Author Accepted Manuscript