3DInAction: Understanding Human Actions in 3D Point Clouds
| dc.contributor.author | Ben-Shabat, Yizhak | en |
| dc.contributor.author | Shrout, Oren | en |
| dc.contributor.author | Gould, Stephen | en |
| dc.date.accessioned | 2025-05-23T15:21:44Z | |
| dc.date.available | 2025-05-23T15:21:44Z | |
| dc.date.issued | 2024 | en |
| dc.description.abstract | We propose a novel method for 3D point cloud action recognition. Understanding human actions in RGB videos has been widely studied in recent years; however, its 3D point cloud counterpart remains under-explored despite the clear value that 3D information may bring. This is mostly due to the inherent limitations of the point cloud data modality (lack of structure, permutation invariance, and a varying number of points), which make it difficult to learn a spatio-temporal representation. To address these limitations, we propose the 3DInAction pipeline that first estimates patches moving in time (t-patches) as a key building block, alongside a hierarchical architecture that learns an informative spatio-temporal representation. We show that our method achieves improved performance on existing datasets, including DFAUST and IKEA ASM. Code is publicly available at https://github.com/sitzikbs/3dincaction. | en |
| dc.description.sponsorship | This project has received funding from the European Union's Horizon 2020 research and innovation programme under the Marie Sklodowska-Curie grant agreement No 893465. We also thank Microsoft for Azure Credits and the NVIDIA Academic Hardware Grant Program for providing a high-speed A5000 GPU. | en |
| dc.description.status | Peer-reviewed | en |
| dc.format.extent | 10 | en |
| dc.identifier.issn | 1063-6919 | en |
| dc.identifier.scopus | 85218340761 | en |
| dc.identifier.uri | http://www.scopus.com/inward/record.url?scp=85218340761&partnerID=8YFLogxK | en |
| dc.identifier.uri | https://hdl.handle.net/1885/733752528 | |
| dc.language.iso | en | en |
| dc.relation.ispartofseries | 2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2024 | en |
| dc.rights | Publisher Copyright: © 2024 IEEE. | en |
| dc.source | Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition | en |
| dc.subject | 3D action recognition | en |
| dc.subject | point clouds | en |
| dc.subject | spatio-temporal representation | en |
| dc.subject | temporal patches | en |
| dc.title | 3DInAction: Understanding Human Actions in 3D Point Clouds | en |
| dc.type | Conference paper | en |
| dspace.entity.type | Publication | en |
| local.bibliographicCitation.lastpage | 19987 | en |
| local.bibliographicCitation.startpage | 19978 | en |
| local.contributor.affiliation | Ben-Shabat, Yizhak; School of Computing, ANU College of Systems and Society, The Australian National University | en |
| local.contributor.affiliation | Shrout, Oren; Technion-Israel Institute of Technology | en |
| local.contributor.affiliation | Gould, Stephen; School of Computing, ANU College of Systems and Society, The Australian National University | en |
| local.identifier.doi | 10.1109/CVPR52733.2024.01888 | en |
| local.identifier.pure | 472b73a6-760c-435c-a00e-bd7c523235bb | en |
| local.identifier.url | https://www.scopus.com/pages/publications/85218340761 | en |
| local.type.status | Published | en |