Learning Deep Feature Representations of 3D Point Cloud Data

dc.contributor.author: Qiu, Shi
dc.date.accessioned: 2023-07-25T14:53:44Z
dc.date.available: 2023-07-25T14:53:44Z
dc.date.issued: 2023
dc.description.abstract: As 3D data acquisition techniques develop rapidly, different types of 3D scanners, such as LiDAR scanners and RGB-D cameras, are becoming common in daily life. More importantly, 3D scanners capture data that enables intelligent machines to better see and recognize the world. As a fundamental 3D data representation, point clouds can be easily collected with 3D scanners and retain abundant information for AI-driven applications such as autonomous driving, virtual/augmented reality, and robotics. Given the current prominence of deep neural networks, deep learning-based point cloud understanding plays an essential role in 3D computer vision research. In this thesis, we focus on learning deep feature representations of point clouds for 3D data processing and analysis. We begin by investigating low-level vision problems of 3D point clouds, which helps us understand and handle the inherent sparsity, irregularity, and unorderedness of this 3D data type. On this front, we introduce a novel transformer-based model that fully utilizes the dependencies between scattered points for high-fidelity point cloud upsampling. We then explore in depth the high-level vision problems of point cloud analysis, including the classification, segmentation, and detection tasks. Specifically, we propose to (i) learn more geometric information for accurate point cloud classification by enriching geometric context in the low-level 3D space and high-level feature spaces with an attention-based error-correcting feedback structure; (ii) exploit dense-resolution features for small-scale point cloud recognition using a novel local grouping method and a local feature error-minimizing module; (iii) augment local context for large-scale point cloud analysis via a bilateral local context augmentation structure and an adaptive feature fusion method; and (iv) refine basic point feature representations with a plug-and-play module that incorporates both local context and global bilinear response, benefiting various point cloud recognition problems and different baseline models. Through comprehensive experiments, ablation studies, and visualizations, we quantitatively and qualitatively demonstrate our contributions to deep learning-based 3D point cloud research. Overall, this thesis presents a review of deep learning-based 3D point cloud research, introduces our contributions in learning deep feature representations of point cloud data, and proposes directions for future work. Beyond summarizing a body of PhD work, we expect this thesis to inspire further exploration of 3D computer vision and its applications.
dc.identifier.uri: http://hdl.handle.net/1885/294551
dc.language.iso: en_AU
dc.title: Learning Deep Feature Representations of 3D Point Cloud Data
dc.type: Thesis (PhD)
local.contributor.affiliation: ANU College of Engineering, Computing and Cybernetics, The Australian National University
local.contributor.authoremail: u6548414@anu.edu.au
local.contributor.supervisor: Barnes, Nicholas
local.contributor.supervisorcontact: u4591576@anu.edu.au
local.identifier.doi: 10.25911/QS52-VX60
local.identifier.proquest: Yes
local.mintdoi: mint
local.thesisANUonly.author: 74f93947-5870-4b53-b476-120006af9c74
local.thesisANUonly.key: e12834f5-e2a0-7856-3e12-211fc75f177d
local.thesisANUonly.title: 000000022128_TC_1

Downloads

Original bundle

Name: ShiQIU_final_thesis_2023.pdf
Size: 20.95 MB
Format: Adobe Portable Document Format
Description: Thesis Material