3D scene generation for zero-shot learning using ChatGPT guided language prompts
| dc.contributor.author | Ahmadi, Sahar | en |
| dc.contributor.author | Cheraghian, Ali | en |
| dc.contributor.author | Chowdhury, Townim Faisal | en |
| dc.contributor.author | Saberi, Morteza | en |
| dc.contributor.author | Rahman, Shafin | en |
| dc.date.accessioned | 2025-05-23T07:25:33Z | |
| dc.date.available | 2025-05-23T07:25:33Z | |
| dc.date.issued | 2024 | en |
| dc.description.abstract | Zero-shot learning in the realm of 3D point cloud data remains relatively unexplored compared to its 2D image counterpart. This domain introduces fresh challenges due to the absence of robust pre-trained feature extraction models. To tackle this, we introduce a prompt-guided method for 3D scene generation and supervision, enhancing the network's ability to comprehend the intricate relationships between seen and unseen objects. Initially, we utilize basic prompts resembling scene annotations generated from one or two point cloud objects. Recognizing the limited diversity of basic prompts, we employ ChatGPT to expand them, enriching the contextual information within the descriptions. Subsequently, leveraging these descriptions, we arrange point cloud objects' coordinates to fabricate augmented 3D scenes. Lastly, employing contrastive learning, we train our proposed architecture end-to-end, utilizing pairs of 3D scenes and prompt-based captions. We posit that 3D scenes capture object relationships more effectively than individual objects, as demonstrated by the effectiveness of language models like BERT in contextual understanding. Our prompt-guided scene generation method amalgamates data augmentation and prompt-based annotation, thereby enhancing 3D ZSL performance. We present ZSL and generalized ZSL results on both synthetic (ModelNet40, ModelNet10, and ShapeNet) and real-scanned (ScanObjectNN) 3D object datasets. Furthermore, we challenge the model by training with synthetic data and testing with real-scanned data, achieving state-of-the-art performance compared to existing 2D and 3D ZSL methods in the literature. Codes and models are available at: https://github.com/saharahmadisohraviyeh/ChatGPT_ZSL_3D. | en |
| dc.description.sponsorship | This work was supported by the Conference Travel and Research Grants (CTRG) 2023–2024 from North South University, under Grant ID: CTRG-23-SEPS-20. | en |
| dc.description.status | Peer-reviewed | en |
| dc.identifier.issn | 1077-3142 | en |
| dc.identifier.scopus | 85208261901 | en |
| dc.identifier.uri | http://www.scopus.com/inward/record.url?scp=85208261901&partnerID=8YFLogxK | en |
| dc.identifier.uri | https://hdl.handle.net/1885/733751735 | |
| dc.language.iso | en | en |
| dc.rights | Publisher Copyright: © 2024 Elsevier Inc. | en |
| dc.source | Computer Vision and Image Understanding | en |
| dc.subject | Contrastive learning | en |
| dc.subject | Deep learning | en |
| dc.subject | Point cloud object | en |
| dc.subject | Zero-shot learning | en |
| dc.title | 3D scene generation for zero-shot learning using ChatGPT guided language prompts | en |
| dc.type | Journal article | en |
| dspace.entity.type | Publication | en |
| local.contributor.affiliation | Cheraghian, Ali; School of Computing, ANU College of Systems and Society, The Australian National University | en |
| local.contributor.affiliation | Chowdhury, Townim Faisal; University of Adelaide | en |
| local.contributor.affiliation | Saberi, Morteza; University of Technology Sydney | en |
| local.contributor.affiliation | Rahman, Shafin; North South University | en |
| local.identifier.citationvolume | 249 | en |
| local.identifier.doi | 10.1016/j.cviu.2024.104211 | en |
| local.identifier.pure | 45b06ca5-0ab8-4d4f-8f84-955bbb527f6a | en |
| local.identifier.url | https://www.scopus.com/pages/publications/85208261901 | en |
| local.type.status | Published | en |