3DVQA: Visual Question Answering for 3D Environments


Yasaman Etesam, Leon Kochiev, Angel X. Chang

Code | Paper

Overview



Visual Question Answering (VQA) is a widely studied problem in computer vision and natural language processing. However, current approaches to VQA have been investigated primarily in the 2D image domain. We study VQA in the 3D domain, with our input being point clouds of real-world 3D scenes instead of 2D images. We believe that this 3D data modality provides richer spatial relation information that is of interest in the VQA task. In this paper, we introduce the 3DVQA-ScanNet dataset, the first VQA dataset in 3D, and we investigate the performance of a spectrum of baseline approaches on the 3D VQA task.


Figure: Dataset statistics for our 3DVQA dataset.


Figure: Illustration of the network structure for our VoteNet + LSTM baseline.
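
For readers who want a concrete picture of this architecture, below is a minimal PyTorch sketch, not the authors' released implementation. It assumes a VoteNet-style backbone that maps a point cloud to per-proposal features; the backbone interface, layer sizes, and module names are all illustrative assumptions.

    import torch
    import torch.nn as nn

    class VoteNetLSTMBaseline(nn.Module):
        """Hypothetical VoteNet + LSTM fusion for 3D VQA (illustrative only)."""

        def __init__(self, votenet_backbone, vocab_size, num_answers,
                     embed_dim=300, hidden_dim=256, feat_dim=256):
            super().__init__()
            # Assumed interface: votenet_backbone(points) -> (B, P, feat_dim)
            # per-proposal features from a pretrained 3D object detector.
            self.votenet = votenet_backbone
            self.embed = nn.Embedding(vocab_size, embed_dim)
            self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
            self.classifier = nn.Sequential(
                nn.Linear(feat_dim + hidden_dim, 512),
                nn.ReLU(),
                nn.Linear(512, num_answers),  # answer prediction as classification
            )

        def forward(self, points, question_tokens):
            # Pool per-proposal features into a single scene vector.
            scene_feats = self.votenet(points)          # (B, P, feat_dim)
            scene_vec = scene_feats.max(dim=1).values   # (B, feat_dim)
            # Encode the question; use the final LSTM hidden state.
            _, (h_n, _) = self.lstm(self.embed(question_tokens))
            question_vec = h_n[-1]                      # (B, hidden_dim)
            # Fuse the two modalities and classify over the answer vocabulary.
            fused = torch.cat([scene_vec, question_vec], dim=-1)
            return self.classifier(fused)

The max pooling and simple concatenation here are just one plausible fusion choice; the paper evaluates a spectrum of baselines.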

Paper


Download


Below, you can download the full and sampled versions of the 3DVQA dataset.

Coming soon!

Bibtex


@inproceedings{etesam20223dvqa,
  author    = {Etesam, Yasaman and Kochiev, Leon and Chang, Angel X.},
  title     = {3DVQA: Visual Question Answering for 3D Environments},
  booktitle = {Conference on Robots and Vision (CRV)},
  year      = {2022}
}

Acknowledgements


We thank Weilian Song, Ekaterina Fedluyk, and Gleb Kumichev for fruitful discussions. This work was supported by the Canada CIFAR AI Chair program and an NSERC Discovery Grant. This research was enabled in part by support provided by WestGrid and Compute Canada.