We introduce the Reality-linked 3D Scenes (R3DS) dataset of synthetic 3D scenes mirroring real-world scene arrangements from Matterport3D. Compared to prior work, R3DS has more complete and densely populated scenes, with objects linked to real-world observations in panoramas. R3DS also provides an object support hierarchy and matching object sets (e.g., the same chairs around a dining table) for each scene.
Overall, R3DS contains 19K objects represented by 3784 distinct CAD models from over 100 object categories. We demonstrate the effectiveness of R3DS on the Panoramic Scene Understanding task. We find that: 1) training on R3DS enables better generalization; 2) support relation prediction trained with R3DS improves performance compared to heuristically calculated support; and 3) R3DS offers a challenging benchmark for future work on panoramic scene understanding.
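To make the support hierarchy concrete, below is a minimal Python sketch of traversing a per-scene support tree. The JSON schema (field names "id", "category", "supports") and the file name are illustrative assumptions, not the actual R3DS file format.

import json

def print_support_tree(node, depth=0):
    # Recursively print objects with indentation reflecting support depth
    # (e.g., a lamp supported by a table supported by the floor).
    print("  " * depth + f"{node['category']} (id={node['id']})")
    for child in node.get("supports", []):
        print_support_tree(child, depth + 1)

# Hypothetical file name; the root node is typically the architecture/floor.
with open("scene_support_hierarchy.json") as f:
    root = json.load(f)
print_support_tree(root)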
Examples of R3DS scenes colored by object instance, along with paired 360° videos of the synthetic scene and the real scan rendered from the same camera pose.
Examples of R3DS scenes colored by annotated CAD model, illustrating the matched object instance sets that are one of the main characteristics of R3DS scenes.
As R3DS scenes are linked to real panoramas, textured 3D architecture is easily obtained by projection; in the examples below, objects are colored by semantic category.
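As a rough illustration of such projection, the sketch below maps mesh vertices into an equirectangular panorama to sample per-vertex colors. The pose format and axis conventions (z forward, y up, world-to-camera rotation) are assumptions for this example, not the R3DS API.

import numpy as np

def project_to_panorama(points, cam_pos, cam_rot, pano_hw):
    # points:  (N, 3) world-space vertex positions.
    # cam_pos: (3,) camera center; cam_rot: (3, 3) world-to-camera rotation.
    # pano_hw: (height, width) of the equirectangular panorama.
    h, w = pano_hw
    d = (cam_rot @ (points - cam_pos).T).T            # camera-frame directions
    d = d / np.linalg.norm(d, axis=1, keepdims=True)
    lon = np.arctan2(d[:, 0], d[:, 2])                # azimuth in [-pi, pi]
    lat = np.arcsin(np.clip(d[:, 1], -1.0, 1.0))      # elevation in [-pi/2, pi/2]
    cols = ((lon / (2 * np.pi) + 0.5) * (w - 1)).astype(int)
    rows = ((0.5 - lat / np.pi) * (h - 1)).astype(int)
    return rows, cols

# Usage with a loaded panorama image `pano` of shape (H, W, 3):
# rows, cols = project_to_panorama(vertices, pos, rot, pano.shape[:2])
# vertex_colors = pano[rows, cols]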
We demonstrate the value of R3DS on the Panoramic Scene Understanding task. We show that the more complete layouts and support relations in our dataset enable better performance and generalization. Our dataset offers a challenging benchmark for future work in scene understanding.
@article{wu2024r3ds,
  title={R3DS: Reality-linked 3D Scenes for Panoramic Scene Understanding},
  author={Wu, Qirui and Raychaudhuri, Sonia and Ritchie, Daniel and Savva, Manolis and Chang, Angel X.},
  year={2024},
  eprint={2403.12301},
  archivePrefix={arXiv},
  primaryClass={cs.CV}
}