Mirror3D: Depth Refinement for Mirror Surfaces

Simon Fraser University

We present the task of 3D mirror plane prediction and depth refinement. First, we annotate several popular RGBD datasets (Matterport3D, ScanNet, NYUv2) with 3D mirror planes. Our benchmarks show that both the raw 'ground truth' depth in existing RGBD datasets and state-of-the-art depth estimation and depth completion methods exhibit dramatic errors on mirror surfaces. We propose an architecture for 3D mirror plane estimation that refines depth estimates and produces more reliable reconstructions (compare left and right depth and point cloud pairs from the NYUv2 dataset).


Despite recent progress in depth sensing and 3D reconstruction, mirror surfaces remain a significant source of errors. To address this problem, we create the Mirror3D dataset: a 3D mirror plane dataset based on three RGBD datasets (Matterport3D, NYUv2, and ScanNet) containing 7,011 mirror instance masks and 3D planes. We then develop Mirror3DNet: a module that refines raw sensor depth or estimated depth to correct errors on mirror surfaces. Our key idea is to estimate the 3D mirror plane from the RGB input and surrounding depth context, and to use this estimate to directly regress mirror surface depth. Our experiments show that Mirror3DNet significantly mitigates errors from a variety of input depth data, including raw sensor depth and the output of depth estimation or completion methods.
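Once a 3D mirror plane is estimated, the depth inside the mirror mask follows from geometry alone: each pixel's camera ray is intersected with the plane. A minimal sketch of this final step (not the paper's actual implementation; the function name and plane parameterization `ax + by + cz + d = 0` in camera coordinates are our assumptions):

```python
import numpy as np

def refine_mirror_depth(depth, mask, plane, K):
    """Overwrite depth inside a mirror mask with ray-plane intersections.

    depth: (H, W) input depth map (raw sensor or estimated).
    mask:  (H, W) boolean mirror instance mask.
    plane: (a, b, c, d) with ax + by + cz + d = 0 in camera coordinates.
    K:     (3, 3) camera intrinsics.
    """
    n = np.asarray(plane[:3], dtype=np.float64)
    d = float(plane[3])
    v, u = np.nonzero(mask)
    # Back-project each masked pixel to a ray direction K^-1 [u, v, 1]^T.
    rays = np.linalg.inv(K) @ np.stack([u, v, np.ones_like(u)]).astype(np.float64)
    # Point z * ray lies on the plane when n . (z * ray) + d = 0.
    z = -d / (n @ rays)
    out = depth.astype(np.float64).copy()
    out[v, u] = z
    return out
```

For a fronto-parallel mirror at z = 2 m, every masked pixel receives depth 2.0 regardless of the (typically corrupted) input depth.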

Project Video

Dataset Overview

We provide 7,011 mirror instance masks and 3D plane annotations based on three popular RGBD datasets. To obtain the original datasets, please follow the instructions on their official websites (link: Matterport3D, NYUv2, ScanNet).
Below are visualizations of our Mirror3D dataset. In each image pair, the mirror mask is shown as a transparent red overlay on the RGB image, the mirror plane is in light blue on the point cloud, and erroneous raw-depth points that incorrectly fall behind the mirror plane are shaded in orange.
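The orange points can be identified with a simple signed-distance test against the annotated plane. A hedged sketch (the helper name and the sign convention that the camera sits on the positive side of the plane are our assumptions):

```python
import numpy as np

def points_behind_plane(points, plane, eps=0.01):
    """Flag 3D points lying behind a mirror plane.

    points: (N, 3) points in camera coordinates.
    plane:  (a, b, c, d) with ax + by + cz + d = 0, oriented so the
            camera (origin) is on the positive side.
    eps:    tolerance in meters to ignore points essentially on the plane.
    """
    n = np.asarray(plane[:3], dtype=np.float64)
    d = float(plane[3])
    # Signed distance of each point to the plane; negative means the
    # point is on the far side of the mirror from the camera.
    signed = (points @ n + d) / np.linalg.norm(n)
    return signed < -eps
```

For a mirror at z = 2 m (plane (0, 0, -1, 2), so the origin has positive signed distance), a raw-depth point at z = 3 m is flagged while a point at z = 1 m is not.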

NYUv2 Mirror

Matterport3D Mirror

ScanNet Mirror




View Source Code

NYUv2 Mirror

Size: 8.5 MB


Matterport3D Mirror

Size: 858 MB


ScanNet Mirror

Size: 100.3 MB


Paper and Bibtex



@inproceedings{tan2021mirror3d,
  author = {Tan, Jiaqi and Lin, Weijie and Chang, Angel X and Savva, Manolis},
  title = {{Mirror3D}: Depth Refinement for Mirror Surfaces},
  booktitle = {Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
  month = {June},
  year = {2021}
}



Jiaqi Tan, Weijie Lin, Angel X. Chang, and Manolis Savva. Mirror3D: Depth Refinement for Mirror Surfaces. CVPR 2021.

*Note: compared to the camera-ready version, we fixed some dataset annotation errors and added some previously unannotated images to the dataset. Therefore, the quantitative and qualitative results have been updated to correspond to the latest version of the dataset.


Many thanks to Yiming Zhang for help in developing annotation verification and visualization tools and the dataset website. We also thank Hanxiao Jiang, Yongsen Mao and Yiming Zhang for their help with dataset annotation. We thank Shitao Tang for helpful early conversations on the Mirror3DNet neural architecture used in our work. We are grateful to the anonymous reviewers for their helpful suggestions. This research was enabled in part by support provided by WestGrid and Compute Canada. Angel X. Chang is supported by a Canada CIFAR AI Chair, and Manolis Savva by a Canada Research Chair and NSERC Discovery Grant.

Last updated: 2021-05-29