Joint Embedding of 3D Scan and CAD Objects
ICCV 2019

Abstract

3D scan geometry and CAD models often contain complementary information about an environment, which could be leveraged by establishing a mapping between the two domains. This is a challenging task, however, due to strong low-level differences between scan and CAD geometry. We propose a novel 3D CNN-based approach to learn a joint embedding space representing object similarities across these domains, where semantically similar objects from both domains lie close together. To learn a shared space where scan objects and CAD models can interlace, we propose a stacked hourglass approach that separates foreground from background in a scan object and transforms it into a complete, CAD-like representation, producing the shared embedding space. This embedding space can then be used for CAD model retrieval; to further enable this task, we introduce a new dataset of ranked scan-CAD similarity annotations, enabling fine-grained evaluation of CAD model retrieval from cluttered, noisy, partial scans. Our learned joint embedding outperforms the current state of the art for CAD model retrieval by 12% in instance retrieval accuracy.
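Once scan objects and CAD models are mapped into the shared embedding space described above, CAD model retrieval reduces to a nearest-neighbor lookup among the embedded CAD database. The sketch below illustrates this retrieval step only; the embeddings are random stand-ins (the actual vectors would come from the paper's 3D CNN encoders), and the dimensionality and cosine-similarity metric are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
EMBED_DIM = 128  # assumed embedding size, for illustration only

# Stand-in database of CAD model embeddings (in practice, produced by
# embedding each CAD model with the learned network).
cad_embeddings = rng.standard_normal((1000, EMBED_DIM))
cad_embeddings /= np.linalg.norm(cad_embeddings, axis=1, keepdims=True)

def retrieve_cad(scan_embedding: np.ndarray, k: int = 5) -> np.ndarray:
    """Return indices of the k CAD models closest to a scan embedding.

    Uses cosine similarity on L2-normalized vectors, so the nearest
    models are those with the largest dot product against the query.
    """
    query = scan_embedding / np.linalg.norm(scan_embedding)
    sims = cad_embeddings @ query      # cosine similarity to each CAD model
    return np.argsort(-sims)[:k]      # indices of the top-k matches

# Usage: embed a scan object (random stand-in here) and retrieve CAD models.
scan_emb = rng.standard_normal(EMBED_DIM)
top_k = retrieve_cad(scan_emb, k=5)
```

In practice an approximate nearest-neighbor index would replace the brute-force dot product for large CAD databases, but the retrieval principle is the same.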



If you find our work useful, please consider citing it as follows:


@inproceedings{dahnert2019embedding,
    title={Joint Embedding of 3D Scan and CAD Objects},
    author={Dahnert, Manuel and Dai, Angela and Guibas, Leonidas and Nie{\ss}ner, Matthias},
    booktitle={Proceedings of the International Conference on Computer Vision (ICCV)},
    year={2019}
}