Vision Language Models (VLMs) have advanced rapidly in recent years, yet they still struggle with spatial hierarchical reasoning in indoor scenes.
In this study, we introduce ROOT, a VLM-based system designed to enhance the analysis of indoor scenes. Specifically, we first develop an iterative object perception algorithm based on GPT-4V to detect the object entities present in an indoor scene. We then employ vision foundation models to acquire additional meta-information about the scene, such as bounding boxes. Building on this foundational data, we propose a specialized VLM, SceneVLM, which generates spatial hierarchical scene graphs and estimates distances between objects in indoor environments, providing a richer picture of their spatial arrangement. To train SceneVLM, we collect over 610,000 images from various public indoor datasets and implement a scene data generation pipeline with a semi-automated technique to establish relationships and estimate distances among indoor objects. Using this enriched data, we explore various training recipes and obtain the final SceneVLM.
Our experiments demonstrate that ROOT facilitates indoor scene understanding and proves effective in diverse downstream applications, such as 3D scene generation and embodied AI. The code will be released at https://github.com/harrytea/ROOT.
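To make the pipeline above concrete, the snippet below is a minimal sketch of the kind of spatial hierarchical scene graph SceneVLM could produce: rooms contain supporting objects, which in turn contain child objects, each with a bounding box and pairwise distance estimates. The class names, fields, and the toy bedroom example are illustrative assumptions, not the actual ROOT data format.

```python
# Illustrative sketch only; the classes and fields are hypothetical,
# not the actual ROOT / SceneVLM output format.
from dataclasses import dataclass, field
from typing import Dict, List, Tuple


@dataclass
class SceneObject:
    name: str
    bbox: Tuple[float, float, float, float]  # (x1, y1, x2, y2) from a detector
    children: List["SceneObject"] = field(default_factory=list)  # e.g. a lamp placed on a desk


@dataclass
class SceneGraph:
    room_type: str
    objects: List[SceneObject] = field(default_factory=list)
    # Pairwise distance estimates between object names, in meters.
    distances: Dict[Tuple[str, str], float] = field(default_factory=dict)

    def add_relation(self, parent: SceneObject, child: SceneObject) -> None:
        """Attach a child object (e.g. a lamp) to its supporting parent (e.g. a desk)."""
        parent.children.append(child)


# Toy usage: a bedroom where a lamp sits on a desk, estimated 0.3 m from the monitor.
desk = SceneObject("desk", (50, 200, 400, 380))
lamp = SceneObject("lamp", (60, 120, 120, 200))
graph = SceneGraph(room_type="bedroom", objects=[desk])
graph.add_relation(desk, lamp)
graph.distances[("lamp", "monitor")] = 0.3
```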
Below we showcase a detailed example of Scene Generation with Holodeck. For other applications, please refer to our paper.
[Figure: scene generation results with ROOT and Holodeck]
ROOT's scene understanding capabilities can be combined with Holodeck for scene generation: by supplying spatial relationship information and object placement suggestions, ROOT is intended to help Holodeck generate indoor environments.
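As a rough illustration of this integration, the sketch below flattens ROOT-style support relations and distance estimates into generic placement constraints that a scene generator could consume. The function name, constraint schema, and values are hypothetical and do not reflect Holodeck's actual API.

```python
# Hypothetical glue code (not Holodeck's actual API): turn ROOT-style spatial
# relations and distance estimates into placement constraints for a room layout.
from typing import Dict, List, Tuple


def relations_to_constraints(
    support_relations: List[Dict[str, str]],        # e.g. {"child": "lamp", "parent": "desk"}
    distances: Dict[Tuple[str, str], float],        # e.g. {("lamp", "monitor"): 0.3} in meters
) -> List[Dict]:
    constraints: List[Dict] = []
    for rel in support_relations:
        # Support relation: place the child object on top of its parent.
        constraints.append({"type": "on_top_of", "object": rel["child"], "anchor": rel["parent"]})
    for (a, b), meters in distances.items():
        # Distance hint: keep the two objects roughly this far apart.
        constraints.append({"type": "distance", "objects": [a, b], "meters": meters})
    return constraints


# Toy usage: one support relation and one distance hint for a bedroom layout.
print(relations_to_constraints(
    [{"child": "lamp", "parent": "desk"}],
    {("lamp", "monitor"): 0.3},
))
```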
@article{wang2024root,
  title={ROOT: VLM based System for Indoor Scene Understanding and Beyond},
  author={Wang, Yonghui and Chen, Shi-Yong and Zhou, Zhenxing and Li, Siyi and Li, Haoran and Zhou, Wengang and Li, Houqiang},
  journal={arXiv preprint arXiv:2411.15714},
  year={2024}
}