Multi-modal Situated Reasoning in 3D Scenes

NeurIPS 2024 Datasets and Benchmarks Track

1Beijing Institute for General Artificial Intelligence, 2Peking University

Abstract

Situation awareness is essential for embodied AI agents to understand and reason about 3D scenes. However, existing datasets and benchmarks for situated understanding suffer from severe limitations in data modality, scope, diversity, and scale.

To address these limitations, we propose Multi-modal Situated Question Answering (MSQA), a large-scale multi-modal situated reasoning dataset, collected scalably by leveraging 3D scene graphs and vision-language models (VLMs) across a diverse range of real-world 3D scenes. MSQA includes 251K situated question-answering pairs spanning 9 distinct question categories, covering complex scenarios and object modalities within 3D scenes. Our benchmark introduces a novel interleaved multi-modal input setting that combines text, images, and point clouds to describe situations and questions, resolving the ambiguity that arises when situations are described with a single modality (e.g., text).
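
To make the interleaved input setting concrete, the sketch below shows what a single MSQA sample might look like. The field names and values are illustrative assumptions, not the released data schema.

```python
# Hypothetical MSQA sample (illustrative field names and values, not the
# released schema). The situation interleaves text spans with image
# references, and the agent's pose disambiguates spatial relations.
msqa_sample = {
    "scene_id": "scene0000_00",  # assumed scene identifier format
    "situation": [
        {"type": "text", "value": "You are standing beside the"},
        {"type": "image", "value": "object_crops/armchair_12.png"},
        {"type": "text", "value": ", facing the window."},
    ],
    "location": [1.2, 0.4, 0.0],                # agent position (x, y, z)
    "orientation": [0.0, 0.0, 0.7071, 0.7071],  # agent rotation as a quaternion
    "question": "What is on the table to your left?",
    "question_type": "existence",               # one of the 9 question categories
    "answer": "a lamp",
}
```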

Additionally, we devise the Multi-modal Situated Next-step Navigation (MSNN) benchmark to evaluate models' grounding of actions and transitions between situations. Comprehensive evaluations on reasoning and navigation tasks highlight the limitations of existing vision-language models and underscore the importance of handling interleaved multi-modal inputs and of situation modeling. Experiments on data scaling and cross-domain transfer further demonstrate the effectiveness of MSQA as a pre-training dataset for developing more capable situated reasoning models, contributing to advancements in 3D scene understanding for embodied AI.
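
For a sense of the MSNN task format, here is a similarly hypothetical sketch: given a situation and a navigation instruction, the model predicts the immediate next action toward the target. The schema and action vocabulary shown are assumptions, not the released format.

```python
# Hypothetical MSNN sample (illustrative; the actual schema and action
# space may differ). The model must ground the current situation and
# predict the immediate next step toward the goal object.
msnn_sample = {
    "scene_id": "scene0000_00",
    "situation": "You are sitting on the sofa, facing the TV.",
    "instruction": "Walk to the refrigerator.",
    "next_action": "turn right",  # assumed discrete action vocabulary
}
```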

Benchmarks

An overview of the benchmarking tasks in MSR3D. Green boxes mark objects mentioned in situation descriptions, red boxes mark objects in questions, and purple boxes mark objects in navigation instructions.


Data Distribution

Question distribution of MSQA.


MSR3D

MSR3D takes multi-modal inputs: a 3D point cloud of the scene, an interleaved text-image situation description, the agent's location and orientation, and a question.
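
As a minimal sketch of this input interface, assuming a PyTorch implementation (the class, field names, and tensor shapes below are illustrative, not the released code):

```python
from dataclasses import dataclass
from typing import List, Union

import torch


@dataclass
class SituatedInput:
    """Illustrative container for MSR3D's multi-modal inputs."""
    point_cloud: torch.Tensor                  # (N, 6): xyz coordinates + rgb colors
    situation: List[Union[str, torch.Tensor]]  # interleaved text spans and image tensors
    location: torch.Tensor                     # (3,): agent position in the scene
    orientation: torch.Tensor                  # (4,): agent facing direction (quaternion)
    question: str


# Example instantiation with dummy tensors (shapes are assumptions).
sample = SituatedInput(
    point_cloud=torch.rand(40960, 6),
    situation=[
        "You are standing beside the",
        torch.rand(3, 224, 224),  # image crop of the referenced object
        ", facing the window.",
    ],
    location=torch.tensor([1.2, 0.4, 0.0]),
    orientation=torch.tensor([0.0, 0.0, 0.7071, 0.7071]),
    question="What is on the table to your left?",
)
```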


Data Collection Pipeline

An overview of our data collection pipeline, including situated scene graph generation, situated QA pair generation, and various post-processing procedures.
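
The sketch below outlines these three stages as placeholder functions. The function names are illustrative, and the exact prompting and filtering rules are assumptions beyond what the caption states.

```python
# High-level sketch of the data collection pipeline (illustrative only;
# bodies are placeholders for the actual implementation).

def build_situated_scene_graph(scene, location, orientation):
    """Re-orient the scene graph relative to the agent's pose, so that
    spatial relations (left/right, front/behind) become situation-dependent."""
    ...


def generate_qa_pairs(situated_graph, vlm):
    """Prompt a vision-language model with the situated scene graph to
    produce question-answer pairs across the 9 question categories."""
    ...


def post_process(qa_pairs):
    """Filter and refine generated pairs, e.g., removing ambiguous or
    unanswerable questions (the exact checks here are assumptions)."""
    ...
```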


Data Sample


Note: Use the dropdown menus to choose a scene and a corresponding situation within it. Drag to move your view around.