Differentiable Parsing and Visual Grounding of Natural Language Instructions for Object Placement

ICRA 2023
Zirui Zhao, Wee Sun Lee, David Hsu
School of Computing, National University of Singapore
[paper] [appendix] [code]

Abstract

We present a new method, PARsing And visual GrOuNding (ParaGon), for grounding natural language in object placement tasks. Natural language generally describes objects and spatial relations with compositionality and ambiguity, two major obstacles to effective language grounding. For compositionality, ParaGon parses a language instruction into an object-centric graph representation to ground objects individually. For ambiguity, ParaGon uses a novel particle-based graph neural network to reason about object placements with uncertainty. Essentially, ParaGon integrates a parsing algorithm into a probabilistic, data-driven learning framework. It is fully differentiable and trained end-to-end from data for robustness against complex, ambiguous language input.

ParaGon

ParaGon integrates parsing-based methods into a probabilistic, data-driven framework. Its data-driven nature makes it robust to noisy, real-world language, while its parsing-based structure provides generalizability and data efficiency.
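To make the object-centric graph idea concrete, here is a toy, purely illustrative sketch of parsing a placement instruction into (relation, reference) edges around a target object. ParaGon learns its parsing rules from data and is differentiable; the hand-written keyword matching, relation labels, and object vocabulary below are stand-in assumptions, not the paper's actual schema.

```python
# Toy object-centric parse: instruction -> target object + relation edges.
# All vocabularies here are illustrative assumptions.

RELATIONS = {
    "on": "on",
    "left": "left_of",
    "right": "right_of",
    "behind": "behind",
    "near": "near",
}

NOUNS = {"mug", "plate", "table", "bowl"}  # assumed object vocabulary


def parse_to_graph(instruction):
    """Return the object to place and a list of (relation, reference) edges."""
    tokens = instruction.lower().replace(",", "").split()
    target, edges, pending = None, [], None
    for tok in tokens:
        if tok in RELATIONS:
            pending = RELATIONS[tok]  # remember the relation until its noun
        elif tok in NOUNS:
            if target is None:
                target = tok          # first noun names the object to place
            elif pending is not None:
                edges.append((pending, tok))
                pending = None
    return target, edges


target, edges = parse_to_graph("Put the mug on the table, left of the plate")
# target == "mug"; edges == [("on", "table"), ("left_of", "plate")]
```

Each edge can then be grounded against the scene independently, which is what reduces the compositional burden compared to embedding a whole sentence at once.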

ParaGon reduces the complexity of embedding-based grounding by parsing complex sentences into simple, compositional structures, and it learns generalizable parsing rules from data to improve robustness. It further adapts to the uncertainty of ambiguous instructions using particle-based probabilistic techniques. Our experiments show that ParaGon outperforms the state-of-the-art method on language-conditioned object placement tasks in the presence of ambiguity and compositionality. The experiments involve real, noisy human instructions and photo-realistic visual data, reflecting the difficulties of language grounding in realistic settings and demonstrating ParaGon's potential for real-world application.
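The particle-based treatment of ambiguity can be sketched as a standard importance-weighting-and-resampling step: candidate placements are sampled, weighted by how well they satisfy soft spatial relations, and resampled so the particle set concentrates on plausible placements. The scoring functions and parameters below are assumptions for illustration, not ParaGon's learned network.

```python
# Generic particle step for an uncertain placement: sample candidate
# (x, y) placements, weight by soft relation compatibility, resample.
import math
import random


def relation_score(x, y, relation, ref):
    """Soft compatibility of placement (x, y) with one relation (assumed forms)."""
    rx, ry = ref
    if relation == "left_of":
        return 1.0 / (1.0 + math.exp(-10.0 * (rx - x)))  # prefer x < rx
    if relation == "near":
        return math.exp(-((x - rx) ** 2 + (y - ry) ** 2) / 0.05)
    return 1.0


def resample_placements(particles, relations, rng):
    """Weight particles by all relations jointly, then resample with replacement."""
    weights = []
    for x, y in particles:
        w = 1.0
        for relation, ref in relations:
            w *= relation_score(x, y, relation, ref)
        weights.append(w)
    return rng.choices(particles, weights=weights, k=len(particles))


rng = random.Random(0)
particles = [(rng.uniform(0.0, 1.0), rng.uniform(0.0, 1.0)) for _ in range(200)]
# "left of the plate" and "near the plate", with the plate at (0.8, 0.5):
relations = [("left_of", (0.8, 0.5)), ("near", (0.8, 0.5))]
particles = resample_placements(particles, relations, rng)
mean_x = sum(x for x, _ in particles) / len(particles)
# after resampling, particles concentrate left of and near the reference
```

Keeping a weighted particle set, rather than a single point estimate, is what lets an ambiguous instruction ("near the plate") remain a distribution over placements instead of a premature commitment.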

Citation

If ParaGon is useful in your research, please cite: