We present PARsing And visual GrOuNding (ParaGon), a new method for grounding natural language in object placement tasks. Natural language generally describes objects and spatial relations with compositionality and ambiguity, two major obstacles to effective language grounding. To handle compositionality, ParaGon parses a language instruction into an object-centric graph representation and grounds each object individually. To handle ambiguity, ParaGon uses a novel particle-based graph neural network to reason about object placements under uncertainty.
In essence, ParaGon integrates a parsing algorithm into a probabilistic, data-driven learning framework. It is fully differentiable and trained end-to-end from data, which makes it robust to complex, ambiguous language input, while its parsing-based structure provides generalizability and data efficiency.
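To make the object-centric graph representation concrete, here is a purely illustrative sketch in Python. The example instruction, the class names (`ObjectNode`, `RelationEdge`, `InstructionGraph`), and the field layout are our own assumptions for exposition, not ParaGon's actual data structures.

```python
# Illustrative only: an object-centric graph for the instruction
# "Put the mug to the left of the laptop, behind the bowl."
# Nodes are object phrases; edges are spatial relations that constrain
# where the target object may be placed relative to reference objects.

from dataclasses import dataclass, field


@dataclass
class ObjectNode:
    phrase: str              # noun phrase naming the object
    is_target: bool = False  # True for the object being placed


@dataclass
class RelationEdge:
    relation: str   # spatial relation, e.g. "left of", "behind"
    source: int     # index of the target object node
    reference: int  # index of the reference object node


@dataclass
class InstructionGraph:
    nodes: list = field(default_factory=list)
    edges: list = field(default_factory=list)


graph = InstructionGraph(
    nodes=[
        ObjectNode("the mug", is_target=True),  # node 0: object to place
        ObjectNode("the laptop"),               # node 1: reference object
        ObjectNode("the bowl"),                 # node 2: reference object
    ],
    edges=[
        RelationEdge("left of", source=0, reference=1),
        RelationEdge("behind", source=0, reference=2),
    ],
)

# Each reference node can be grounded to the scene individually, and each
# edge contributes a constraint on the placement of the target object.
```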
ParaGon reduces the complexity of embedding-based grounding by parsing complex sentences into simple, compositional structures, and it learns generalizable parsing rules from data to improve robustness. It further handles the uncertainty of ambiguous instructions using particle-based probabilistic techniques. Our experiments show that ParaGon outperforms the state-of-the-art method on language-conditioned object placement tasks involving ambiguous and compositional instructions. The experiments use real, noisy human instructions and photo-realistic visual data, reflecting the difficulty of language grounding in realistic settings and demonstrating ParaGon's potential for real-world applications.
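The particle-based reasoning can also be illustrated with a minimal sketch. The snippet below assumes 2D table-top coordinates, hand-picked reference object positions, and fixed per-relation offsets; in ParaGon these quantities come from learned, differentiable components, so this is only a conceptual illustration of how particles can represent placement uncertainty, not the paper's network.

```python
# Hypothetical sketch of particle-based placement reasoning in 2D.
import numpy as np

rng = np.random.default_rng(0)
K = 256  # number of particles representing candidate placements

# Assumed grounded 2D positions of the reference objects (in meters).
ref_positions = {
    "the laptop": np.array([0.60, 0.30]),
    "the bowl": np.array([0.40, 0.55]),
}

# Assumed mean offsets induced by each spatial relation (learned in practice).
relation_offsets = {
    "left of": np.array([-0.15, 0.00]),
    "behind": np.array([0.00, 0.15]),
}

edges = [("left of", "the laptop"), ("behind", "the bowl")]

# Initialize particles uniformly over a 1 m x 1 m workspace.
particles = rng.uniform(low=[0.0, 0.0], high=[1.0, 1.0], size=(K, 2))
weights = np.ones(K) / K

# One message-passing round: each edge sends a placement distribution
# centered at its reference object plus the relation offset, and every
# particle is reweighted by how well it agrees with all messages.
for relation, ref in edges:
    mean = ref_positions[ref] + relation_offsets[relation]
    sq_dist = np.sum((particles - mean) ** 2, axis=1)
    weights *= np.exp(-sq_dist / (2 * 0.05 ** 2))  # Gaussian likelihood
weights /= weights.sum()

# Resample to concentrate particles on plausible placements, then read out
# an estimate of where to put the target object.
idx = rng.choice(K, size=K, p=weights)
particles = particles[idx]
print("estimated placement:", particles.mean(axis=0))
```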
If ParaGon is useful in your research, please cite:
@INPROCEEDINGS{zhao2023paragon,
  author={Zhao, Zirui and Lee, Wee Sun and Hsu, David},
  booktitle={2023 IEEE International Conference on Robotics and Automation (ICRA)},
  title={Differentiable Parsing and Visual Grounding of Natural Language Instructions for Object Placement},
  year={2023},
  pages={11546-11553},
  doi={10.1109/ICRA48891.2023.10160640}}