TL;DR: Visuomotor policies trained with rich language instructions and failure recovery behaviors demonstrate superior robustness and adaptability.
Rich language instructions provide comprehensive details for failure recovery, such as failure analysis, spatial movements, target object attributes, and the expected outcome. They guide the policy toward more accurate control while also serving as a form of regularization that prevents overfitting and improves generalization.
Developing robust and correctable visuomotor policies for robotic manipulation is challenging due to the lack of self-recovery mechanisms from failures and the limitations of simple language instructions in guiding robot actions. To address these issues, we propose a scalable data generation pipeline that automatically augments expert demonstrations with failure recovery trajectories and fine-grained language annotations for training. We then introduce Rich languAge-guided failure reCovERy (RACER), a supervisor-actor framework that combines failure recovery data with rich language descriptions to enhance robot control. RACER features a vision-language model (VLM) that acts as an online supervisor, providing detailed language guidance for error correction and task execution, and a language-conditioned visuomotor policy that acts as an actor to predict the next actions. Our experimental results show that RACER outperforms the state-of-the-art Robotic View Transformer (RVT) on RLBench across various evaluation settings, including standard long-horizon tasks, dynamic goal-change tasks, and zero-shot unseen tasks, achieving superior performance in both simulated and real-world environments.
RACER is a flexible supervisor-actor framework that enhances robotic manipulation through language guidance for failure recovery.
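To make the supervisor-actor interaction concrete, here is a minimal Python sketch of the control loop. The class names, method signatures, and the simplified environment interface are illustrative assumptions, not the released RACER implementation.

```python
# Minimal sketch of a supervisor-actor control loop in the spirit of RACER.
# Class and method names are illustrative placeholders, not the released RACER API.

class VLMSupervisor:
    """Vision-language model that watches the rollout and produces rich language guidance."""

    def guide(self, observation, task_goal, history):
        # Should return a detailed instruction covering failure analysis, spatial
        # movement, target object attributes, and the expected outcome.
        raise NotImplementedError


class VisuomotorActor:
    """Language-conditioned visuomotor policy that predicts the next robot action."""

    def act(self, observation, instruction):
        # Should return the next end-effector action (e.g. target pose and gripper state).
        raise NotImplementedError


def rollout(env, supervisor, actor, task_goal, max_steps=25):
    """Alternate between supervisor guidance and actor execution until the task ends."""
    observation = env.reset()
    history = []
    for _ in range(max_steps):
        # The supervisor inspects the current scene and issues rich guidance,
        # including corrective instructions if the previous action failed.
        instruction = supervisor.guide(observation, task_goal, history)
        action = actor.act(observation, instruction)
        observation, done = env.step(action)
        history.append((instruction, action))
        if done:
            break
    return history
```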
We propose a scalable rich language-guided failure recovery data augmentation pipeline to collect new trajectories from expert demos.
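Below is a rough sketch of how an expert demonstration could be augmented with a failure-recovery segment. The perturbation heuristic, the step dictionary layout, and the `annotate_with_rich_language` helper are hypothetical simplifications; the actual pipeline is described in the paper.

```python
import copy
import random


def annotate_with_rich_language(step):
    # Placeholder: in practice a large language model would generate a detailed
    # instruction describing the failure, the corrective motion, the target
    # object, and the expected outcome for this step.
    return f"{step['outcome']}: move the gripper to {step['action']['position']}"


def augment_with_failure_recovery(expert_demo, perturb_scale=0.05):
    """Inject a synthetic failure at one keyframe of an expert demo, then append
    the original expert keyframe as the recovery action."""
    augmented = []
    failure_index = random.randrange(len(expert_demo))
    for i, step in enumerate(expert_demo):
        if i == failure_index:
            # Perturb the expert action so this keyframe fails (e.g. offset the gripper pose).
            failed_step = copy.deepcopy(step)
            failed_step["action"]["position"] = [
                p + random.uniform(-perturb_scale, perturb_scale)
                for p in step["action"]["position"]
            ]
            failed_step["outcome"] = "failure"
            augmented.append(failed_step)
            # The original expert keyframe then serves as the recovery step.
            recovery_step = copy.deepcopy(step)
            recovery_step["outcome"] = "recovery"
            augmented.append(recovery_step)
        else:
            success_step = copy.deepcopy(step)
            success_step["outcome"] = "success"
            augmented.append(success_step)
    # Finally, every transition is paired with a rich language annotation.
    return [dict(step, instruction=annotate_with_rich_language(step)) for step in augmented]
```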
We evaluate the RVT baseline, RACER-scratch (RACER trained with rich instructions on real-world data only), RACER-simple (RACER trained with simple instructions on both simulated and real-world data), and RACER-rich (RACER trained with rich instructions on both simulated and real-world data).
(→ indicates that the task goal is changed online during execution to test model robustness.)
Video comparisons (×4): RVT, RACER-scratch, and RACER-simple fail; RACER-rich succeeds.
Video comparisons (×4): RVT fails; RACER succeeds.
We introduce a novel setting to assess the model's robustness by deliberately switching the task goal during execution.
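As an illustration, the goal-change evaluation could be run roughly as follows, reusing the hypothetical supervisor and actor interfaces from the sketch above; `switch_step` and the other names are assumptions, not the exact evaluation protocol.

```python
def goal_change_rollout(env, supervisor, actor, initial_goal, new_goal,
                        switch_step=3, max_steps=25):
    """Run an episode whose task goal is swapped partway through to probe robustness."""
    observation = env.reset()
    task_goal, history = initial_goal, []
    for step in range(max_steps):
        if step == switch_step:
            # The task goal is changed online, mid-execution.
            task_goal = new_goal
        instruction = supervisor.guide(observation, task_goal, history)
        action = actor.act(observation, instruction)
        observation, done = env.step(action)
        history.append((instruction, action))
        if done:
            break
    return history
```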
Video comparisons (×8): RVT fails; RACER succeeds.
Video comparisons (×4): RACER fails; RACER + H succeeds.
Failure-case videos (×4): RACER fails.
@misc{dai2024racerrichlanguageguidedfailure,
title={RACER: Rich Language-Guided Failure Recovery Policies for Imitation Learning},
author={Yinpei Dai and Jayjun Lee and Nima Fazeli and Joyce Chai},
year={2024},
eprint={2409.14674},
archivePrefix={arXiv},
primaryClass={cs.RO},
url={https://arxiv.org/abs/2409.14674},
}