Ctrl-World is designed for policy-in-the-loop rollouts with generalist robot policies. It generates joint multi-view predictions (including wrist views), enforces fine-grained action control via frame-level conditioning, and sustains coherent long-horizon dynamics through pose-conditioned memory retrieval. Together, these components enable (1) accurate evaluation of policy instruction-following ability via imagination, and (2) targeted policy improvement on previously unseen instructions.
Starting from the same initial frame, Ctrl-World can autoregressively generate diverse future trajectories conditioned on the given action chunks, achieving centimeter-level precision. You can select any combination of actions and generate the corresponding videos. All videos are generated by passing in the same initial frame and a different sequence of actions as input. For interpretability, we translate each action chunk into a text description of the action.
(e.g., left, right, top right, bottom side)
(e.g., smaller, larger block)
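The rollout described above can be sketched as a simple loop: the world model is repeatedly conditioned on the current frame and the next action chunk, and its prediction becomes the input for the following step. The sketch below is illustrative only; the `rollout`, `policy`, and `world_model` names and their interfaces are assumptions, standing in for Ctrl-World's actual video model and a generalist robot policy.

```python
import numpy as np

def rollout(world_model, policy, initial_frame, n_chunks, chunk_len):
    """Autoregressive policy-in-the-loop rollout (hypothetical interface).

    `policy` maps a frame to an action chunk of shape (chunk_len, action_dim);
    `world_model` maps (frame, action chunk) to the predicted next frame.
    Each predicted frame is fed back in, so the model rolls forward on its
    own imagination rather than on real observations.
    """
    frames, chunks = [initial_frame], []
    frame = initial_frame
    for _ in range(n_chunks):
        chunk = policy(frame, chunk_len)      # propose next action chunk
        frame = world_model(frame, chunk)     # predict frame after the chunk
        chunks.append(chunk)
        frames.append(frame)
    return frames, chunks

# Toy stand-ins so the sketch runs end-to-end (not the real components).
rng = np.random.default_rng(0)
toy_policy = lambda frame, n: rng.normal(size=(n, 7))  # 7-DoF action chunks
toy_model = lambda frame, chunk: frame + chunk.sum()   # fake dynamics

frames, chunks = rollout(toy_model, toy_policy,
                         np.zeros((8, 8)), n_chunks=3, chunk_len=4)
```

Swapping in different action sequences at the `policy` step is what yields the diverse trajectories shown above from a single initial frame.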