From RT-2 (Google DeepMind):
We propose to co-fine-tune state-of-the-art vision-language models on both robotic trajectory data and Internet-scale vision-language tasks, such as visual question answering. In contrast to other approaches, we propose a simple, general recipe to achieve this goal: in order to fit both natural language responses and robotic actions into the same format, we express the actions as text tokens and incorporate them directly into the training set of the model in the same way as natural language tokens. We refer to such category of models as vision-language-action models (VLA) and instantiate an example of such a model, which we call RT-2.
Vision-Language-Action (VLA) models typically use a pre-trained vision-language model (VLM) as their base and are fine-tuned to predict robotic actions. Models that do not build on a pre-trained VLM are referred to here as Robotic Foundation Models.
We categorize current VLA architectures into two broad families:
- Single‑system VLAs. A single vision–language model handles perception, reasoning, and action prediction. Continuous motor commands are first discretized by a learned action tokenizer (derived from the text tokenizer), and the model simply generates these tokens in sequence; a minimal tokenization sketch follows this list.
- Dual‑system VLAs. High‑level understanding and low‑level control are split. A vision–language backbone encodes the current images and instruction, while a dedicated action‑policy network produces continuous commands from those embeddings.
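The single‑system recipe can be made concrete with a small sketch of action tokenization. The constants below (7‑DoF action, a normalized action range, 256 uniform bins per dimension, and a reserved token‑ID offset) are illustrative assumptions, not values from any particular model:

```python
import numpy as np

# Illustrative constants (assumptions, not taken from any specific model):
NUM_BINS = 256                         # uniform bins per action dimension
ACTION_LOW, ACTION_HIGH = -1.0, 1.0    # normalized action range
TOKEN_OFFSET = 32000                   # hypothetical start of reserved "action" token IDs

def actions_to_tokens(action: np.ndarray) -> list[int]:
    """Map a continuous action vector to discrete token IDs."""
    clipped = np.clip(action, ACTION_LOW, ACTION_HIGH)
    # Scale each dimension to [0, NUM_BINS - 1] and round to the nearest bin.
    bins = np.round((clipped - ACTION_LOW) / (ACTION_HIGH - ACTION_LOW) * (NUM_BINS - 1))
    return (bins.astype(int) + TOKEN_OFFSET).tolist()

def tokens_to_actions(tokens: list[int]) -> np.ndarray:
    """Invert the mapping: token IDs back to values in the original action range."""
    bins = np.array(tokens) - TOKEN_OFFSET
    return ACTION_LOW + bins / (NUM_BINS - 1) * (ACTION_HIGH - ACTION_LOW)

# Example: a 7-DoF action (xyz delta, rotation delta, gripper) round-trips
# through the tokenizer with quantization error of at most half a bin width.
a = np.array([0.12, -0.40, 0.05, 0.0, 0.3, -0.1, 1.0])
print(tokens_to_actions(actions_to_tokens(a)))
```

Schemes such as FAST (listed below) aim to make this step more efficient than naive per-dimension binning.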
Two principal mechanisms are used to connect the vision–language backbone to the action head (both are sketched in code after the list):
- Special‑token-bridged. During VLM fine‑tuning, a reserved token such as <ACT> is appended; its final embedding is passed directly to the policy.
- Feature‑pooling-bridged. The full sequence of hidden states is aggregated—via max‑pooling, mean‑pooling, or learned attention—to yield a compact feature vector fed to the policy.
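The following PyTorch sketch illustrates both bridging mechanisms. The hidden size, action dimension, the plain MLP head, and the assumption that <ACT> is appended as the last token are all illustrative; real dual‑system VLAs differ in how the backbone and policy are trained:

```python
import torch
import torch.nn as nn

HIDDEN = 1024   # assumed VLM hidden size
ACT_DIM = 7     # assumed continuous action dimension

class ActionHead(nn.Module):
    """A small policy MLP mapping a pooled VLM feature to a continuous action."""
    def __init__(self, hidden: int = HIDDEN, act_dim: int = ACT_DIM):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(hidden, hidden), nn.GELU(), nn.Linear(hidden, act_dim)
        )

    def forward(self, feature: torch.Tensor) -> torch.Tensor:
        return self.net(feature)

def special_token_bridge(hidden_states: torch.Tensor, act_token_index: int) -> torch.Tensor:
    """Special-token bridging: take the final hidden state at the <ACT> position."""
    # hidden_states: (batch, seq_len, HIDDEN)
    return hidden_states[:, act_token_index, :]

def mean_pool_bridge(hidden_states: torch.Tensor, attention_mask: torch.Tensor) -> torch.Tensor:
    """Feature-pooling bridging: average the hidden states over non-padded tokens."""
    mask = attention_mask.unsqueeze(-1).float()          # (batch, seq_len, 1)
    return (hidden_states * mask).sum(1) / mask.sum(1).clamp(min=1.0)

# Usage with dummy tensors standing in for a VLM backbone's outputs.
head = ActionHead()
h = torch.randn(2, 50, HIDDEN)                # fake backbone hidden states
m = torch.ones(2, 50)                         # fake attention mask
action_a = head(special_token_bridge(h, act_token_index=-1))  # <ACT> appended last
action_b = head(mean_pool_bridge(h, m))
print(action_a.shape, action_b.shape)         # torch.Size([2, 7]) twice
```

Either pooled feature can also condition a more expressive policy, such as a diffusion or flow-matching head, instead of the plain MLP used here.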
- LoHoVLA: A Unified Vision-Language-Action Model for Long-Horizon Embodied Tasks, arxiv, May 31 2025. [Paper]
- UniVLA: Learning to Act Anywhere with Task-centric Latent Actions, The University of Hong Kong, arxiv, May 9 2025. [Paper] [Code]
- 3D-CAVLA: 3D-CAVLA: Leveraging Depth and 3D Context to Generalize Vision–Language Action Models for Unseen Tasks, New York University, arxiv, May 9 2025. [Paper] [Website]
- NORA: NORA: A Small Open-Sourced Generalist Vision Language Action Model for Embodied Tasks, Singapore University of Technology and Design, arxiv, Apr 28 2025. [Paper]
- CoT-VLA: CoT-VLA: Visual Chain-of-Thought Reasoning for Vision-Language-Action Models, NVIDIA & Stanford, arxiv, Mar 27 2025. [Paper] [Website]
- PD-VLA: Accelerating Vision-Language-Action Model Integrated with Action Chunking via Parallel Decoding, HKUST (GZ), arxiv, Mar 4 2025. [Paper]
- VLAS: VLAS: Vision-Language-Action Model with Speech Instructions for Customized Robot Manipulation, Westlake University, arxiv, Feb 21 2025. [Paper] [Code]
- VLA-Cache: Towards Efficient Vision-Language-Action Model via Adaptive Token Caching in Robotic Manipulation, University of Sydney, arxiv, Feb 4 2025. [Paper]
- SpatialVLA: SpatialVLA: Exploring Spatial Representations for Visual-Language-Action Models, Shanghai AI Lab, arxiv, Jan 28 2025. [Paper] [Website] [Code] [Model]
- TraceVLA: TraceVLA: Visual Trace Prompting Enhances Spatial-Temporal Awareness for Generalist Robotic Policies, University of Maryland, arxiv, Dec 25 2024. [Paper]
- OpenVLA: OpenVLA: An Open-Source Vision-Language-Action Model, Stanford University & UC Berkeley & Toyota Research Institute, arxiv, Jun 13 2024. [Website] [Paper] [Code] [Model]
- RT-2: RT-2: Vision-Language-Action Models Transfer Web Knowledge to Robotic Control, Google DeepMind, arxiv, Jul 28 2023.
- OpenHelix: A Short Survey, Empirical Analysis, and Open-Source Dual-System VLA Model for Robotic Manipulation, Westlake University, arxiv, May 6 2025. [Paper]
- MoLe-VLA: MoLe-VLA: Dynamic Layer-skipping Vision Language Action Model via Mixture-of-Layers for Efficient Robot Manipulation, Nanjing University & HK PolyU & Peking University, Mar 26 2025. [Paper] [Website] [Code]
- FuSe: Beyond Sight: Finetuning Generalist Robot Policies with Heterogeneous Sensors via Language Grounding, UC Berkeley, arxiv, Jan 8 2025. [Website] [Paper] [Code] [Model]
- Diffusion-VLA: Diffusion-VLA: Scaling Robot Foundation Models via Unified Diffusion and Autoregression, East China Normal University, arxiv, Dec 4 2024. [Website] [Paper]
- CogACT: CogACT: A Foundational Vision-Language-Action Model for Synergizing Cognition and Action in Robotic Manipulation, Tsinghua University, arxiv, Nov 29 2024. [Paper] [Website] [Code] [Model]
- SmolVLA: A vision-language-action model for affordable and efficient robotics, Hugging Face, arxiv, Jun 4 2025. [Paper] [Website] [Model]
- OneTwoVLA: A Unified Vision-Language-Action Model with Adaptive Reasoning, Tsinghua & Shanghai Qi Zhi & Shanghai AI Lab, arxiv, May 17 2025. [Paper] [Website] [Code] [Data]
- $π_{0.5}$: $π_{0.5}$: a Vision-Language-Action Model with Open-World Generalization, Physical Intelligence, arxiv, Apr 22 2025. [Paper] [Website]
- Hi Robot: Hi Robot: Open-Ended Instruction Following with Hierarchical Vision-Language-Action Models, Physical Intelligence & Stanford University, arxiv, Feb 26 2025. [Paper] [Website]
- ChatVLA: ChatVLA: Unified Multimodal Understanding and Robot Control with Vision-Language-Action Model, Midea Group & East China Normal University, arxiv, Feb 21 2025. [Paper] [Website]
- DexVLA: DexVLA: Vision-Language Model with Plug-In Diffusion Expert for General Robot Control, Midea Group & East China Normal University, arxiv, Feb 9 2025. [Paper] [Website] [Code]
- UP-VLA: A Unified Understanding and Prediction Model for Embodied Agent, Tsinghua University & Shanghai Qi Zhi Institute, arxiv, Feb 3 2025. [Paper]
- iRe-VLA: Improving Vision-Language-Action Model with Online Reinforcement Learning, Tsinghua University & Shanghai Qi Zhi Institute, arxiv, Jan 28 2025. [Paper]
- FAST: FAST: Efficient Action Tokenization for Vision-Language-Action Models, Physical Intelligence & UC Berkeley & Stanford, arxiv, Jan 16 2025. [Website] [Paper] [Tokenizer] [Code]
- $π_0$: $π_0$: A Vision-Language-Action Flow Model for General Robot Control, Physical Intelligence, arxiv, Oct 31 2024. [Website] [Paper] [Code]
- DeeR-VLA: DeeR-VLA: Dynamic Inference of Multimodal Large Language Models for Efficient Robot Execution, Tsinghua University, NeurIPS 2024. [Paper] [Website] [Code]
- HybridVLA: HybridVLA: Collaborative Diffusion and Autoregression in a Unified Vision-Language-Action Model, Peking University, arxiv, Mar 13 2025. [Paper] [Website] [Code]
- GR00T N1: GR00T N1: An Open Foundation Model for Generalist Humanoid Robots, NVIDIA, Mar 27 2025. [Paper] [Website] [Code] [Dataset]
- GO-1: AgiBot World Colosseo: Large-scale Manipulation Platform for Scalable and Intelligent Embodied Systems, AgiBot-World (Shanghai AI Lab & AgiBot Inc.), arxiv, Mar 10 2025. [Paper] [Website] [Code] [Model]
- Humanoid-VLA: Humanoid-VLA: Towards Universal Humanoid Control with Visual Integration, Westlake University & Zhejiang University, arxiv, Feb 21 2025. [Paper]
- NaVILA: NaVILA: Legged Robot Vision-Language-Action Model for Navigation, UC San Diego, arxiv, Dec 5 2024. [Website] [Paper]
- Knowledge Insulating Vision-Language-Action Models: Train Fast, Run Fast, Generalize Better, Physical Intelligence, May 29 2025. [Paper] [Website]
- What Can RL Bring to VLA Generalization? An Empirical Study, Tsinghua, arxiv, May 26 2025. [Paper] [Website] [Code]
- VLA-RL: Towards Masterful and General Robotic Manipulation with Scalable Reinforcement Learning.
- OFT: Fine-Tuning Vision-Language-Action Models: Optimizing Speed and Success, Stanford, arxiv, Apr 28 2025. [Paper] [Website] [Code] [Model]