OpenVLA: An Open-Source Vision-Language-Action Model

Large policies pretrained on a combination of Internet-scale vision-language data and diverse robot demonstrations have the potential to change how we teach robots new skills: rather than training new behaviors from scratch, we can fine-tune such vision-language-action (VLA) models to obtain robust, generalizable policies for visuomotor control. Yet, widespread adoption of VLAs for robotics has been challenging as 1) existing VLAs are largely closed and inaccessible to the public, and 2) prior work fails to explore methods for efficiently fine-tuning VLAs for new tasks, a key component for adoption. Addressing these challenges, we introduce OpenVLA, a 7B-parameter open-source VLA trained on a diverse collection of 970k real-world robot demonstrations. OpenVLA builds on a Llama 2 language model combined with a visual encoder that fuses pretrained features from DINOv2 and SigLIP. As a product of the added data diversity and new model components, OpenVLA demonstrates strong results for generalist manipulation, outperforming closed models such as RT-2-X (55B) by 16.5% in absolute task success rate across 29 tasks and multiple robot embodiments, with 7x fewer parameters. We further show that we can effectively fine-tune OpenVLA for new settings, with especially strong generalization results in multi-task environments involving multiple objects and strong language grounding abilities, outperforming expressive from-scratch imitation learning methods such as Diffusion Policy by 20.4%. As an added contribution for compute efficiency, we show that OpenVLA can be fine-tuned on consumer GPUs via modern low-rank adaptation methods and served efficiently via quantization without a hit to downstream success rate. Finally, we release model checkpoints, fine-tuning notebooks, and our PyTorch codebase with built-in support for training VLAs at scale on Open X-Embodiment datasets.
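
The abstract describes a visual encoder that fuses pretrained DINOv2 and SigLIP features before feeding them to the Llama 2 backbone. The following is a minimal PyTorch sketch of that idea only; the specific checkpoints (facebook/dinov2-base, google/siglip-base-patch16-224), the linear projector, and the token-alignment step are illustrative assumptions and not the released implementation, which uses larger backbones.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from transformers import Dinov2Model, SiglipVisionModel


class FusedVisionEncoder(nn.Module):
    """Sketch: concatenate DINOv2 and SigLIP patch features, project to the LLM width."""

    def __init__(self, llm_dim: int = 4096):  # 4096 = Llama 2 7B hidden size
        super().__init__()
        # Illustrative (smaller) checkpoints standing in for the paper's backbones.
        self.dino = Dinov2Model.from_pretrained("facebook/dinov2-base")
        self.siglip = SiglipVisionModel.from_pretrained("google/siglip-base-patch16-224")
        fused_dim = self.dino.config.hidden_size + self.siglip.config.hidden_size
        # Project the concatenated patch features into the language model's embedding space.
        self.projector = nn.Linear(fused_dim, llm_dim)

    def forward(self, dino_pixels: torch.Tensor, siglip_pixels: torch.Tensor) -> torch.Tensor:
        dino_feats = self.dino(pixel_values=dino_pixels).last_hidden_state[:, 1:]  # drop CLS token
        siglip_feats = self.siglip(pixel_values=siglip_pixels).last_hidden_state   # no CLS token
        if siglip_feats.shape[1] != dino_feats.shape[1]:
            # Align patch-token counts for these stand-in checkpoints; the real model
            # keeps the two patch grids matched by construction.
            siglip_feats = F.interpolate(
                siglip_feats.transpose(1, 2), size=dino_feats.shape[1], mode="linear"
            ).transpose(1, 2)
        fused = torch.cat([dino_feats, siglip_feats], dim=-1)
        return self.projector(fused)  # (batch, num_patches, llm_dim) visual tokens for the LLM
```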
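
The abstract also notes that OpenVLA can be fine-tuned on consumer GPUs with low-rank adaptation and served with quantization. The snippet below is a hedged sketch of that recipe using the Hugging Face peft and bitsandbytes integrations (LoRA adapters over a 4-bit quantized base model); the checkpoint identifier, LoRA rank, and target module names are assumptions for illustration, not values taken from the released code.

```python
import torch
from transformers import AutoModelForVision2Seq, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

# Load the base VLA with 4-bit quantized weights so it fits on a consumer GPU.
quant_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

model = AutoModelForVision2Seq.from_pretrained(
    "openvla/openvla-7b",          # assumed checkpoint identifier
    quantization_config=quant_config,
    trust_remote_code=True,
)

# Attach low-rank adapters; only these small matrices are updated during fine-tuning.
lora_config = LoraConfig(
    r=32,                          # adapter rank (illustrative)
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # assumed attention projections
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # confirms the frozen backbone vs. trainable adapters
```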