
OpenHelix Robotics

OpenHelix Robotics: Building Next-Generation Embodied Intelligence

We are a research group focused on vision-language-action models (VLAs), and we aim to share insights with the community through our research.


Introduction

OpenHelix-Team introduces a family of fully open-source Vision-Language-Action Models (VLAs) that achieve state-of-the-art performance at substantially lower cost.

Visual Feature Alignment for VLAs

  • ReconVLA (AAAI 2026 Best Paper Award): Reconstructive Vision-Language-Action Model as Effective Robot Perceiver
  • Spatial Forcing (ICLR 2026): Implicit Spatial Representation Alignment for Vision-Language-Action Model

Humanoid VLAs

World-modeling VLAs

  • Unified Diffusion VLA (ICLR 2026): The first open-sourced diffusion Vision-Language-Action model
  • HiF-VLA: An efficient, bidirectional spatiotemporal expansion Vision-Language-Action Model
  • frappe: Infusing World Modeling into Generalist Policies via Multiple Future Representation Alignment
  • VLA-RFT: Vision-Language-Action Models with Reinforcement Fine-Tuning

General Foundation Models

  • VLA-Adapter (AAAI 2026 (Oral)): An Effective Paradigm for Tiny-Scale Vision-Language-Action Model
  • LLaVA-VLA (ICRA 2026): A Simple Yet Powerful Vision-Language-Action Model

Efficient VLAs

  • CEED-VLA: Consistency Vision-Language-Action Model with Early-Exit Decoding
  • OpenHelix: An Open-Source Dual-System Vision-Language-Action Model for Robotic Manipulation

Visual Enhanced Frameworks

  • VLA-2: Empowering Vision-Language-Action Models with an Agentic Framework for Unseen Concept Manipulation
  • LongVLA (CoRL 2025): Unleashing Long-Horizon Capability of Vision-Language-Action Models for Robot Manipulation

Awesome VLAs

Collaborating Institutions

This initiative is jointly established and developed with the following research institutions:

  • Westlake University
  • The Hong Kong University of Science and Technology (Guangzhou)
  • Zhejiang University
  • Tsinghua University
  • Beijing Academy of Artificial Intelligence (BAAI)
  • Xi’an Jiaotong University
  • Beijing University of Posts and Telecommunications

Contact

If you are interested in discussing our work or joining us, please email songwenxuan0115@gmail.com.

Pinned

  1. Awesome-Force-Tactile-VLA

    A paper list of multimodal VLAs

Repositories

  • frappe

    Official implementation of FRAPPE: Infusing World Modeling into Generalist Policies via Multiple Future Representation Alignment

    Python · Updated Feb 24, 2026
  • OpenTrajBooster

    Official implementation of TrajBooster

    Jupyter Notebook · Updated Feb 17, 2026
  • ReconVLA

    Official implementation of ReconVLA: Reconstructive Vision-Language-Action Model as Effective Robot Perceiver

    Python · MIT · Updated Jan 25, 2026
  • Spatial-Forcing

    Official implementation of Spatial-Forcing: Implicit Spatial Representation Alignment for Vision-Language-Action Model

    Python · MIT · Updated Jan 8, 2026
  • Unified-Diffusion-VLA

    🔥 The first open-sourced diffusion vision-language-action model

    Python · MIT · Updated Jan 8, 2026
  • HiF-VLA

    HiF-VLA: An efficient, bidirectional spatiotemporal expansion Vision-Language-Action Model

    Python · MIT · Updated Dec 11, 2025
  • VLA-Adapter

    VLA-Adapter: An Effective Paradigm for Tiny-Scale Vision-Language-Action Model

    Python · MIT · Updated Nov 18, 2025
  • VLA-2

    VLA^2: Empowering Vision-Language-Action Models with an Agentic Framework for Unseen Concept Manipulation

    Python · Apache-2.0 · Updated Nov 3, 2025
  • LLaVA-VLA

    LLaVA-VLA: A Simple Yet Powerful Vision-Language-Action Model [Actively Maintained🔥]

    Python · MIT · Updated Oct 29, 2025
