OpenDriveLab/UniVLA
[RSS 2025] Learning to Act Anywhere with Task-centric Latent Actions
Builder

OpenDriveLab
Stars
1,036
Forks
58
Open Issues
0
Activity Score
0/100
0 commits in 30d
Created
Apr 23, 2025
README Summary
UniVLA is a vision-language-action model that learns task-centric latent actions to enable robotic agents to act in diverse environments. The model uses a unified framework that can generalize across different tasks and domains by learning compressed action representations in a latent space.
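The "compressed action representations in a latent space" idea can be illustrated with a minimal VQ-style sketch: continuous transition embeddings are snapped to the nearest entry of a small discrete codebook, so a policy can predict a codebook index instead of raw low-level actions. All names, shapes, and sizes below are illustrative assumptions, not the repository's actual API.

```python
import numpy as np

# Hypothetical latent-action quantizer (VQ-style); sizes are assumptions.
rng = np.random.default_rng(0)

CODEBOOK_SIZE = 16  # number of discrete latent actions (assumed)
LATENT_DIM = 8      # embedding dimension (assumed)

# Codebook of discrete latent actions, one row per latent action.
codebook = rng.normal(size=(CODEBOOK_SIZE, LATENT_DIM))

def quantize(z: np.ndarray) -> tuple[int, np.ndarray]:
    """Map a continuous transition embedding to its nearest codebook entry."""
    dists = np.linalg.norm(codebook - z, axis=1)  # L2 distance to each entry
    idx = int(np.argmin(dists))                   # index of nearest latent action
    return idx, codebook[idx]

# Example: embed an observation transition (random stand-in here) and quantize.
z = rng.normal(size=LATENT_DIM)
idx, z_q = quantize(z)
```

In a full model the codebook would be learned jointly with an encoder/decoder; this sketch only shows the discretization step that makes latent actions task-centric and transferable across embodiments.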
AI Dev Skills
Unmapped
robotics, computer-vision, natural-language-processing, reinforcement-learning, multi-modal-learning
Tags
robotics, computer-vision, natural-language-processing, reinforcement-learning, multi-modal-learning, action-prediction, embodied-ai, pytorch, transformer
Recent Activity
Updated 4 months ago
7 days: 0 · 30 days: 0 · 90 days: 0
Quality
high
Maturity
research
Categories
Foundation Models (Primary), Dev Tools & Automation, Model Training, Computer Vision, Robotics, Other AI / ML
PM Skills
Developer Platform
Languages
Python: 100.0%
Timeline
- Project created: Apr 23, 2025
- Forked: Mar 31, 2026
- Your last push: 4 months ago
- Upstream last push: 4 months ago
- Tracked since: Nov 19, 2025