
SpatialVLA/SpatialVLA

🔥 SpatialVLA: a spatial-enhanced vision-language-action model trained on 1.1 million real robot episodes. Accepted at RSS 2025.

Builder

SpatialVLA • individual

Stars

676

Using upstream star count

Forks

46

Using upstream fork count

Open Issues

0

Activity Score

0/100

0 commits in 30d
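The dashboard's actual scoring formula is not shown; the sketch below is one plausible, purely illustrative way a 0/100 activity score could be derived from a 30-day commit count (the `saturation` threshold is an assumption, not the dashboard's).

```python
# Hypothetical activity-score mapping; the dashboard's real formula is unknown.
def activity_score(commits_30d, saturation=50):
    """Map a 30-day commit count onto 0-100, saturating at `saturation` commits."""
    return min(100, round(100 * commits_30d / saturation))

print(activity_score(0))  # 0 commits in 30d -> score 0, matching the card above
```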

Created

Jan 29, 2025

Project creation date

README Summary

SpatialVLA is a spatial-enhanced vision-language-action model designed for robotic control, trained on 1.1 million real robot episodes. The model integrates spatial understanding with vision and language processing to enable more effective robot manipulation and navigation tasks. This work has been accepted at RSS 2025 and represents a significant advancement in multimodal robotics AI.

AI Dev Skills

Unmapped

robotics, vision-language-models, spatial-reasoning, robot-manipulation, multimodal-ai

Tags

robotics, vision-language-models, spatial-reasoning, robot-manipulation, multimodal-ai, pytorch, computer-vision, natural-language-processing, reinforcement-learning, embodied-ai

Taxonomy

Recent Activity

Updated 9 months ago

7 Days

0

30 Days

0

90 Days

0

Quality

Quality
high
Maturity
research

Categories

Dev Tools & Automation (Primary), Foundation Models, Model Training, Robotics, Computer Vision, Multimodal AI, Other AI / ML

PM Skills

Developer Platform

Languages

Python 100.0%

Timeline

Project created
Jan 29, 2025
Forked
Mar 31, 2026
Your last push
9 months ago
Upstream last push
9 months ago
Tracked since
Jun 23, 2025

Similar Repos

Computed via pgvector cosine similarity.
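The similar-repos list is attributed to pgvector cosine similarity. As a minimal sketch, assuming repos are compared by embedding vectors, this is the cosine distance that pgvector's `<=>` operator computes; the embeddings and candidate names below are made-up illustrations, not real data.

```python
import math

def cosine_distance(a, b):
    """pgvector-style cosine distance: 1 - cos(a, b); 0 means identical direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return 1.0 - dot / norm

query = [0.9, 0.1, 0.3]              # hypothetical embedding of this repo
candidates = {
    "openvla": [0.8, 0.2, 0.4],      # hypothetical candidate embeddings
    "rt-2":    [0.1, 0.9, 0.2],
}
# Smallest distance first, i.e. most similar repo first.
ranked = sorted(candidates, key=lambda name: cosine_distance(query, candidates[name]))
print(ranked)  # ['openvla', 'rt-2']
```

In a real pgvector deployment this ranking would be a SQL `ORDER BY embedding <=> query` over a vector column rather than an in-memory sort.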