vllm-project/vllm-omni
A framework for efficient model inference with omni-modality models
Builder

vLLM
vllm-project • startup
Stars
4,132 (upstream count)
Forks
676 (upstream count)
Open Issues
0
Activity Score
0/100
0 commits in the last 30 days
Created
Sep 11, 2025
README Summary
vLLM-Omni is a framework designed for efficient inference of omni-modal models that can process multiple types of input including text, images, audio, and video. It extends the vLLM inference engine to support multi-modal large language models with optimized performance and memory usage. The framework provides a unified interface for running various omni-modal models with high throughput and low latency.
AI Dev Skills
Unmapped
Tags: (none)
Taxonomy: (none)
AI Trends: (none)
Deployment Context: (none)
Modalities: (none)
Skill Areas: (none)
Recent Activity
Updated 22 days ago
7 Days
0
30 Days
0
90 Days
0
Quality
- Quality: medium
- Maturity: prototype
Categories: (none)
PM Skills: (none)
Languages: (none)
Timeline
- Project created: Sep 11, 2025
- Forked: Mar 22, 2026
- Your last push: 22 days ago
- Upstream last push: 6 days ago
- Tracked since: Mar 22, 2026