LMCache/LMCache
LMCache
Supercharge Your LLM with the Fastest KV Cache Layer

LMCache
LMCache • individual
Stars
7,864
Using upstream star count
Forks
1,063
Using upstream fork count
Open Issues
0
Activity Score
0/100
0 commits in 30d
Created
May 28, 2024
Project creation date
README Summary
LMCache is a high-performance key-value cache layer designed to accelerate Large Language Model (LLM) inference by caching and reusing computed key-value pairs. It provides a fast, distributed caching system that can significantly reduce computation overhead and improve response times for LLM applications. The system is built in Python and offers easy integration with existing LLM workflows.
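The summary above describes reusing KV caches across requests; the sketch below shows how such a cache layer is typically wired into a vLLM engine. This is a minimal, hedged example: the connector name LMCacheConnectorV1, the KVTransferConfig fields, and the LMCACHE_* environment variables follow the pattern in LMCache's published examples, but exact names and defaults can differ between LMCache and vLLM releases.

```python
"""Illustrative sketch: routing vLLM's KV cache through an external cache layer.

Assumptions: the "LMCacheConnectorV1" connector, the KVTransferConfig fields,
and the LMCACHE_* environment variables mirror LMCache's example integrations
and may vary by version.
"""
import os

from vllm import LLM, SamplingParams
from vllm.config import KVTransferConfig

# Configure the cache layer before the engine starts (assumed knobs).
os.environ["LMCACHE_CHUNK_SIZE"] = "256"          # KV chunk granularity in tokens
os.environ["LMCACHE_LOCAL_CPU"] = "True"          # keep offloaded KV chunks in CPU RAM
os.environ["LMCACHE_MAX_LOCAL_CPU_SIZE"] = "5.0"  # CPU cache budget in GiB

# Tell vLLM to load/store KV caches through the external connector.
kv_config = KVTransferConfig(
    kv_connector="LMCacheConnectorV1",
    kv_role="kv_both",  # this engine both produces and consumes cached KV
)

llm = LLM(
    model="meta-llama/Meta-Llama-3.1-8B-Instruct",  # placeholder model
    kv_transfer_config=kv_config,
    gpu_memory_utilization=0.8,
)

shared_context = "A long document that many requests share... " * 100
params = SamplingParams(temperature=0.0, max_tokens=64)

# The first call computes and stores the KV cache for the shared prefix;
# later requests with the same prefix can reuse it instead of re-running prefill.
print(llm.generate([shared_context + "\n\nQuestion: summarize."], params))
print(llm.generate([shared_context + "\n\nQuestion: list the key points."], params))
```

The second request benefits because only the suffix after the shared prefix needs prefill; the cached KV for the long document is fetched from the cache layer rather than recomputed.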
AI Dev Skills
Unmapped
Tags
Taxonomy
Deployment Context
Industries
Modalities
Skill Areas
Recent Activity
Updated 5 months ago
7 Days
0
30 Days
0
90 Days
0
Quality
- Quality
- medium
- Maturity
- prototype
Categories
PM Skills
Languages
Timeline
- Project created: May 28, 2024
- Forked: Nov 2, 2025
- Your last push: 5 months ago
- Upstream last push: 6 days ago
- Tracked since: Nov 2, 2025
Similar Repos
pgvector cosine similarity · $0