viddexa/autollm
Ship RAG-based LLM web apps in seconds.
Builder

viddexa
viddexa • individual
Stars
1,005
Forks
99
Open Issues
0
Activity Score
0/100
0 commits in 30d
Created
Sep 21, 2023
README Summary
AutoLLM is a Python framework that enables developers to quickly deploy Retrieval-Augmented Generation (RAG) based web applications powered by large language models. It provides pre-built components and streamlined workflows to ship LLM-powered applications in seconds rather than weeks.
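The workflow such a framework automates — chunk documents, embed them, retrieve the closest match for a query, and build an augmented prompt for the LLM — can be sketched in plain Python. This is an illustrative toy (bag-of-words "embeddings", no external services), not AutoLLM's actual API; all names here are hypothetical.

```python
# Minimal sketch of the RAG pattern: embed documents, retrieve the most
# similar one for a query, and assemble a context-augmented prompt.
# Toy bag-of-words vectors stand in for real embeddings.
import math
from collections import Counter

def embed(text: str) -> Counter:
    """Toy embedding: lowercase bag-of-words term counts."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse term-count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, docs: list[str], k: int = 1) -> list[str]:
    """Return the k documents most similar to the query."""
    q = embed(query)
    ranked = sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

def build_prompt(query: str, docs: list[str]) -> str:
    """Augment the query with retrieved context before calling an LLM."""
    context = "\n".join(retrieve(query, docs))
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"

docs = [
    "Vector databases store embeddings for semantic search.",
    "Large language models generate text from prompts.",
]
print(build_prompt("How do vector databases work?", docs))
```

A production pipeline replaces `embed` with a real embedding model and the in-memory list with a vector database, but the retrieve-then-prompt structure is the same.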
AI Dev Skills
Unmapped
Retrieval-Augmented Generation · Large Language Model Integration · Vector Database Management · Document Processing and Chunking · Semantic Search · Web Application Development · API Development
Tags
Retrieval-Augmented Generation · Large Language Model Integration · Vector Database Management · Document Processing and Chunking · Semantic Search · Web Application Development · API Development
Taxonomy
Deployment Context
Industries
Modalities
Skill Areas
Recent Activity
Updated 2 years ago
7 Days
0
30 Days
0
90 Days
0
Quality
- Quality: medium
- Maturity: prototype
Categories
Dev Tools & Automation (Primary) · Foundation Models · RAG & Retrieval · Evals & Benchmarking · Search & Knowledge
PM Skills
Developer Platform
Languages
Python 100.0%
Timeline
- Project created: Sep 21, 2023
- Forked: Mar 29, 2026
- Your last push: 2 years ago
- Upstream last push: 2 years ago
- Tracked since: Jan 29, 2024
Similar Repos
Similar repositories are matched by pgvector cosine similarity.