mozilla-ai/llamafile

llamafile

Distribute and run LLMs with a single file.

Builder

mozilla-ai

mozilla-ai

mozilla-ai • individual

Stars

23,956

Using upstream star count

Forks

1,292

Using upstream fork count

Open Issues

0

Activity Score

0/100

0 commits in 30d

Created

Sep 10, 2023

Project creation date

README Summary

Llamafile is a framework that packages large language models (LLMs) into single executable files that can run on multiple operating systems without dependencies. It combines llama.cpp with Cosmopolitan Libc to create portable executables that include both the model weights and inference engine. Users can download and run LLMs locally with a simple command, making AI model distribution and deployment significantly easier.
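The "simple command" workflow described above comes down to two steps: make the downloaded file executable, then run it. A minimal sketch, using a local stub in place of an actual downloaded llamafile (real files come from the project's releases; the model name here is illustrative):

```shell
# Real usage would start with a download, e.g.:
#   curl -L -o model.llamafile <release-url>
# Here a tiny shell stub stands in for the downloaded single-file bundle.
printf '#!/bin/sh\necho "model server started"\n' > model.llamafile

chmod +x model.llamafile   # a llamafile runs like any native executable
./model.llamafile          # launches the bundled inference engine + weights
```

Because the weights and the llama.cpp-based inference engine are packed into one Cosmopolitan Libc executable, the same file runs across supported operating systems without installing dependencies.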

AI Dev Skills

Unmapped

Large Language Model Deployment · Cross-platform Binary Distribution · Model Quantization · Local AI Inference · System-level Programming · Model Packaging and Distribution

Tags

Large Language Model Deployment · Cross-platform Binary Distribution · Model Quantization · Local AI Inference · System-level Programming · Model Packaging and Distribution · On-premise · Prototype Development · Local LLM Deployment · Edge/Mobile · AI Democratization · Developer Tools · Offline AI Applications · Privacy-focused AI Solutions · Educational AI Demonstrations · Self-hosted · Enterprise Software · Education · Edge AI · Text · On-device AI · C

Taxonomy

Recent Activity

Updated 1 month ago

7 Days

0

30 Days

0

90 Days

0

Quality

Quality
high

Maturity
beta

Categories

Foundation Models (Primary) · Inference & Serving · Dev Tools & Automation · ML Platform & Infrastructure · Edge & Mobile AI · Other AI / ML · Robotics

PM Skills

Developer Platform

Languages

C 100.0%

Timeline

Project created
Sep 10, 2023
Forked
Mar 13, 2026
Your last push
1 month ago
Upstream last push
7 days ago
Tracked since
Mar 12, 2026
