microsoft/LoRA

LoRA

Code for loralib, an implementation of "LoRA: Low-Rank Adaptation of Large Language Models"

Builder

Microsoft

microsoft • big-tech

Stars

13,385

Using upstream star count

Forks

894

Using upstream fork count

Open Issues

0

Activity Score

0/100

0 commits in 30d

Created

Jun 18, 2021

Project creation date

README Summary

loralib is Microsoft's implementation of Low-Rank Adaptation (LoRA), a technique for efficiently fine-tuning large language models by learning low-rank updates to weight matrices. The library provides PyTorch modules that can replace standard neural network layers to enable parameter-efficient training with significantly reduced memory requirements. LoRA achieves performance comparable to full fine-tuning while using orders of magnitude fewer trainable parameters.
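To make the "orders of magnitude fewer trainable parameters" claim concrete, here is a minimal NumPy sketch of the low-rank update LoRA learns: a frozen weight W is adapted by adding B @ A, where A and B have rank r much smaller than W's dimensions. The shapes and scaling factor below are illustrative assumptions, not loralib's actual code.

```python
# Illustration of the LoRA low-rank update (not loralib's API).
# Frozen weight W of shape (d_out, d_in) is adapted by adding B @ A,
# with A of shape (r, d_in) and B of shape (d_out, r), r << min(d_out, d_in).
import numpy as np

rng = np.random.default_rng(0)
d_out, d_in, r, alpha = 768, 768, 8, 16  # hypothetical sizes; alpha is the LoRA scaling

W = rng.standard_normal((d_out, d_in))   # frozen pretrained weight (not trained)
A = rng.standard_normal((r, d_in))       # trainable low-rank factor
B = np.zeros((d_out, r))                 # trainable; zero-init so the update starts at 0

W_eff = W + (alpha / r) * (B @ A)        # effective weight used in the forward pass

full_params = W.size                     # parameters updated by full fine-tuning
lora_params = A.size + B.size            # trainable parameters under LoRA
print(full_params, lora_params)          # 589824 vs 12288, a 48x reduction here
```

Because B is zero-initialized, the adapted model starts out exactly equal to the pretrained one, and only the small A and B matrices receive gradients during training.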

AI Dev Skills

Unmapped

LoRA Fine-tuning • Parameter-Efficient Fine-tuning • Low-Rank Matrix Adaptation • Large Language Model Optimization • Neural Network Weight Decomposition • Transfer Learning

Tags

LoRA Fine-tuning • Parameter-Efficient Fine-tuning • Low-Rank Matrix Adaptation • Large Language Model Optimization • Neural Network Weight Decomposition • Transfer Learning • Cloud API • On-premise • Text • Custom Language Model Fine-tuning • Model Compression • Multi-task Model Training • Self-hosted • Large Language Models • Resource-Constrained Model Customization • Domain-Specific Model Adaptation • Python

Recent Activity

Updated 1 year ago

7 Days

0

30 Days

0

90 Days

0

Quality

Quality: medium
Maturity: research

Categories

Foundation Models (Primary) • Inference & Serving • Other AI / ML • Model Training

PM Skills

Scale & Reliability

Languages

Python 100.0%

Timeline

Project created: Jun 18, 2021
Forked: Mar 13, 2026
Your last push: 1 year ago
Upstream last push: 1 year ago
Tracked since: Dec 17, 2024
