PathologyFoundation/plip
plip
Pathology Language and Image Pre-Training (PLIP) is the first vision and language foundation model for Pathology AI (Nature Medicine). PLIP is a large-scale pre-trained model that can be used to extract visual and language features from pathology images and text descriptions. The model is a fine-tuned version of the original CLIP model.
Builder

PathologyFoundation
PathologyFoundation • individual
Stars
374
Using upstream star count
Forks
38
Using upstream fork count
Open Issues
0
Activity Score
0/100
0 commits in 30d
Created
Feb 15, 2023
Project creation date
README Summary
PLIP (Pathology Language and Image Pre-Training) is the first vision and language foundation model specifically designed for Pathology AI, published in Nature Medicine. It is a fine-tuned version of the original CLIP model that can extract visual and language features from pathology images and text descriptions. The model serves as a large-scale pre-trained foundation for various pathology AI applications.
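Since PLIP follows the CLIP recipe, images and captions are embedded into a shared space and compared by cosine similarity (e.g. for zero-shot classification or retrieval). A minimal sketch of that comparison step, using mock NumPy vectors in place of PLIP's actual image/text encoders — the embeddings, captions, and temperature value here are illustrative assumptions, not the model's API:

```python
import numpy as np

def cosine_similarity(a, b):
    # L2-normalize each row, then take pairwise dot products.
    a = a / np.linalg.norm(a, axis=-1, keepdims=True)
    b = b / np.linalg.norm(b, axis=-1, keepdims=True)
    return a @ b.T

# Mock 4-dim embeddings standing in for PLIP's encoder outputs.
image_emb = np.array([[0.9, 0.1, 0.0, 0.1]])
text_embs = np.array([
    [0.8, 0.2, 0.1, 0.0],  # e.g. "an H&E image of tumor tissue"
    [0.0, 0.1, 0.9, 0.3],  # e.g. "an H&E image of normal tissue"
])

sims = cosine_similarity(image_emb, text_embs)[0]
# Temperature-scaled softmax over captions, as in CLIP-style zero-shot scoring.
logits = 100.0 * sims
probs = np.exp(logits - logits.max()) / np.exp(logits - logits.max()).sum()
best = int(np.argmax(probs))  # index of the best-matching caption
```

With real PLIP embeddings the structure is the same: one image vector scored against a list of candidate caption vectors, with the highest-probability caption taken as the prediction.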
AI Dev Skills
Unmapped
Tags
Taxonomy
Deployment Context
Modalities
Skill Areas
Recent Activity
Updated 2 years ago
7 Days
0
30 Days
0
90 Days
0
Quality
- Quality: high
- Maturity: research
Categories
PM Skills
Languages
Timeline
- Project created: Feb 15, 2023
- Forked: Mar 23, 2026
- Your last push: 2 years ago
- Upstream last push: 2 years ago
- Tracked since: Sep 20, 2023
Similar Repos