openai/CLIP
CLIP
CLIP (Contrastive Language-Image Pretraining): predicts the most relevant text snippet for a given image
Builder

OpenAI
openai • ai-lab
Stars
32,995
Using upstream star count
Forks
3,971
Using upstream fork count
Open Issues
0
Activity Score
0/100
0 commits in 30d
Created
Dec 16, 2020
Project creation date
README Summary
CLIP (Contrastive Language-Image Pretraining) is a neural network that efficiently learns visual concepts from natural language supervision. It can be applied to any visual classification benchmark by simply providing the names of the visual categories to be recognized, similar to the zero-shot capabilities of GPT-2 and GPT-3. The model connects text and images in a single embedding space, enabling it to predict the most relevant text description for any given image.
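The core idea in the summary — ranking candidate text snippets for an image by similarity in a shared embedding space — can be sketched in plain NumPy. The vectors below are toy placeholders, not real CLIP outputs; in practice they would come from CLIP's image and text encoders (e.g. 512-dimensional ViT-B/32 embeddings), but the matching step is the same.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> np.ndarray:
    """Cosine similarity between each row of a and each row of b."""
    a = a / np.linalg.norm(a, axis=-1, keepdims=True)
    b = b / np.linalg.norm(b, axis=-1, keepdims=True)
    return a @ b.T

# Toy stand-ins for embeddings in CLIP's shared image/text space.
image_emb = np.array([[0.9, 0.1, 0.0]])           # one image embedding
text_embs = np.array([[1.0, 0.0, 0.0],            # "a photo of a dog"
                      [0.0, 1.0, 0.0],            # "a photo of a cat"
                      [0.0, 0.0, 1.0]])           # "a photo of a car"

sims = cosine_similarity(image_emb, text_embs)    # shape (1, 3)
best = int(np.argmax(sims))                       # index of best-matching caption
```

Zero-shot classification follows directly: the "classes" are just the candidate captions, so adding a category means adding a text snippet, with no retraining.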
AI Dev Skills
Unmapped
Tags
Taxonomy
AI Trends
Deployment Context
Industries
Modalities
Skill Areas
Recent Activity
Updated 1 month ago
7 Days
0
30 Days
0
90 Days
0
Quality
- high
Maturity
- production
Categories
PM Skills
Languages
Timeline
- Project created: Dec 16, 2020
- Forked: Mar 13, 2026
- Your last push: 1 month ago
- Upstream last push: 19 days ago
- Tracked since: Feb 18, 2026
Similar Repos