Library / gpt-3 (Forked)

openai/gpt-3

gpt-3

GPT-3: Language Models are Few-Shot Learners

Builder

OpenAI

OpenAI

openai • ai-lab

Stars

15,750

Using upstream star count

Forks

2,265

Using upstream fork count

Open Issues

0

Activity Score

0/100

0 commits in 30d

Created

May 18, 2020

Project creation date

README Summary

GPT-3 is a large-scale autoregressive language model developed by OpenAI that demonstrates strong few-shot learning capabilities across diverse NLP tasks. The model can perform various language tasks with minimal task-specific training examples, showing emergent abilities in text generation, translation, and reasoning.
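The few-shot (in-context) learning described above can be illustrated with a short sketch: the task is specified entirely through example pairs embedded in the prompt, with no task-specific fine-tuning. The `build_prompt` helper below is a hypothetical illustration, not part of any OpenAI SDK.

```python
# Illustrative sketch of few-shot ("in-context") prompting: the task is
# conveyed purely through examples placed in the prompt text.
# build_prompt is a hypothetical helper, not an OpenAI API function.

def build_prompt(task_description, examples, query):
    """Assemble a few-shot prompt from (input, output) example pairs."""
    lines = [task_description, ""]
    for inp, out in examples:
        lines.append(f"Input: {inp}")
        lines.append(f"Output: {out}")
        lines.append("")
    # The model is expected to continue the text after the final "Output:".
    lines.append(f"Input: {query}")
    lines.append("Output:")
    return "\n".join(lines)

prompt = build_prompt(
    "Translate English to French.",
    [("sea otter", "loutre de mer"), ("cheese", "fromage")],
    "plush giraffe",
)
print(prompt)
```

The resulting string would be sent as-is to an autoregressive language model; with enough examples, the model infers the task from context alone.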

AI Dev Skills

Unmapped

Large Language Models, Few-Shot Learning, Transformer Architecture, Natural Language Processing, Deep Learning, Language Model Pre-training, In-Context Learning, Autoregressive Language Modeling

Tags

Large Language Models, Few-Shot Learning, Transformer Architecture, Natural Language Processing, Deep Learning, Language Model Pre-training, In-Context Learning, Autoregressive Language Modeling, Foundation Models, Language Model Evaluation, Natural Language Understanding, Language Model Research, Few-Shot Text Generation, Text Completion, Text, Cloud API

Recent Activity

Updated 5 years ago

7 Days

0

30 Days

0

90 Days

0

Quality

Quality
medium
Maturity
research

Categories

Learning Resources (Primary), Evals & Benchmarking, NLP & Text, Search & Knowledge, Other AI / ML, Foundation Models, Model Training

PM Skills

Product Discovery

Languages

No language breakdown recorded.

Timeline

Project created
May 18, 2020
Forked
Mar 14, 2026
Your last push
5 years ago
Upstream last push
5 years ago
Tracked since
Sep 18, 2020
