openai/finetune-transformer-lm
Code and model for the paper "Improving Language Understanding by Generative Pre-Training"
Builder
OpenAI (openai • ai-lab)
Stars
2,283
Forks
513
Open Issues
0
Activity Score
0/100
0 commits in 30d
Created
Jun 11, 2018
README Summary
This repository contains the code and pre-trained model for OpenAI's GPT (Generative Pre-trained Transformer) from the paper "Improving Language Understanding by Generative Pre-Training". It implements a transformer-based language model that is first pre-trained without supervision on a large text corpus and then fine-tuned with supervision for specific downstream tasks, improving the state of the art on 9 of the 12 natural language understanding benchmarks studied.
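A key detail of the fine-tuning recipe is that the language-modeling objective is kept as an auxiliary loss alongside the supervised task loss: L3 = L2 + λ·L1, with λ = 0.5 in the paper. Below is a minimal PyTorch sketch of that combined objective; the repo itself is TensorFlow 1.x, and all function names and tensor shapes here are illustrative assumptions, not the repo's actual code.

```python
import torch
import torch.nn.functional as F

def combined_finetune_loss(lm_logits, lm_targets, clf_logits, clf_targets,
                           lm_coef=0.5):
    """Combined fine-tuning objective L3 = L2 + lambda * L1 from the paper.

    Illustrative shapes (assumptions, not the repo's interfaces):
      lm_logits:   (batch, seq_len, vocab)  next-token predictions
      lm_targets:  (batch, seq_len)         token ids shifted by one
      clf_logits:  (batch, n_classes)       task head output
      clf_targets: (batch,)                 gold task labels
    """
    # Supervised task loss L2: cross-entropy on the classification head.
    clf_loss = F.cross_entropy(clf_logits, clf_targets)
    # Auxiliary language-modeling loss L1: next-token cross-entropy.
    lm_loss = F.cross_entropy(
        lm_logits.reshape(-1, lm_logits.size(-1)),  # (batch*seq, vocab)
        lm_targets.reshape(-1),                     # (batch*seq,)
    )
    # The paper weights the auxiliary LM term with lambda = 0.5.
    return clf_loss + lm_coef * lm_loss

# Toy tensors, for illustration only.
lm_logits = torch.randn(2, 16, 40000)
lm_targets = torch.randint(0, 40000, (2, 16))
clf_logits = torch.randn(2, 2)
clf_targets = torch.randint(0, 2, (2,))
loss = combined_finetune_loss(lm_logits, lm_targets, clf_logits, clf_targets)
```

The paper reports that keeping the LM term during fine-tuning helps generalization on larger datasets and speeds convergence, which is why it appears as a weighted auxiliary loss rather than being dropped after pre-training.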
AI Dev Skills
Unmapped
Tags
Taxonomy
Deployment Context
Modalities
Skill Areas
Recent Activity
Updated 7 years ago
Commits (7 days)
0
Commits (30 days)
0
Commits (90 days)
0
Quality
- Quality: medium
- Maturity: research
Categories
PM Skills
Languages
Timeline
- Project created: Jun 11, 2018
- Forked: Mar 14, 2026
- Your last push: 7 years ago
- Upstream last push: 7 years ago
- Tracked since: Jan 25, 2019