
openai/finetune-transformer-lm

Code and model for the paper "Improving Language Understanding by Generative Pre-Training"

Builder

OpenAI

openai • ai-lab

Stars

2,283

Using upstream star count

Forks

513

Using upstream fork count

Open Issues

0

Activity Score

0/100

0 commits in 30d

Created

Jun 11, 2018

Project creation date

README Summary

This repository contains the code and pre-trained model for OpenAI's GPT (Generative Pre-trained Transformer) from the paper "Improving Language Understanding by Generative Pre-Training". It implements a transformer-based language model that uses unsupervised pre-training on a large corpus followed by supervised fine-tuning for specific downstream tasks. The approach demonstrates significant improvements on various natural language understanding benchmarks.
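Concretely, the paper's recipe is: pre-train a transformer decoder as a language model on unlabeled text, then add a small task-specific head and fine-tune on the supervised objective plus a weighted auxiliary language-modeling loss (L3 = L2 + λ·L1, with λ = 0.5 in the paper). The sketch below illustrates that joint objective in PyTorch; it assumes a generic pre-trained `transformer` module, all names are hypothetical, and it is not the repository's actual TensorFlow implementation.

```python
import torch.nn as nn
import torch.nn.functional as F

class GPTFineTuner(nn.Module):
    """Hypothetical wrapper: `transformer` is any pre-trained decoder mapping
    token ids of shape (batch, seq) to hidden states (batch, seq, d_model)."""

    def __init__(self, transformer, d_model, vocab_size, n_classes):
        super().__init__()
        self.transformer = transformer
        self.lm_head = nn.Linear(d_model, vocab_size, bias=False)  # next-token logits
        self.clf_head = nn.Linear(d_model, n_classes)              # task logits

    def forward(self, tokens):
        h = self.transformer(tokens)          # (batch, seq, d_model)
        lm_logits = self.lm_head(h)           # language-modeling logits
        clf_logits = self.clf_head(h[:, -1])  # classify from the final position
        return lm_logits, clf_logits

def finetune_loss(lm_logits, clf_logits, tokens, labels, lm_coef=0.5):
    # Auxiliary LM loss: predict token t+1 from the hidden state at position t.
    lm_loss = F.cross_entropy(
        lm_logits[:, :-1].reshape(-1, lm_logits.size(-1)),
        tokens[:, 1:].reshape(-1),
    )
    clf_loss = F.cross_entropy(clf_logits, labels)  # supervised task loss
    return clf_loss + lm_coef * lm_loss             # L3 = L2 + λ·L1
```

Per the paper, keeping the language-modeling term during fine-tuning both improves generalization of the supervised model and accelerates convergence.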

AI Dev Skills

Unmapped

Transformer Architecture, Language Model Pre-training, Unsupervised Learning, Transfer Learning, Fine-tuning, Natural Language Understanding, Generative Pre-training, Self-Supervised Learning

Tags

Transformer Architecture, Language Model Pre-training, Unsupervised Learning, Transfer Learning, Fine-tuning, Natural Language Understanding, Generative Pre-training, Self-Supervised Learning, Natural Language Understanding Tasks, Self-hosted, Language Model Fine-tuning, Transfer Learning for NLP, Text, Foundation Models, Text Classification, Python


Recent Activity

Updated 7 years ago

7 Days

0

30 Days

0

90 Days

0

Quality

Quality
medium
Maturity
research

Categories

Foundation Models (Primary), Model Training, Generative Media, NLP & Text, Other AI / ML

PM Skills

Scale & Reliability

Languages

Python 100.0%

Timeline

Project created
Jun 11, 2018
Forked
Mar 14, 2026
Your last push
7 years ago
Upstream last push
7 years ago
Tracked since
Jan 25, 2019

Similar Repos

Similar repositories are ranked by pgvector cosine similarity.
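Such a list is typically produced by storing one embedding per repository in a pgvector column and ranking neighbors with pgvector's cosine-distance operator (`<=>`). The sketch below is a hypothetical illustration of that query pattern, not this dashboard's actual schema or code; the database, table, and column names are invented.

```python
import psycopg  # psycopg 3; assumes a Postgres database with the pgvector extension

REPO = "openai/finetune-transformer-lm"

with psycopg.connect("dbname=repo_index") as conn:
    # Fetch the target repo's embedding (returned as a vector literal string).
    target = conn.execute(
        "SELECT embedding FROM repos WHERE full_name = %s", (REPO,)
    ).fetchone()[0]

    # <=> is pgvector's cosine-distance operator; similarity = 1 - distance,
    # so ordering by ascending distance returns the most similar repos first.
    rows = conn.execute(
        """
        SELECT full_name, 1 - (embedding <=> %(t)s::vector) AS similarity
        FROM repos
        WHERE full_name <> %(r)s
        ORDER BY embedding <=> %(t)s::vector
        LIMIT 5
        """,
        {"t": target, "r": REPO},
    )
    for full_name, similarity in rows:
        print(f"{full_name}  {similarity:.3f}")
```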