Library/stanford-cme-295-transformers-large-language-models

afshinea/stanford-cme-295-transformers-large-language-models

stanford-cme-295-transformers-large-language-models

VIP cheatsheet for Stanford's CME 295 Transformers and Large Language Models

Builder

afshinea • individual

Stars

4,167

Using upstream star count

Forks

586

Using upstream fork count

Open Issues

0

Activity Score

0/100

0 commits in 30d

Created

Mar 23, 2025

Project creation date

README Summary

This repository contains a VIP (Very Important Points) cheatsheet for Stanford's CME 295 course on Transformers and Large Language Models. It serves as a comprehensive reference guide covering key concepts, architectures, and techniques related to transformer models and LLMs. The cheatsheet is designed to help students quickly review and understand the most important material from the course.
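Among the concepts the cheatsheet covers, the attention mechanism is central to the transformer architecture. As a minimal illustration (not taken from the repository itself; shapes and names are chosen for the example), scaled dot-product attention can be sketched in NumPy as:

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V."""
    d_k = K.shape[-1]
    # Similarity scores between each query and each key, scaled by sqrt(d_k)
    scores = Q @ K.T / np.sqrt(d_k)
    # Numerically stable softmax over the key dimension
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    # Each output is a weighted average of the value vectors
    return weights @ V

rng = np.random.default_rng(0)
Q = rng.normal(size=(3, 4))  # 3 queries, key dimension d_k = 4
K = rng.normal(size=(5, 4))  # 5 keys
V = rng.normal(size=(5, 4))  # 5 values
out = scaled_dot_product_attention(Q, K, V)
print(out.shape)  # one output vector per query: (3, 4)
```

Each row of the output is a convex combination of the value rows, with weights determined by query-key similarity; this is the building block the course material elaborates into multi-head attention.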

AI Dev Skills

Unmapped

Transformer Architecture, Large Language Model Theory, Deep Learning Fundamentals, Natural Language Processing, Attention Mechanisms, Neural Network Architecture Design

Tags

Transformer Architecture, Large Language Model Theory, Deep Learning Fundamentals, Natural Language Processing, Attention Mechanisms, Neural Network Architecture Design, Transformer Architecture Learning, Large Language Models, Foundation Models, Academic Reference Material, LLM Concept Review, Text, Course Study Guide, Education

Taxonomy

Recent Activity

Updated 8 months ago

7 Days

0

30 Days

0

90 Days

0

Quality

medium

Maturity

research

Categories

Foundation Models (Primary), Coding & Dev Tools, Other AI / ML, Learning Resources, NLP & Text

PM Skills

Product Discovery

Languages

No language breakdown recorded.

Timeline

Project created
Mar 23, 2025
Forked
Nov 16, 2025
Your last push
8 months ago
Upstream last push
8 months ago
Tracked since
Jul 27, 2025
