openai/simple-evals
simple-evals
Simple-evals is OpenAI's framework for evaluating language models through standardized benchmarks and tests.
Builder

OpenAI
openai • ai-lab
Stars
4,424 (upstream star count)
Forks
481 (upstream fork count)
Open Issues
0
Activity Score
0/100 (0 commits in the last 30 days)
Created
Apr 11, 2024
README Summary
Simple-evals is OpenAI's framework for evaluating language models through standardized benchmarks and tests. It provides a collection of evaluation scripts and tools for assessing model performance across tasks spanning reasoning, knowledge, and safety. The framework is designed to be easily extensible and to produce reproducible, consistent evaluations.
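The "easily extensible" design described in the summary usually comes down to two small interfaces: a sampler that turns prompts into model completions, and per-benchmark eval classes that score the outputs and aggregate results. The sketch below illustrates that pattern; the names (SamplerBase, ExactMatchEval, EvalResult, CannedSampler) are illustrative assumptions, not the repository's actual API.

```python
# A minimal sketch of the extensible eval pattern: a sampler interface plus
# per-benchmark eval classes that score outputs and aggregate results.
# Class and method names are illustrative assumptions, not the repo's API.
from dataclasses import dataclass
from statistics import mean


class SamplerBase:
    """Anything that maps a prompt to a model completion."""

    def __call__(self, prompt: str) -> str:
        raise NotImplementedError


@dataclass
class EvalResult:
    score: float        # fraction of examples answered correctly
    num_examples: int


class ExactMatchEval:
    """Toy benchmark: the model must reproduce the reference answer exactly."""

    def __init__(self, examples: list[tuple[str, str]]):
        self.examples = examples  # (prompt, reference answer) pairs

    def __call__(self, sampler: SamplerBase) -> EvalResult:
        scores = [
            1.0 if sampler(prompt).strip() == answer else 0.0
            for prompt, answer in self.examples
        ]
        return EvalResult(score=mean(scores), num_examples=len(scores))


class CannedSampler(SamplerBase):
    """Stand-in for a real model client (e.g. a chat-completions wrapper)."""

    def __call__(self, prompt: str) -> str:
        return {"2 + 2 =": "4", "Capital of France?": "Paris"}.get(prompt, "?")


if __name__ == "__main__":
    eval_ = ExactMatchEval([("2 + 2 =", "4"), ("Capital of France?", "Paris")])
    print(eval_(CannedSampler()))  # EvalResult(score=1.0, num_examples=2)
```

Adding a new benchmark in this pattern means writing one more eval class; swapping the model under test means writing one more sampler.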
AI Dev Skills
Unmapped
Tags
Taxonomy
AI Trends
Deployment Context
Modalities
Recent Activity
Updated 8 months ago
7 Days
0
30 Days
0
90 Days
0
Quality
- Quality: medium
- Maturity: prototype
Categories
PM Skills
Languages
Timeline
- Project created: Apr 11, 2024
- Forked: Mar 14, 2026
- Your last push: 8 months ago
- Upstream last push: 8 months ago
- Tracked since: Jul 31, 2025
Similar Repos
pgvector cosine similarity · $0
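The Similar Repos list is labelled as pgvector cosine similarity. As an illustration only (the tracker's actual schema and embedding model are not shown here), the sketch below ranks repositories by cosine similarity between embedding vectors, the same metric pgvector exposes through its <=> cosine-distance operator; all repo names and vectors are made up for the example.

```python
# Illustrative sketch of a "similar repos" lookup via cosine similarity.
# The repo names, vectors, and SQL schema below are assumptions, not data
# from this tracker.
import numpy as np


def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity = 1 - cosine distance (pgvector's <=> operator)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))


# Hypothetical README/metadata embeddings; in a pgvector setup these would
# be stored in a Postgres `vector` column.
repo_embeddings = {
    "repo-a": np.array([0.9, 0.1, 0.3]),
    "repo-b": np.array([0.8, 0.2, 0.4]),
    "repo-c": np.array([0.0, 0.9, 0.1]),
}
query = np.array([0.85, 0.15, 0.35])  # embedding of the current repo

ranked = sorted(
    repo_embeddings,
    key=lambda name: cosine_similarity(query, repo_embeddings[name]),
    reverse=True,
)
for name in ranked:
    print(name, round(cosine_similarity(query, repo_embeddings[name]), 3))

# The equivalent query against an assumed pgvector table would look like:
#   SELECT name FROM repos ORDER BY embedding <=> %(query_embedding)s LIMIT 5;
```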