Trusted-AI/AIF360
A comprehensive set of fairness metrics for datasets and machine learning models, explanations for these metrics, and algorithms to mitigate bias in datasets and models.
Builder
Trusted-AI
Stars
2,784 (upstream count)
Forks
907 (upstream count)
Open Issues
0
Activity Score
0/100 (0 commits in the last 30 days)
Created
Aug 22, 2018
README Summary
AIF360 is a comprehensive Python toolkit developed by IBM's Trusted AI team that provides fairness metrics for evaluating bias in datasets and machine learning models. The library includes explanations for various fairness metrics and implements algorithms to detect and mitigate bias across different stages of the ML pipeline. It supports multiple fairness definitions and provides both individual and group fairness measures with practical mitigation strategies.
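The summary mentions group fairness measures of the kind AIF360 reports. As a minimal sketch, here are two common ones, statistical parity difference and disparate impact, computed by hand on synthetic toy data. This is plain Python for illustration only, not AIF360's API; the function name and data are hypothetical.

```python
def group_fairness(labels, groups, privileged=1):
    """Favorable-outcome rates per group and two common group-fairness metrics.

    labels: binary outcomes (1 = favorable); groups: protected-attribute values.
    Toy illustration of the concepts, not AIF360's API.
    """
    priv = [y for y, g in zip(labels, groups) if g == privileged]
    unpriv = [y for y, g in zip(labels, groups) if g != privileged]
    p_priv = sum(priv) / len(priv)        # P(Y=1 | privileged group)
    p_unpriv = sum(unpriv) / len(unpriv)  # P(Y=1 | unprivileged group)
    return {
        # 0 means parity; negative means the unprivileged group is worse off
        "statistical_parity_difference": p_unpriv - p_priv,
        # 1 means parity; the common "80% rule" flags values below 0.8
        "disparate_impact": p_unpriv / p_priv,
    }

# Synthetic data: outcome 1 = favorable, group 1 = privileged
labels = [1, 1, 1, 0, 1, 0, 0, 0, 1, 0]
groups = [1, 1, 1, 1, 1, 0, 0, 0, 0, 0]
m = group_fairness(labels, groups)
print(m)
```

On this toy data the privileged group's favorable rate is 0.8 and the unprivileged group's is 0.2, so the metrics flag a large disparity; AIF360's mitigation algorithms (pre-, in-, and post-processing) aim to move such metrics toward their parity values.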
AI Dev Skills
Unmapped
Recent Activity
Updated 5 months ago
7 Days
0
30 Days
0
90 Days
0
Quality
- Quality: high
- Maturity: production
Timeline
- Project created: Aug 22, 2018
- Forked: Mar 21, 2026
- Your last push: 5 months ago
- Upstream last push: 5 months ago
- Tracked since: Nov 13, 2025