Trusted-AI/adversarial-robustness-toolbox
Adversarial Robustness Toolbox (ART) - Python Library for Machine Learning Security - Evasion, Poisoning, Extraction, Inference - Red and Blue Teams
Builder

Trusted-AI
Trusted-AI • organization
Stars
5,913
Using upstream star count
Forks
1,305
Using upstream fork count
Open Issues
0
Activity Score
0/100
0 commits in 30d
Created
Mar 15, 2018
Project creation date
README Summary
The Adversarial Robustness Toolbox (ART) is a comprehensive Python library designed to enhance machine learning security against adversarial attacks. It provides tools for both offensive (red team) and defensive (blue team) security research, covering evasion attacks, data poisoning, model extraction, and privacy inference attacks. The library supports multiple ML frameworks and offers standardized implementations of state-of-the-art adversarial techniques.
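The evasion attacks the summary mentions perturb an input so a trained model misclassifies it. As a minimal, framework-free sketch of the idea (an FGSM-style step on a toy logistic model, not ART's actual API), each feature is nudged by `eps` in the sign of the loss gradient with respect to the input:

```python
import math

# Toy logistic "model": p(y=1|x) = sigmoid(w.x + b); weights are illustrative.
w = [2.0, -1.5]
b = 0.5

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def predict(x):
    return sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)

def fgsm_step(x, y_true, eps=0.25):
    """One FGSM-style evasion step: move each feature by eps in the
    direction that increases the cross-entropy loss."""
    p = predict(x)
    # For cross-entropy loss, d(loss)/d(x_i) simplifies to (p - y) * w_i.
    grad = [(p - y_true) * wi for wi in w]
    return [xi + eps * math.copysign(1.0, g) for xi, g in zip(x, grad)]

x_clean = [1.0, 1.0]            # clean input with true label 1
x_adv = fgsm_step(x_clean, 1.0)
print(predict(x_clean), predict(x_adv))  # confidence in the true label drops
```

ART wraps this pattern (and far stronger attacks) behind a common estimator interface across frameworks; the sketch above only shows the underlying gradient-sign mechanism.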
AI Dev Skills
Unmapped
Tags
Taxonomy
Deployment Context
Skill Areas
Recent Activity
Updated 4 months ago
7 Days
0
30 Days
0
90 Days
0
Quality
- high
Maturity
- production
Categories
PM Skills
Languages
Timeline
- Project created: Mar 15, 2018
- Forked: Mar 21, 2026
- Your last push: 4 months ago
- Upstream last push: 4 months ago
- Tracked since: Dec 12, 2025