Library/adversarial-robustness-toolbox (forked)

Trusted-AI/adversarial-robustness-toolbox

adversarial-robustness-toolbox

Adversarial Robustness Toolbox (ART) - Python Library for Machine Learning Security - Evasion, Poisoning, Extraction, Inference - Red and Blue Teams

Builder

Trusted-AI • individual

Stars

5,913

Using upstream star count

Forks

1,305

Using upstream fork count

Open Issues

0

Activity Score

0/100

0 commits in 30d

Created

Mar 15, 2018

Project creation date

README Summary

The Adversarial Robustness Toolbox (ART) is a comprehensive Python library designed to enhance machine learning security against adversarial attacks. It provides tools for both offensive (red team) and defensive (blue team) security research, covering evasion attacks, data poisoning, model extraction, and privacy inference attacks. The library supports multiple ML frameworks and offers standardized implementations of state-of-the-art adversarial techniques.
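The evasion attacks the summary mentions can be illustrated with a fast-gradient-sign (FGSM-style) perturbation: nudge each input in the direction that increases the model's loss and watch accuracy collapse. The sketch below is a minimal, self-contained NumPy illustration against a hand-rolled logistic regression — it is not ART's actual API (ART exposes attacks such as evasion through framework-specific estimator wrappers), and all names here are illustrative.

```python
# Minimal FGSM-style evasion sketch on a hand-rolled logistic regression.
# NOT ART's API — purely a conceptual illustration of an evasion attack.
import numpy as np

rng = np.random.default_rng(0)

# Toy binary data: class 0 clustered near -1, class 1 near +1.
X = np.vstack([rng.normal(-1, 0.3, (50, 2)), rng.normal(1, 0.3, (50, 2))])
y = np.array([0] * 50 + [1] * 50)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Train logistic regression by plain gradient descent.
w, b = np.zeros(2), 0.0
for _ in range(500):
    p = sigmoid(X @ w + b)
    w -= 0.5 * (X.T @ (p - y)) / len(y)
    b -= 0.5 * np.mean(p - y)

def predict(Xs):
    return (sigmoid(Xs @ w + b) >= 0.5).astype(int)

clean_acc = np.mean(predict(X) == y)

# FGSM: x_adv = x + eps * sign(dL/dx).
# For logistic loss, dL/dx = (p - y) * w.
eps = 1.5
p = sigmoid(X @ w + b)
grad_x = (p - y)[:, None] * w[None, :]
X_adv = X + eps * np.sign(grad_x)
adv_acc = np.mean(predict(X_adv) == y)

print(clean_acc, adv_acc)  # adversarial accuracy drops well below clean accuracy
```

The same one-step gradient-sign idea underlies the library's gradient-based evasion attacks; real use would wrap a trained framework model in an estimator and call an attack's generate step rather than computing gradients by hand.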

AI Dev Skills

Unmapped

Adversarial Machine Learning, Model Security Testing, Adversarial Attack Generation, Adversarial Defense Mechanisms, Data Poisoning Detection, Model Extraction Prevention, Privacy-Preserving Machine Learning, Membership Inference Attacks, Model Inversion Attacks, Evasion Attack Defense, Robustness Evaluation, AI Red Team Operations, AI Blue Team Operations

Tags

Adversarial Machine Learning, Model Security Testing, Adversarial Attack Generation, Adversarial Defense Mechanisms, Data Poisoning Detection, Model Extraction Prevention, Privacy-Preserving Machine Learning, Membership Inference Attacks, Model Inversion Attacks, Evasion Attack Defense, Robustness Evaluation, AI Red Team Operations, AI Blue Team Operations, Robust AI Systems, Financial Services, Robustness Benchmarking, Adversarial Training Implementation, Privacy-Preserving AI, Research Environment, Government, Autonomous Vehicles, AI Security, Tabular, Responsible AI, Audio, Model Vulnerability Assessment, Privacy Attack Simulation, AI Safety, Cloud, Self-hosted, Image, On-premise, Video, Text, Security Testing of ML Models, Healthcare, Cybersecurity, Compliance Testing for AI Systems, AI System Penetration Testing, Model Hardening, Defense, Python

Recent Activity

Updated 4 months ago

7 Days

0

30 Days

0

90 Days

0

Quality

high

Maturity

production

Categories

Dev Tools & Automation (Primary), Learning Resources, Evals & Benchmarking, Inference & Serving, Safety & Alignment, Coding & Dev Tools, Healthcare & Biology, Finance & Legal, Search & Knowledge, Other AI / ML, AI Agents, Model Training, Generative Media, Robotics

PM Skills

Developer Platform

Languages

Python 100.0%

Timeline

Project created: Mar 15, 2018
Forked: Mar 21, 2026
Your last push: 4 months ago
Upstream last push: 4 months ago
Tracked since: Dec 12, 2025
