
protectai/modelscan

modelscan

Protection against Model Serialization Attacks

Builder

protectai • individual

Stars

671

Using upstream star count

Forks

135

Using upstream fork count

Open Issues

0

Activity Score

0/100

0 commits in 30d

Created

Jul 25, 2023

Project creation date

README Summary

ModelScan is a security tool designed to protect against model serialization attacks by scanning machine learning models for malicious code before they are loaded. It provides both CLI and Python API interfaces to analyze popular ML model formats like pickle, joblib, and others for potential security threats. The tool helps developers and organizations safely use third-party ML models by detecting embedded malicious payloads.
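The serialization attack the summary describes is easiest to see with Python's pickle format: an object's `__reduce__` hook can make deserialization execute an arbitrary callable. The sketch below shows such a payload and a toy static check in the spirit of ModelScan — walking the pickle opcode stream without ever unpickling it. The names `EvilModel` and `find_suspicious_globals` are illustrative, and this is not ModelScan's actual implementation.

```python
import os
import pickle
import pickletools

# Hypothetical malicious "model": __reduce__ makes unpickling call
# os.system -- the core of a model serialization attack.
class EvilModel:
    def __reduce__(self):
        return (os.system, ("echo pwned",))

# Dangerous (module, name) imports a scanner might flag.
SUSPICIOUS = {
    ("os", "system"), ("posix", "system"), ("nt", "system"),
    ("builtins", "eval"), ("builtins", "exec"), ("subprocess", "Popen"),
}

def find_suspicious_globals(data: bytes):
    """Statically flag dangerous imports in a pickle, without loading it."""
    findings = []
    for opcode, arg, _pos in pickletools.genops(data):
        # GLOBAL's argument is "module name" as a single space-joined string.
        if opcode.name == "GLOBAL" and isinstance(arg, str):
            module, _, name = arg.partition(" ")
            if (module, name) in SUSPICIOUS:
                findings.append((module, name))
    return findings

# Protocol 2 keeps imports in plain GLOBAL opcodes, which this toy
# scanner understands (protocol 4+ uses STACK_GLOBAL instead).
malicious = pickle.dumps(EvilModel(), protocol=2)
benign = pickle.dumps({"weights": [0.1, 0.2]}, protocol=2)

print(find_suspicious_globals(malicious))  # flags the system call
print(find_suspicious_globals(benign))     # []
```

Note the payload is never passed to `pickle.loads`; the whole point of such a scan is to inspect the byte stream before anything is deserialized.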

AI Dev Skills

Unmapped

AI Security · Model Serialization Vulnerability Detection · Machine Learning Model Scanning · Malware Detection in AI Models · AI Supply Chain Security · Model Forensics

Tags

AI Security · Model Serialization Vulnerability Detection · Machine Learning Model Scanning · Malware Detection in AI Models · AI Supply Chain Security · Model Forensics · Model Files · Trustworthy AI · Model Integrity Verification · On-premise · Healthcare · ML Pipeline Security · Model Security Scanning · AI Governance · Cloud · CI/CD Pipeline · AI Supply Chain Protection · Cybersecurity · Enterprise AI · Financial Services · Cloud Services · Malicious Model Detection · AI Safety · Pre-deployment Model Validation · Self-hosted · Python

Taxonomy

Recent Activity

Updated 1 month ago

7 Days

0

30 Days

0

90 Days

0

Quality

Quality
medium
Maturity
beta

Categories

ML Platform & Infrastructure (Primary) · Safety & Alignment · Healthcare & Biology · Other AI / ML · MLOps & Infrastructure · Dev Tools & Automation · Inference & Serving

PM Skills

Scale & Reliability · Developer Platform

Languages

Python 100.0%

Timeline

Project created
Jul 25, 2023
Forked
Mar 21, 2026
Your last push
1 month ago
Upstream last push
1 month ago
Tracked since
Feb 18, 2026
