protectai/modelscan
modelscan
Protection against Model Serialization Attacks
Builder

protectai
protectai • individual
Stars
671
Forks
135
Open Issues
0
Activity Score
0/100
0 commits in 30d
Created
Jul 25, 2023
README Summary
ModelScan is a security tool designed to protect against model serialization attacks by scanning machine learning models for malicious code before they are loaded. It provides both CLI and Python API interfaces to analyze popular ML model formats like pickle, joblib, and others for potential security threats. The tool helps developers and organizations safely use third-party ML models by detecting embedded malicious payloads.
AI Dev Skills
Unmapped
Tags
Taxonomy
Deployment Context
Modalities
Skill Areas
Recent Activity
Updated 1 month ago
7 Days
0
30 Days
0
90 Days
0
Quality
- Quality: medium
- Maturity: beta
Categories
PM Skills
Languages
Timeline
- Project created
- Jul 25, 2023
- Forked
- Mar 21, 2026
- Your last push
- 1 month ago
- Upstream last push
- 1 month ago
- Tracked since
- Feb 18, 2026