protectai/rebuff
rebuff
LLM Prompt Injection Detector
Builder

protectai
protectai • individual
Stars
1,456
Using upstream star count
Forks
132
Using upstream fork count
Open Issues
0
Activity Score
0/100
0 commits in 30d
Created
Apr 24, 2023
Project creation date
README Summary
Rebuff is a self-hardening prompt injection detector designed to protect AI applications from malicious prompt injection attacks. It provides a multi-layered defense system that learns from attacks to improve its detection capabilities over time. The tool offers both API and SDK integration options for easy implementation into existing LLM workflows.
AI Dev Skills
Unmapped
Prompt Injection DetectionLLM SecurityVector Database ImplementationNatural Language ProcessingMachine Learning ClassificationAI Safety and AlignmentAdversarial ML Defense
Tags
Prompt Injection DetectionLLM SecurityVector Database ImplementationNatural Language ProcessingMachine Learning ClassificationAI Safety and AlignmentAdversarial ML DefenseAI System ProtectionDeveloper ToolsProduction LLM GuardrailsLegal TechContent FilteringAI Safety EngineeringLLM Application SecurityResponsible AICloud APIOn-premiseFinTechText AnalysisPrompt Injection PreventionMalicious Input DetectionAI GovernanceEnterprise SoftwareTextAI SafetySecurity Pattern RecognitionSelf-hostedHealthcareTypeScript
Taxonomy
Deployment Context
Modalities
Skill Areas
Recent Activity
Updated 1 years ago
7 Days
0
30 Days
0
90 Days
0
Quality
beta- Quality
- medium
- Maturity
- beta
Categories
Dev Tools & AutomationPrimaryIndustry: FinTechNLP & TextSafety & AlignmentHealthcare & BiologyFinance & LegalOther AI / MLFoundation ModelsRAG & RetrievalEvals & Benchmarking
PM Skills
Developer Platform
Languages
TypeScript100.0%
Timeline
- Project created
- Apr 24, 2023
- Forked
- Mar 13, 2026
- Your last push
- 1 years ago
- Upstream last push
- 1 years ago
- Tracked since
- Aug 7, 2024
Similar Repos
pgvector cosine similarity · $0
Loading…