NVIDIA-NeMo/Guardrails
NeMo Guardrails is an open-source toolkit for easily adding programmable guardrails to LLM-based conversational systems.
Builder

NVIDIA-NeMo • individual
Stars
5,854
Using upstream star count
Forks
635
Using upstream fork count
Open Issues
0
Activity Score
0/100
0 commits in 30d
Created
Apr 18, 2023
Project creation date
README Summary
NeMo Guardrails is an open-source toolkit from NVIDIA for adding programmable safety guardrails to conversational systems built on large language models (LLMs). It lets developers implement content filtering, response validation, and behavioral constraints so that AI conversations remain safe, appropriate, and aligned with the intended use case, and it provides a flexible framework for defining custom rules and policies that can be applied across LLM applications.
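As a sketch of how such programmable rails are typically defined (the file names and dialog content below are illustrative examples, not taken from this repository), a guardrails configuration pairs a YAML model config with Colang rail definitions:

```yaml
# config/config.yml — illustrative model configuration (engine/model are placeholders)
models:
  - type: main
    engine: openai
    model: gpt-3.5-turbo-instruct
```

```colang
# config/rails.co — illustrative Colang 1.0 dialog rail
define user express greeting
  "hello"
  "hi there"

define bot express greeting
  "Hello! How can I help you today?"

define flow greeting
  user express greeting
  bot express greeting
```

A config directory like this would then be loaded in Python with `RailsConfig.from_path("./config")` and wrapped in an `LLMRails` instance, whose `generate` method applies the defined rails to each conversation turn.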
AI Dev Skills
Unmapped
Tags
Taxonomy
Deployment Context
Modalities
Skill Areas
Recent Activity
Updated 1 month ago
7 Days
0
30 Days
0
90 Days
0
Quality
- Quality: high
- Maturity: beta
Categories
PM Skills
Languages
Timeline
- Project created: Apr 18, 2023
- Forked: Mar 13, 2026
- Your last push: 1 month ago
- Upstream last push: 6 days ago
- Tracked since: Mar 13, 2026