Library/lime (Forked)

marcotcr/lime

lime

Lime: Explaining the predictions of any machine learning classifier

Builder

marcotcr • individual

Stars

12,110

Using upstream star count

Forks

1,857

Using upstream fork count

Open Issues

0

Activity Score

0/100

0 commits in 30d

Created

Mar 15, 2016

Project creation date

README Summary

LIME (Local Interpretable Model-agnostic Explanations) is a technique that explains the predictions of any machine learning classifier in an interpretable and faithful manner by learning an interpretable model locally around the prediction. It provides explanations for individual predictions by perturbing the input and seeing how the predictions change, making black-box models more transparent and trustworthy.
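The perturb-and-fit idea described above can be sketched in a few lines of numpy. This is a minimal illustration of the local-surrogate technique, not the lime library's actual API: the function name `explain_locally`, the Gaussian perturbation scheme, and the exponential proximity kernel are all simplifying assumptions for this sketch.

```python
import numpy as np

def explain_locally(predict_fn, x, num_samples=500, kernel_width=0.75, seed=0):
    """LIME-style sketch: perturb x, weight samples by proximity to x,
    and fit a weighted linear surrogate whose coefficients act as
    local feature importances. Hypothetical helper, not lime's API."""
    rng = np.random.default_rng(seed)
    d = x.shape[0]
    # Perturb the instance with Gaussian noise around x.
    Z = x + rng.normal(scale=1.0, size=(num_samples, d))
    y = predict_fn(Z)  # query the black-box model on perturbed samples
    # Proximity weights: perturbations closer to x matter more.
    dist = np.linalg.norm(Z - x, axis=1)
    w = np.exp(-(dist ** 2) / (kernel_width ** 2))
    # Weighted least squares: scale rows of [1, Z] and y by sqrt(w).
    A = np.hstack([np.ones((num_samples, 1)), Z]) * np.sqrt(w)[:, None]
    b = y * np.sqrt(w)
    beta, *_ = np.linalg.lstsq(A, b, rcond=None)
    return beta[1:]  # per-feature local importance (intercept dropped)

# Toy black box where only feature 0 drives the prediction.
black_box = lambda Z: 3.0 * Z[:, 0]
importances = explain_locally(black_box, np.array([1.0, 2.0]))
```

On this exactly linear toy model the surrogate recovers the true coefficients, attributing essentially all importance to feature 0 and none to feature 1; for real classifiers the fit is only locally faithful, which is the point of the method.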

AI Dev Skills

Unmapped

Machine Learning Explainability · Model Interpretability · Local Surrogate Modeling · Feature Importance Analysis · Model-Agnostic Explanation Methods · Permutation-based Feature Attribution

Tags

Machine Learning Explainability · Model Interpretability · Local Surrogate Modeling · Feature Importance Analysis · Model-Agnostic Explanation Methods · Permutation-based Feature Attribution · Building Trust in ML Predictions · Responsible AI · AI Transparency · Self-hosted · Clinical Decision Support Explanation · Browser/WASM · AI Safety · Model Bias Detection · Regulatory Compliance Reporting · Image · Text · Explainable AI · Model Debugging and Validation · Tabular · JavaScript

Taxonomy

Recent Activity

Updated 1 year ago

7 Days

0

30 Days

0

90 Days

0

Quality

Quality
medium
Maturity
prototype

Categories

Observability & Monitoring (Primary) · Safety & Alignment · Healthcare & Biology · Finance & Legal · Edge & Mobile AI · Other AI / ML

PM Skills

Developer Platform

Languages

JavaScript 100.0%

Timeline

Project created
Mar 15, 2016
Forked
Mar 22, 2026
Your last push
1 year ago
Upstream last push
1 year ago
Tracked since
Jul 25, 2024
