
AI Glossary

The complete dictionary of Artificial Intelligence

162 categories · 2,032 subcategories · 23,060 terms

Anchors

A method that identifies simple and sufficient decision rules (anchors) that explain a model's prediction for a given instance with high fidelity.


Shapley Value

A concept from game theory that measures the average marginal contribution of a player (feature) across all possible coalitions, serving as the foundation for SHAP.
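As a worked illustration, Shapley values for a small cooperative game can be computed exactly by averaging each player's marginal contribution over every ordering in which the coalition can form. The toy game below is hypothetical, not a SHAP implementation:

```python
import math
from itertools import permutations

def shapley_values(players, value):
    """Exact Shapley values: each player's marginal contribution,
    averaged over all orderings in which coalitions can form."""
    phi = {p: 0.0 for p in players}
    for order in permutations(players):
        coalition = set()
        for p in order:
            before = value(frozenset(coalition))
            coalition.add(p)
            phi[p] += value(frozenset(coalition)) - before
    n_orders = math.factorial(len(players))
    return {p: total / n_orders for p, total in phi.items()}

# Hypothetical toy game: a coalition is worth 10 only if both
# "a" and "b" take part; "c" contributes nothing.
v = lambda s: 10.0 if {"a", "b"} <= s else 0.0
print(shapley_values(["a", "b", "c"], v))  # a: 5.0, b: 5.0, c: 0.0
```

Enumerating all orderings is exponential in the number of players, which is why practical SHAP implementations rely on sampling or model-specific shortcuts.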


Input Perturbation

The process of creating slight variations in the input data to observe the effect on the model's prediction, used by methods like LIME to build a local neighborhood.
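A minimal sketch of the idea, assuming a keep-or-replace perturbation scheme where each feature is randomly swapped for a hypothetical baseline value; the black-box model here is invented for illustration:

```python
import random

def perturb(instance, n_samples, p_keep=0.7, baseline=0.0, seed=0):
    """Create perturbed copies of an instance: each feature is kept
    with probability p_keep, otherwise replaced by a baseline value."""
    rng = random.Random(seed)
    return [[x if rng.random() < p_keep else baseline for x in instance]
            for _ in range(n_samples)]

# Hypothetical black box whose prediction depends only on feature 0.
model = lambda z: 1.0 if z[0] > 0.5 else 0.0

instance = [0.9, 0.2, 0.4]
for z in perturb(instance, n_samples=5):
    print(z, "->", model(z))
```

Watching how predictions flip as feature 0 is replaced (while features 1 and 2 change nothing) is exactly the signal local explanation methods exploit.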


Fidelity

A metric evaluating how faithfully a local explanation (like LIME's simple model) mimics the behavior of the complex model in its neighborhood.
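One simple way to measure local fidelity for classifiers is the agreement rate between the surrogate and the black box on the neighborhood samples. The two models and the neighborhood below are hypothetical:

```python
def fidelity(black_box, surrogate, neighborhood):
    """Fraction of neighborhood points on which the simple surrogate
    reproduces the black-box model's predicted class."""
    hits = sum(black_box(z) == surrogate(z) for z in neighborhood)
    return hits / len(neighborhood)

# Hypothetical models that agree on 3 of the 4 local points.
bb = lambda z: int(z[0] > 0.5)
sg = lambda z: int(z[0] + 0.1 * z[1] > 0.5)
pts = [[0.6, 0.0], [0.4, 0.0], [0.45, 0.9], [0.7, 0.2]]
print(fidelity(bb, sg, pts))  # 0.75
```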


Model-Agnostic Explanation

An interpretability approach that treats the predictive model as a black box, interacting only with its inputs and outputs to generate explanations.


Saliency Map

A visualization that highlights the pixels or features of an input that most influenced a model's prediction, often obtained by computing the gradient.
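For a black box whose gradient is not directly available, the per-input sensitivity can be approximated by central finite differences. This sketch uses a hypothetical toy model in place of image pixels:

```python
def saliency(model, x, eps=1e-4):
    """Gradient-magnitude saliency estimated by central finite
    differences: how much each input dimension moves the output."""
    grads = []
    for i in range(len(x)):
        up, down = x[:], x[:]
        up[i] += eps
        down[i] -= eps
        grads.append(abs(model(up) - model(down)) / (2 * eps))
    return grads

# Hypothetical model: strong dependence on input 0, weak on input 2.
f = lambda v: 5.0 * v[0] + 0.1 * v[2]
print(saliency(f, [1.0, 1.0, 1.0]))  # roughly [5.0, 0.0, 0.1]
```

In deep learning frameworks the same quantity is obtained in one backward pass rather than one forward pass per feature.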


Kernel Neighborhood

In LIME, a function that defines the proximity between the original instance and the perturbed instances, weighting their influence in the local explanation model.
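A common choice is an exponential kernel over a distance between instances; this minimal sketch assumes Euclidean distance and an arbitrary width of 0.75:

```python
import math

def exponential_kernel(x, z, width=0.75):
    """Proximity weight between the original instance x and a perturbed
    instance z: exp(-d(x, z)^2 / width^2), with d the Euclidean distance."""
    d2 = sum((a - b) ** 2 for a, b in zip(x, z))
    return math.exp(-d2 / width ** 2)

x = [1.0, 0.0]
print(exponential_kernel(x, [1.0, 0.0]))  # identical instance -> 1.0
print(exponential_kernel(x, [0.0, 1.0]))  # distant instance -> near 0
```

The width controls how local the explanation is: a small width makes far-away perturbations nearly invisible to the surrogate model.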


Explanation Rule

A simple logical condition (e.g., IF feature_A > X AND feature_B < Y) that captures the primary reason for a specific prediction, typical of methods like Anchors.
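Such a rule is typically judged by its coverage (how often it applies) and its precision (how often its prediction holds when it does apply). The rule and the four-row dataset below are hypothetical:

```python
def rule(instance):
    """Hypothetical anchor rule:
    IF feature_A > 0.5 AND feature_B < 0.3 THEN predict class 1."""
    return instance["feature_A"] > 0.5 and instance["feature_B"] < 0.3

data = [
    {"feature_A": 0.9, "feature_B": 0.1, "label": 1},
    {"feature_A": 0.8, "feature_B": 0.2, "label": 1},
    {"feature_A": 0.2, "feature_B": 0.1, "label": 0},
    {"feature_A": 0.9, "feature_B": 0.6, "label": 0},
]
covered = [d for d in data if rule(d)]
precision = sum(d["label"] == 1 for d in covered) / len(covered)
coverage = len(covered) / len(data)
print(f"coverage={coverage:.2f} precision={precision:.2f}")
```

Anchors searches for the rule with the highest coverage whose precision stays above a chosen threshold.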


Post-hoc Interpretability

The analysis of a model after its training to understand its decisions, as opposed to intrinsically interpretable models.


SHAP Kernel Explainer

A SHAP implementation using kernel weighting to estimate Shapley values, making it model-agnostic but potentially slower.


SHAP Tree Explainer

An optimized SHAP algorithm that calculates exact Shapley values for tree-based models (like XGBoost, LightGBM) very efficiently.


Local Surrogate Explanation

The fundamental principle of LIME, consisting of training a simple and interpretable model (surrogate) to approximate the behavior of the complex model locally.
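The whole pipeline can be sketched end to end for a single feature: perturb around the instance, query the black box, weight samples by proximity, and fit a weighted linear surrogate in closed form. All parameter values and the quadratic black box below are assumptions for illustration, not LIME's actual defaults:

```python
import math
import random

def local_surrogate(model, x, n=500, spread=0.3, width=0.5, seed=0):
    """LIME-style sketch for one feature: sample perturbations around x,
    weight them with an exponential proximity kernel, then fit a
    weighted linear surrogate in closed form (weighted least squares)."""
    rng = random.Random(seed)
    zs = [x + rng.gauss(0.0, spread) for _ in range(n)]
    ys = [model(z) for z in zs]
    ws = [math.exp(-((x - z) ** 2) / width ** 2) for z in zs]
    total = sum(ws)
    zbar = sum(w * z for w, z in zip(ws, zs)) / total
    ybar = sum(w * y for w, y in zip(ws, ys)) / total
    slope = (sum(w * (z - zbar) * (y - ybar) for w, z, y in zip(ws, zs, ys))
             / sum(w * (z - zbar) ** 2 for w, z in zip(ws, zs)))
    return slope, ybar - slope * zbar

# Black box f(z) = z^2 explained at x = 2: the surrogate's slope should
# land close to the true local derivative 2x = 4.
slope, intercept = local_surrogate(lambda z: z * z, 2.0)
print(slope)
```

The surrogate's coefficient is the explanation: locally, the black box behaves like a line of that slope, even though globally it is quadratic.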
