
AI Glossary

The complete dictionary of Artificial Intelligence

162 categories · 2,032 subcategories · 23,060 terms

Anchors

Local explanation approach that identifies high-precision IF-THEN rules (anchors) which "anchor" a prediction: as long as the rule's conditions hold, changes to the remaining features almost never alter the prediction.


Counterfactual Explanations

Method that generates minimal hypothetical scenarios showing how input features would have to change to obtain a different prediction, thereby revealing the model's local decision boundaries.
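A minimal sketch of the idea, using a hypothetical two-feature linear "credit" model and a simple gradient search that trades off flipping the decision against staying close to the original input (the model, learning rate, and penalty weight are all illustrative assumptions):

```python
import numpy as np

def score(x):
    # toy linear credit model; the decision is "approve" when score > 0
    return 2*x[0] + x[1] - 3

def counterfactual(x, lr=0.05, lam=0.1, steps=500):
    # gradient ascent on the score, penalized by distance to the original
    # input, stopping as soon as the decision flips
    cf = x.astype(float).copy()
    grad_score = np.array([2.0, 1.0])   # d(score)/dx for the toy model
    for _ in range(steps):
        if score(cf) > 0:
            return cf                   # small change that flips the outcome
        cf += lr * (grad_score - lam * (cf - x))
    return cf

x = np.array([0.5, 0.5])                # denied: score(x) = -1.5
cf = counterfactual(x)
# cf - x is the "what would have to change" part of the explanation
```

Real counterfactual methods add constraints such as sparsity or plausibility; this sketch keeps only the core flip-plus-proximity objective.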


Local Surrogate Models

Simplified models trained to approximate the behavior of a complex model only in the neighborhood of a specific prediction, providing locally interpretable explanations.


LRP (Layer-wise Relevance Propagation)

Backpropagation technique that redistributes the final prediction of the neural network through its layers down to the input features, quantifying their individual contribution.


DeepLIFT

Relevance attribution method for deep neural networks that compares each neuron's activation to its reference state, calculating contributions by difference rather than gradients.


Integrated Gradients

Attribution technique that integrates gradients along a path from a baseline reference to the current input, ensuring axiomatic properties like sensitivity and implementation invariance.
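The path integral is typically approximated with a Riemann sum. A minimal sketch on a toy differentiable model with a hand-written gradient (model and baseline are illustrative assumptions; real use computes gradients via autodiff):

```python
import numpy as np

def f(x):
    # toy differentiable model: f(x) = x0^2 + 2*x1
    return x[0]**2 + 2*x[1]

def grad_f(x):
    # analytic gradient of the toy model
    return np.array([2*x[0], 2.0])

def integrated_gradients(x, baseline, grad, steps=200):
    # midpoint Riemann sum of the gradient along the straight-line path
    # from the baseline to the input
    alphas = (np.arange(steps) + 0.5) / steps
    total = np.zeros_like(x)
    for a in alphas:
        total += grad(baseline + a * (x - baseline))
    return (x - baseline) * total / steps

x = np.array([3.0, 1.0])
baseline = np.zeros(2)
attr = integrated_gradients(x, baseline, grad_f)
# completeness axiom: the attributions sum to f(x) - f(baseline)
```

The completeness check at the end is one of the axiomatic properties the definition mentions.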


Occlusion Sensitivity

Local explanation approach that systematically masks regions of the input and observes the impact on prediction, identifying critical areas for the model's decision.
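A minimal 1-D sketch: slide a mask over the input and record how much the score drops at each position (the toy scorer and window size are illustrative assumptions; for images the same loop runs over 2-D patches):

```python
import numpy as np

def model(x):
    # toy scorer: only positions 3-5 of the input actually matter
    w = np.zeros(10)
    w[3:6] = 1.0
    return float(w @ x)

def occlusion_map(x, window=3, fill=0.0):
    # slide a mask over the input and record the drop in the score
    base = model(x)
    heat = np.zeros(len(x) - window + 1)
    for i in range(len(heat)):
        masked = x.copy()
        masked[i:i+window] = fill
        heat[i] = base - model(masked)
    return heat

x = np.ones(10)
heat = occlusion_map(x)
# the largest drop occurs when the mask covers exactly positions 3-5
```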


Influence Functions

Analytical technique that estimates how the model's parameters, and hence its predictions, would change if a specific training point were removed or upweighted, identifying the training data most influential for a given prediction.


ICE (Individual Conditional Expectation)

Method visualizing how a model's prediction changes for an individual observation when the value of a feature varies, revealing heterogeneous effects and interactions.
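A minimal sketch of how one ICE curve is built (the interaction model and the two example rows are illustrative assumptions chosen so their curves disagree, which is exactly the heterogeneity ICE is meant to expose):

```python
import numpy as np

def model(X):
    # toy model with an interaction term: prediction = x0 * x1
    return X[:, 0] * X[:, 1]

def ice_curve(x_row, feature, grid, predict):
    # hold every other feature of this one observation fixed and
    # sweep the chosen feature over the grid
    rows = np.tile(x_row, (len(grid), 1))
    rows[:, feature] = grid
    return predict(rows)

grid = np.linspace(0.0, 1.0, 5)
curve_a = ice_curve(np.array([0.0, 2.0]), 0, grid, model)   # slope +2
curve_b = ice_curve(np.array([0.0, -1.0]), 0, grid, model)  # slope -1
# the two curves disagree: the effect of x0 is heterogeneous across rows,
# which an averaged partial-dependence plot would hide
```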


Kernel SHAP

SHAP variant using a kernel approach to estimate Shapley values without enumerating all coalitions, applicable to any machine learning model.
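For intuition, here is the quantity Kernel SHAP estimates, computed by brute-force enumeration of all coalitions on a tiny toy model (Kernel SHAP itself exists precisely to avoid this exponential loop; the model and baseline are illustrative assumptions):

```python
import numpy as np
from itertools import combinations
from math import factorial

def model(z):
    return 3*z[0] + 2*z[1]*z[2]        # linear term plus an interaction

def coalition_value(x, baseline, S):
    # features in coalition S take their real values; the rest stay at baseline
    z = baseline.copy()
    for j in S:
        z[j] = x[j]
    return model(z)

def exact_shapley(x, baseline):
    # brute-force Shapley values: weighted marginal contributions
    # over every coalition not containing feature i
    n = len(x)
    phi = np.zeros(n)
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for r in range(n):
            for S in combinations(others, r):
                w = factorial(r) * factorial(n - r - 1) / factorial(n)
                phi[i] += w * (coalition_value(x, baseline, S + (i,))
                               - coalition_value(x, baseline, S))
    return phi

x, baseline = np.ones(3), np.zeros(3)
phi = exact_shapley(x, baseline)
# the interaction 2*z1*z2 is split evenly between features 1 and 2
```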


Tree SHAP

Optimized implementation of SHAP designed specifically for tree-based models; it computes exact Shapley values in polynomial time by exploiting the tree structure.


Local Fidelity

Measure of how accurately a local explanation model reproduces the original model's predictions in the neighborhood of a specific instance; high local fidelity is crucial for trusting the explanation.
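A minimal sketch, measuring fidelity as R² between a surrogate and a black box over a sampled Gaussian neighborhood (the black box, the linear surrogate, and the neighborhood scales are all illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

def black_box(X):
    return np.sin(X[:, 0])

def surrogate(X):
    # linear surrogate of sin around x = 0 (sin t ~ t)
    return X[:, 0]

def local_fidelity(x, f, g, scale=0.2, n=2000):
    # R^2 of the surrogate against the black box over a Gaussian
    # neighborhood of the instance being explained
    Z = x + rng.normal(scale=scale, size=(n, len(x)))
    y, y_hat = f(Z), g(Z)
    ss_res = np.sum((y - y_hat)**2)
    ss_tot = np.sum((y - y.mean())**2)
    return 1.0 - ss_res / ss_tot

near = local_fidelity(np.zeros(1), black_box, surrogate)            # tight neighborhood
far = local_fidelity(np.zeros(1), black_box, surrogate, scale=3.0)  # wide neighborhood
# fidelity degrades as the neighborhood widens and sin stops looking linear
```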


Feature Perturbation

Local analysis technique systematically modifying input features to observe prediction changes, identifying the most sensitive features for a given decision.


Local Feature Attribution

Process quantifying the contribution of each feature to a specific prediction, in contrast to global feature importance, which is computed over the entire dataset.


Decision Boundary Visualization

Graphical method showing how the model separates classes around a specific prediction, helping to understand local decision mechanisms and prediction robustness.


Local Linear Approximation

Technique locally approximating a non-linear model with a simple linear model around a prediction point, facilitating interpretation of complex decisions.
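A minimal sketch: sample a tight neighborhood around the prediction point and fit a linear model by least squares, whose slopes then approximate the local gradient of the complex model (the black box and the neighborhood scale are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)

def black_box(X):
    # non-linear model: sin(x0) + x1^2
    return np.sin(X[:, 0]) + X[:, 1]**2

def local_linear_slopes(x, predict, scale=0.1, n=2000):
    # sample a tight Gaussian neighborhood of x and fit a linear model
    # by least squares; its slopes approximate the local gradient
    Z = x + rng.normal(scale=scale, size=(n, len(x)))
    A = np.column_stack([Z - x, np.ones(n)])
    coef, *_ = np.linalg.lstsq(A, predict(Z), rcond=None)
    return coef[:-1]

x = np.array([0.0, 1.0])
slopes = local_linear_slopes(x, black_box)
# true gradient at x is (cos 0, 2*1) = (1, 2)
```

Weighting the samples by proximity to x, as LIME does, is the natural next refinement of this sketch.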


Local Permutation Importance

Variant of permutation importance that evaluates the impact of randomizing each feature on a specific prediction rather than on the entire dataset.
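A minimal sketch: for a single instance, replace each feature in turn with random draws from background data and average the absolute change in that one prediction (the toy model and background distribution are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(2)

def model(x):
    # toy model: the prediction leans heavily on feature 0
    return 5*x[0] + 0.1*x[1]

def local_permutation_importance(x, X_background, predict, n_repeats=200):
    # for this single instance, randomize each feature from the background
    # data and average the resulting change in the prediction
    base = predict(x)
    imp = np.zeros(len(x))
    for j in range(len(x)):
        total = 0.0
        for _ in range(n_repeats):
            x_perm = x.copy()
            x_perm[j] = rng.choice(X_background[:, j])
            total += abs(predict(x_perm) - base)
        imp[j] = total / n_repeats
    return imp

X_bg = rng.normal(size=(500, 2))
imp = local_permutation_importance(np.array([1.0, 1.0]), X_bg, model)
# feature 0 dominates this particular prediction
```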
