AI Glossary
The complete dictionary of Artificial Intelligence
Anchors
Local explanation approach that identifies high-precision IF-THEN rules (anchors) which locally "anchor" the prediction: as long as the rule's conditions hold, changes to the features it does not cover leave the prediction unchanged with high probability.
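Below, a minimal sketch of the precision check at the heart of Anchors on a toy sklearn model; the full algorithm (Ribeiro et al., 2018) additionally searches the rule space with a bandit-based beam search. The candidate rule and sample count here are illustrative choices, not outputs of that search.

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

# Illustrative sketch: estimate the precision of one candidate anchor rule by
# resampling the features it does NOT fix. The choice of anchored features is
# an assumption; the real algorithm discovers them automatically.
X, y = load_iris(return_X_y=True)
model = RandomForestClassifier(random_state=0).fit(X, y)

instance = X[0]
anchored = [2, 3]                    # candidate rule: hold petal length/width fixed
target = model.predict([instance])[0]

rng = np.random.default_rng(0)
samples = X[rng.integers(0, len(X), size=1000)].copy()
samples[:, anchored] = instance[anchored]        # enforce the rule
precision = np.mean(model.predict(samples) == target)
print(f"anchor precision: {precision:.3f}")      # high precision = good anchor
```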
Counterfactual Explanations
Method that generates minimal hypothetical scenarios showing how input features would have to change to obtain a different prediction, thereby revealing the model's decision boundaries.
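A greedy, illustration-only sketch of counterfactual search follows: nudge one feature at a time in whichever direction most lowers the probability of the original class, until the prediction flips. The step size and the use of standardized units are assumptions, not part of any particular published method.

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

# Minimal greedy counterfactual sketch (not a production method).
X, y = load_breast_cancer(return_X_y=True)
X = StandardScaler().fit_transform(X)
model = LogisticRegression(max_iter=1000).fit(X, y)

x = X[0].copy()
original = model.predict([x])[0]
step = 0.05                                   # illustrative step size
for _ in range(1000):                         # safety cap on greedy steps
    if model.predict([x])[0] != original:
        break
    moves = [(j, d) for j in range(len(x)) for d in (-step, step)]
    Z = np.tile(x, (len(moves), 1))
    for k, (j, d) in enumerate(moves):
        Z[k, j] += d
    # apply the single move that most reduces the original-class probability
    j, d = moves[np.argmin(model.predict_proba(Z)[:, original])]
    x[j] += d

changed = np.nonzero(~np.isclose(x, X[0]))[0]
print(f"flipped prediction by changing {len(changed)} features:", changed)
```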
Local Surrogate Models
Simplified models trained to approximate the behavior of a complex model only in the neighborhood of a specific prediction, providing locally interpretable explanations; LIME is the best-known example.
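A LIME-style sketch under simplifying assumptions (Gaussian sampling around the instance, an arbitrary kernel width of 0.3 standard deviations): sample a neighborhood, weight samples by proximity, and read the explanation off a weighted linear model's coefficients.

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import Ridge

# LIME-style local surrogate: the Ridge coefficients are the explanation.
X, y = load_breast_cancer(return_X_y=True)
black_box = GradientBoostingClassifier(random_state=0).fit(X, y)

x0 = X[0]
rng = np.random.default_rng(0)
neighborhood = x0 + rng.normal(scale=X.std(axis=0) * 0.3, size=(500, X.shape[1]))
preds = black_box.predict_proba(neighborhood)[:, 1]    # explain class-1 probability

dist = np.linalg.norm((neighborhood - x0) / X.std(axis=0), axis=1)
weights = np.exp(-(dist ** 2) / 2.0)                   # Gaussian proximity kernel

surrogate = Ridge(alpha=1.0).fit(neighborhood, preds, sample_weight=weights)
top = np.argsort(np.abs(surrogate.coef_))[::-1][:5]
print("locally most influential features:", top)
```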
LRP (Layer-wise Relevance Propagation)
Backpropagation technique that redistributes the neural network's output score through its layers down to the input features, quantifying each feature's individual contribution.
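A hand-rolled sketch of the LRP-ε rule on a tiny random two-layer ReLU network (the weights and the ε stabilizer are illustrative): relevance starts at the output and flows backwards in proportion to each input's share of every neuron's pre-activation, so the input relevances sum approximately to the output score.

```python
import numpy as np

# LRP-epsilon sketch on a toy network; no framework required.
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(4, 6)), np.zeros(6)    # input 4 -> hidden 6
W2, b2 = rng.normal(size=(6, 1)), np.zeros(1)    # hidden 6 -> output 1

def lrp_step(a, W, b, R, eps=1e-6):
    """Redistribute relevance R from a layer's outputs back to its inputs a."""
    z = a @ W + b                                 # pre-activations
    s = R / (z + eps * np.where(z >= 0, 1.0, -1.0))  # stabilized shares
    return a * (s @ W.T)                          # R_i = a_i * sum_j w_ij s_j

x = rng.normal(size=4)
h = np.maximum(0, x @ W1 + b1)                    # forward pass
out = h @ W2 + b2

R = lrp_step(h, W2, b2, out)                      # output -> hidden
R = lrp_step(x, W1, b1, R)                        # hidden -> input
print("input relevances:", R, "| sum vs. output:", R.sum(), out[0])
```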
DeepLIFT
Relevance attribution method for deep neural networks that compares each neuron's activation to its reference state, calculating contributions by difference rather than gradients.
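A deliberately reduced illustration of the core idea on a single linear unit, where DeepLIFT's rescale rule has a closed form: contributions are (x − x_ref)·w, computed from differences to the reference rather than from gradients, and they sum exactly to the output difference. The all-zeros baseline is an arbitrary choice of reference state.

```python
import numpy as np

# DeepLIFT's difference-from-reference idea, shown on one linear unit.
rng = np.random.default_rng(0)
w = rng.normal(size=5)
x = rng.normal(size=5)
x_ref = np.zeros(5)                        # reference input (illustrative choice)

delta_out = w @ x - w @ x_ref              # change in output vs. reference
contrib = (x - x_ref) * w                  # per-feature contributions, linear case
print(contrib, "| sum:", contrib.sum(), "| delta_out:", delta_out)
```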
Integrated Gradients
Attribution technique that integrates gradients along a path from a baseline reference to the current input, ensuring axiomatic properties like sensitivity and implementation invariance.
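A sketch on a toy differentiable model f(x) = σ(w·x) with a hand-written gradient, so no autodiff framework is needed; attributions average the gradient along the straight path from a baseline (here zeros, an illustrative choice) to the input, and satisfy the completeness axiom up to Riemann-sum error.

```python
import numpy as np

# Integrated Gradients on a toy model with an analytic gradient.
rng = np.random.default_rng(0)
w = rng.normal(size=4)
sigmoid = lambda t: 1 / (1 + np.exp(-t))
f = lambda x: sigmoid(w @ x)
grad_f = lambda x: sigmoid(w @ x) * (1 - sigmoid(w @ x)) * w

x, baseline, steps = rng.normal(size=4), np.zeros(4), 200
alphas = np.linspace(0, 1, steps)
grads = np.array([grad_f(baseline + a * (x - baseline)) for a in alphas])
ig = (x - baseline) * grads.mean(axis=0)   # Riemann approximation of the integral

# Completeness: attributions sum to f(x) - f(baseline)
print(ig, "| sum:", ig.sum(), "| f(x)-f(baseline):", f(x) - f(baseline))
```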
Occlusion Sensitivity
Local explanation approach that systematically masks regions of the input and observes the impact on prediction, identifying critical areas for the model's decision.
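A sketch on the 8x8 sklearn digit images: slide a small patch over the image and record how much the predicted-class probability drops at each position. The 2x2 patch size and the gray fill value are illustrative choices.

```python
import numpy as np
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression

# Occlusion sensitivity: a large probability drop marks a critical region.
digits = load_digits()
X, y = digits.data / 16.0, digits.target
model = LogisticRegression(max_iter=2000).fit(X, y)

img = X[0].reshape(8, 8)
cls = model.predict([X[0]])[0]
base_prob = model.predict_proba([X[0]])[0][cls]

heatmap = np.zeros((7, 7))
for r in range(7):
    for c in range(7):
        occluded = img.copy()
        occluded[r:r+2, c:c+2] = 0.5              # mask a 2x2 patch
        p = model.predict_proba([occluded.ravel()])[0][cls]
        heatmap[r, c] = base_prob - p             # sensitivity at this position
print(np.round(heatmap, 2))
```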
Influence Functions
Analytical technique estimating how the model's parameters and predictions would change if a specific training point were removed or upweighted, identifying the training data most influential for a given prediction.
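A sketch for logistic regression, following the Koh & Liang (2017) formulation: each training point is scored by -∇ℓ_test⊤ H⁻¹ ∇ℓ_train with a damped Hessian. The damping constant, the reuse of a training point as the "test" point, ignoring sklearn's default L2 penalty in the Hessian, and using the scores only for ranking are all simplifying assumptions.

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

# Influence-function sketch for logistic regression (simplified).
X, y = load_breast_cancer(return_X_y=True)
X = StandardScaler().fit_transform(X)
Xa = np.hstack([X, np.ones((len(X), 1))])         # absorb the bias term
model = LogisticRegression(max_iter=1000).fit(X, y)
w = np.append(model.coef_[0], model.intercept_)

p = 1 / (1 + np.exp(-Xa @ w))                     # predicted probabilities
grads = (p - y)[:, None] * Xa                     # per-point loss gradients
H = (Xa * (p * (1 - p))[:, None]).T @ Xa / len(X) + 1e-3 * np.eye(Xa.shape[1])

test_idx = 0                                      # a training point stands in as "test"
g_test = grads[test_idx]
influence = -grads @ np.linalg.solve(H, g_test)   # one score per training point
print("most influential training points:", np.argsort(np.abs(influence))[::-1][:5])
```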
ICE (Individual Conditional Expectation)
Method visualizing how a model's prediction changes for an individual observation when the value of a feature varies, revealing heterogeneous effects and interactions.
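A sketch of ICE curves with a random forest on the sklearn diabetes data: sweep one feature over a grid for a few individual rows while holding their other features fixed. The choice of feature and grid size are illustrative; normally each curve would be drawn as one line in a plot.

```python
import numpy as np
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor

# ICE: one prediction curve per individual observation.
X, y = load_diabetes(return_X_y=True)
model = RandomForestRegressor(random_state=0).fit(X, y)

feature = 2                                   # 'bmi' in the diabetes dataset
grid = np.linspace(X[:, feature].min(), X[:, feature].max(), 20)
for i in range(3):                            # three individual ICE curves
    rows = np.tile(X[i], (len(grid), 1))
    rows[:, feature] = grid                   # vary only this feature
    curve = model.predict(rows)
    print(f"observation {i}: {np.round(curve[:5], 1)} ...")
```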
Kernel SHAP
SHAP variant that estimates Shapley values by fitting a weighted local linear regression over sampled feature coalitions rather than enumerating all of them, making it applicable to any machine learning model.
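A usage sketch with the `shap` package (assumed installed): KernelExplainer needs only a prediction function and a background sample, so it also works for models such as an SVM that Tree SHAP cannot handle; `nsamples` bounds how many coalitions are sampled.

```python
import numpy as np
import shap                                    # assumes the `shap` package is installed
from sklearn.datasets import load_breast_cancer
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Model-agnostic Kernel SHAP: only a prediction function is required.
X, y = load_breast_cancer(return_X_y=True)
X = StandardScaler().fit_transform(X)
model = SVC(probability=True, random_state=0).fit(X, y)

background = shap.sample(X, 50)                # small background set for speed
explainer = shap.KernelExplainer(lambda d: model.predict_proba(d)[:, 1], background)
shap_values = explainer.shap_values(X[:1], nsamples=200)
print(shap_values)
```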
Tree SHAP
Optimized implementation of SHAP specifically designed for tree-based models, calculating exact Shapley values in polynomial time thanks to the tree structure.
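A usage sketch with the same `shap` package: because TreeExplainer exploits the tree structure directly, it returns exact Shapley values in polynomial time, with no background sampling or coalition enumeration required.

```python
import shap                                    # assumes the `shap` package is installed
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

# Exact, fast Shapley values for a tree ensemble.
X, y = load_breast_cancer(return_X_y=True)
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X[:1])
print(shap_values)
```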
Local Fidelity
Measure evaluating the accuracy with which a local explanation model reproduces the predictions of the original model in the neighborhood of a specific instance, crucial for trust in the explanation.
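A sketch of one way to measure it (the specific metric is an assumption, not a standard): fit a LIME-style surrogate around an instance, then compute a proximity-weighted R² between surrogate and black-box predictions on fresh neighborhood samples.

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import Ridge
from sklearn.metrics import r2_score

# Local fidelity as weighted R^2 on held-out neighborhood samples.
X, y = load_breast_cancer(return_X_y=True)
black_box = GradientBoostingClassifier(random_state=0).fit(X, y)

x0, rng, scale = X[0], np.random.default_rng(0), X.std(axis=0) * 0.3
def neighborhood(n):
    Z = x0 + rng.normal(scale=scale, size=(n, X.shape[1]))
    w = np.exp(-np.linalg.norm((Z - x0) / X.std(axis=0), axis=1) ** 2 / 2)
    return Z, black_box.predict_proba(Z)[:, 1], w

Z_fit, p_fit, w_fit = neighborhood(500)
surrogate = Ridge(alpha=1.0).fit(Z_fit, p_fit, sample_weight=w_fit)

Z_val, p_val, w_val = neighborhood(500)                 # fresh neighbors
fidelity = r2_score(p_val, surrogate.predict(Z_val), sample_weight=w_val)
print(f"local fidelity (weighted R^2): {fidelity:.3f}")
```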
Feature Perturbation
Local analysis technique systematically modifying input features to observe prediction changes, identifying the most sensitive features for a given decision.
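A sketch of per-feature perturbation around one instance: jitter each feature in turn by a fraction of its dataset standard deviation and record the mean absolute change in the prediction. The noise scale and sample count are illustrative choices.

```python
import numpy as np
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor

# Per-feature sensitivity of a single prediction under random jitter.
X, y = load_diabetes(return_X_y=True)
model = RandomForestRegressor(random_state=0).fit(X, y)

x0 = X[0]
base = model.predict([x0])[0]
rng = np.random.default_rng(0)

sensitivity = []
for j in range(X.shape[1]):
    Z = np.tile(x0, (100, 1))
    Z[:, j] += rng.normal(scale=X[:, j].std() * 0.5, size=100)
    sensitivity.append(np.mean(np.abs(model.predict(Z) - base)))
print("most sensitive features for this decision:", np.argsort(sensitivity)[::-1][:3])
```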
Local Feature Attribution
Process quantifying the contribution of each feature to a specific prediction, as opposed to global importance, which aggregates over the entire dataset.
Decision Boundary Visualization
Graphical method showing how the model separates classes around a specific prediction, helping to understand local decision mechanisms and prediction robustness.
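A sketch on a 2D toy problem: evaluate the model on a fine grid centered on the instance of interest and draw the predicted-probability contours around it. The grid extent and resolution are illustrative choices.

```python
import numpy as np
import matplotlib.pyplot as plt
from sklearn.datasets import make_moons
from sklearn.svm import SVC

# Contour plot of the decision boundary in the neighborhood of one instance.
X, y = make_moons(noise=0.2, random_state=0)
model = SVC(probability=True, random_state=0).fit(X, y)

x0, pad = X[0], 1.0
xx, yy = np.meshgrid(np.linspace(x0[0] - pad, x0[0] + pad, 200),
                     np.linspace(x0[1] - pad, x0[1] + pad, 200))
grid = np.c_[xx.ravel(), yy.ravel()]
probs = model.predict_proba(grid)[:, 1].reshape(xx.shape)

plt.contourf(xx, yy, probs, levels=20, cmap="RdBu", alpha=0.7)
plt.contour(xx, yy, probs, levels=[0.5], colors="black")    # the boundary
plt.scatter(*x0, c="yellow", edgecolors="black", zorder=3)  # the instance
plt.title("Decision boundary around one prediction")
plt.show()
```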
Local Linear Approximation
Technique locally approximating a non-linear model with a simple linear model around a prediction point, facilitating interpretation of complex decisions.
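A sketch via central finite differences: estimate the black box's gradient at one point, yielding the tangent model f(x0) + g·(x − x0). An RBF-kernel SVR is used because its predictions are smooth, so finite differences are meaningful; the step size is an illustrative choice.

```python
import numpy as np
from sklearn.datasets import load_diabetes
from sklearn.svm import SVR

# First-order (tangent) local linear approximation of a smooth black box.
X, y = load_diabetes(return_X_y=True)
model = SVR().fit(X, y)

x0, h = X[0], 1e-3
f0 = model.predict([x0])[0]
g = np.zeros(len(x0))
for j in range(len(x0)):
    e = np.zeros(len(x0)); e[j] = h
    g[j] = (model.predict([x0 + e])[0] - model.predict([x0 - e])[0]) / (2 * h)

x_near = x0 + 0.005                           # a nearby point
print("model:", model.predict([x_near])[0],
      "| linear approx:", f0 + g @ (x_near - x0))
```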
Local Permutation Importance
Variant of permutation importance that evaluates the impact of randomizing each feature on a specific prediction rather than on the entire dataset.
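A sketch for a single instance: replace each feature of x0 with values drawn from that feature's empirical distribution and measure the average absolute change in this one prediction, rather than the dataset-wide error increase that global permutation importance measures. The number of resamples is an illustrative choice.

```python
import numpy as np
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor

# Permutation importance restricted to one prediction.
X, y = load_diabetes(return_X_y=True)
model = RandomForestRegressor(random_state=0).fit(X, y)

x0 = X[0]
base = model.predict([x0])[0]
rng = np.random.default_rng(0)

importance = []
for j in range(X.shape[1]):
    Z = np.tile(x0, (200, 1))
    Z[:, j] = rng.choice(X[:, j], size=200)       # resample from the marginal
    importance.append(np.mean(np.abs(model.predict(Z) - base)))
print("locally most important features:", np.argsort(importance)[::-1][:3])
```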