AI Glossary
A complete glossary of artificial intelligence
SHAP Global Values
The aggregation of SHAP (SHapley Additive exPlanations) values across the entire dataset to determine the average impact of each feature on the model's output, providing a consistent, theoretically grounded measure of feature importance.
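As a minimal sketch of the aggregation step: assuming a matrix of per-instance SHAP values has already been computed (in practice it would come from a SHAP explainer; the values below are made up for illustration), the global importance of each feature is typically the mean absolute SHAP value across all instances.

```python
import numpy as np

# Hypothetical per-instance SHAP values: rows = samples, columns = features.
# In practice these would be produced by a SHAP explainer on a trained model.
shap_values = np.array([
    [ 0.40, -0.10, 0.05],
    [-0.35,  0.20, 0.02],
    [ 0.50, -0.15, 0.01],
])

# Global importance: average absolute contribution of each feature
# across the whole dataset (signs cancel if you average raw values).
global_importance = np.abs(shap_values).mean(axis=0)

# Features ranked from most to least important.
ranking = np.argsort(global_importance)[::-1]
```

Taking the absolute value before averaging matters: a feature that pushes predictions strongly up for some instances and strongly down for others would otherwise appear unimportant.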
LIME Global
An extension of LIME (Local Interpretable Model-agnostic Explanations) that approximates the model's global behavior by generating local explanations for many sampled instances and combining them to reveal the model's general trends.
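The following numpy-only sketch illustrates the idea under simplifying assumptions: a toy black-box function stands in for a trained model, each local explanation is a proximity-weighted linear fit around one instance (the core of LIME), and the global view averages the absolute local coefficients over many instances.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for an arbitrary trained black-box model.
def model(X):
    return 3.0 * X[:, 0] + 0.5 * X[:, 1]

def local_linear_explanation(x0, n_samples=200, scale=0.1):
    """LIME-style local surrogate: perturb around x0 and fit a
    proximity-weighted linear model to the black-box outputs."""
    X = x0 + rng.normal(scale=scale, size=(n_samples, x0.size))
    y = model(X)
    # Closer perturbations get higher weight (Gaussian kernel).
    w = np.exp(-np.sum((X - x0) ** 2, axis=1) / (2 * scale ** 2))
    coef, *_ = np.linalg.lstsq(X * np.sqrt(w)[:, None],
                               y * np.sqrt(w), rcond=None)
    return coef

# "Global LIME": average the absolute local coefficients over many instances.
instances = rng.uniform(-1, 1, size=(20, 2))
global_importance = np.mean(
    [np.abs(local_linear_explanation(x)) for x in instances], axis=0
)
```

Real global LIME variants (e.g. submodular pick) select a small, diverse set of representative instances rather than averaging all of them, but the aggregation principle is the same.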
Model-agnostic Global Explanations
A set of interpretation techniques that can be applied to any type of model, regardless of its complexity or underlying algorithm, to explain its global behavior without requiring access to its internal parameters.
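A classic example of such a technique is permutation importance: it needs only the model's predictions, not its internals. The sketch below uses a toy function in place of a trained model; the procedure is identical for any predictor.

```python
import numpy as np

rng = np.random.default_rng(1)

# Any black-box predictor works here; a simple function stands in.
def model(X):
    return 2.0 * X[:, 0] + X[:, 1] ** 2

X = rng.normal(size=(500, 3))
y = model(X)

def permutation_importance(model, X, y):
    """Model-agnostic global importance: how much does the error
    increase when one feature's values are shuffled?"""
    base_error = np.mean((model(X) - y) ** 2)
    scores = []
    for j in range(X.shape[1]):
        Xp = X.copy()
        Xp[:, j] = rng.permutation(Xp[:, j])  # destroy feature j's signal
        scores.append(np.mean((model(Xp) - y) ** 2) - base_error)
    return np.array(scores)

importance = permutation_importance(model, X, y)
```

Feature 2 never enters the model, so shuffling it leaves the error unchanged, while shuffling either used feature degrades the predictions.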
Functional ANOVA
A variance decomposition method that attributes the variance of a model's output to the main effects of each feature and their interactions, providing a global and hierarchical view of variable contributions.
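A Monte-Carlo sketch of the first level of this decomposition, under the assumption of independent inputs: the main effect of feature j is the variance of the conditional expectation E[f | x_j] (estimated here by binning), and whatever variance remains unexplained by main effects is attributed to interactions. The toy function below has known main effects and one interaction term.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy model: two main effects plus an x0*x1 interaction.
def f(x0, x1):
    return x0 + 2.0 * x1 + x0 * x1

n = 200_000
x0 = rng.normal(size=n)
x1 = rng.normal(size=n)
y = f(x0, x1)
total_var = y.var()

def main_effect_variance(y, x, bins=50):
    """Estimate Var(E[f | x]) by binning x and averaging y per bin."""
    edges = np.quantile(x, np.linspace(0, 1, bins + 1))
    idx = np.clip(np.digitize(x, edges[1:-1]), 0, bins - 1)
    means = np.array([y[idx == b].mean() for b in range(bins)])
    weights = np.array([(idx == b).mean() for b in range(bins)])
    return np.sum(weights * (means - y.mean()) ** 2)

v0 = main_effect_variance(y, x0)          # analytically Var(x0)   = 1
v1 = main_effect_variance(y, x1)          # analytically Var(2*x1) = 4
interaction = total_var - v0 - v1         # analytically Var(x0*x1) = 1
```

For this model the decomposition is exact in expectation: of the total variance 6, fraction 1/6 comes from x0, 4/6 from x1, and 1/6 from their interaction.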
DeepLIFT Global Importance
An interpretation method for deep neural networks that assigns global importance scores by aggregating, across the entire dataset, per-input contributions computed from the difference between each activation and its value at a reference input.
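To illustrate the difference-from-reference idea without a full network: for a purely linear model, DeepLIFT's rescale rule reduces to contribution = weight × (input − reference), and the global score is the mean absolute contribution over the dataset. The weights and reference below are made up for illustration (in practice attributions would come from a DeepLIFT implementation such as the one in Captum).

```python
import numpy as np

rng = np.random.default_rng(3)

# Linear "network" y = X @ w; DeepLIFT contributions then reduce to
# w_j * (x_j - ref_j), the weighted difference from the reference input.
w = np.array([2.0, -1.0, 0.0])
X = rng.normal(size=(100, 3))
reference = np.zeros(3)  # the baseline input is a modelling choice

contributions = (X - reference) * w              # per-instance attributions
global_importance = np.abs(contributions).mean(axis=0)
```

The attributions satisfy DeepLIFT's completeness property: for each instance they sum to the output difference from the reference, here `X @ w - reference @ w`.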
Integrated Gradients Global
The application of the integrated gradients method across a dataset to obtain average feature attributions, revealing which variables have the greatest cumulative impact on the model's predictions globally.
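A self-contained sketch: a toy differentiable function with a hand-written gradient stands in for a neural network and autodiff, integrated gradients are approximated by a Riemann sum along the straight path from a baseline to each input, and the global attribution is the mean absolute attribution over the dataset.

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy differentiable model and its analytic gradient
# (stand-ins for a neural network and automatic differentiation).
def f(x):
    return x[..., 0] ** 2 + 3.0 * x[..., 1]

def grad_f(x):
    g = np.zeros_like(x)
    g[..., 0] = 2.0 * x[..., 0]
    g[..., 1] = 3.0
    return g

def integrated_gradients(x, baseline, steps=100):
    """Midpoint Riemann-sum approximation of integrated gradients
    along the straight-line path from baseline to x."""
    alphas = (np.arange(steps) + 0.5) / steps
    path = baseline + alphas[:, None] * (x - baseline)
    avg_grad = grad_f(path).mean(axis=0)
    return (x - baseline) * avg_grad

baseline = np.zeros(2)
X = rng.normal(size=(50, 2))

# Global attribution: average absolute IG attribution over the dataset.
attributions = np.array([integrated_gradients(x, baseline) for x in X])
global_attr = np.abs(attributions).mean(axis=0)
```

Integrated gradients' completeness axiom holds per instance: each attribution vector sums to `f(x) - f(baseline)`, so the global averages account for the model's full output change relative to the baseline.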