
AI Glossary

The complete dictionary of artificial intelligence

162 categories · 2,032 subcategories · 23,060 terms
📖 terms

Global Feature Importance

Interpretation method that evaluates the average impact of each predictive variable on the entire model, allowing features to be prioritized according to their overall contribution to predictions.

Global SHAP Values

Game theory-based approach that quantifies the average contribution of each feature to the model's predictions across the entire dataset, ensuring mathematical consistency and additivity properties.
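
A brute-force sketch of the idea (the tiny linear model, baseline, and dataset are invented for illustration), enumerating every feature coalition exactly as the Shapley formula prescribes and then aggregating per-instance values into a global importance:

```python
from itertools import combinations
from math import factorial

# Toy linear model so the exact Shapley values are easy to verify by hand.
weights = [2.0, -1.0, 0.5]
baseline = [0.0, 0.0, 0.0]   # reference input used for "absent" features

def model(x):
    return sum(w * v for w, v in zip(weights, x))

def shapley_values(x):
    """Exact Shapley values by enumerating all feature coalitions."""
    n = len(x)
    phi = [0.0] * n
    for j in range(n):
        others = [k for k in range(n) if k != j]
        for size in range(n):
            for S in combinations(others, size):
                weight = factorial(size) * factorial(n - size - 1) / factorial(n)
                with_j = [x[k] if k in S or k == j else baseline[k] for k in range(n)]
                without = [x[k] if k in S else baseline[k] for k in range(n)]
                phi[j] += weight * (model(with_j) - model(without))
    return phi

# Global SHAP importance: mean |phi_j| across a (tiny) dataset
dataset = [[1.0, 1.0, 1.0], [0.5, -1.0, 2.0]]
global_imp = [sum(abs(shapley_values(x)[j]) for x in dataset) / len(dataset)
              for j in range(3)]
```

For a linear model each value reduces to w_j · (x_j − baseline_j), which makes the output checkable by hand; practical libraries such as `shap` approximate this sum efficiently rather than enumerating coalitions, whose count grows exponentially with the number of features.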

Partial Dependence Plot (PDP)

Visualization that shows the average marginal effect of one or two variables on the model's prediction, marginalizing over the remaining variables to reveal global relationships.
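
The computation behind the plot is simple: sweep one feature over a grid and average the predictions over the data. A minimal sketch, assuming a hypothetical two-feature model and random data:

```python
import random

random.seed(1)

def model(x0, x1):
    return x0 ** 2 + 0.5 * x1   # assumed black box: quadratic in x0, linear in x1

data = [(random.uniform(-1, 1), random.uniform(-1, 1)) for _ in range(500)]

def partial_dependence(grid, feature=0):
    """Average prediction while sweeping one feature over a grid,
    marginalizing the other feature over the observed data."""
    pd = []
    for v in grid:
        if feature == 0:
            preds = [model(v, x1) for _, x1 in data]
        else:
            preds = [model(x0, v) for x0, _ in data]
        pd.append(sum(preds) / len(preds))
    return pd

grid = [-1.0, -0.5, 0.0, 0.5, 1.0]
pd0 = partial_dependence(grid, feature=0)
# pd0 traces roughly v**2 + 0.5 * mean(x1): the U-shape of x0's global effect
```

Plotting `grid` against `pd0` gives the PDP; the averaging step is also why PDPs mislead when features are strongly correlated, which motivates ALE below.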

Accumulated Local Effects (ALE)

Interpretation technique that accumulates local prediction differences to estimate the average effect of a feature, avoiding the correlation biases present in PDPs and providing more reliable estimates of global effects.
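
A minimal first-order ALE sketch (toy model, data, and bin count are assumptions): bin the feature, average the prediction difference across each bin using only the instances that fall in it, then accumulate and center:

```python
import random

random.seed(2)

def model(x0, x1):
    return 2.0 * x0 + x1   # linear, so the ALE curve of x0 should rise with slope ~2

data = [(random.uniform(0, 1), random.uniform(0, 1)) for _ in range(400)]

def ale_first_order(data, n_bins=5):
    """First-order ALE for feature 0: accumulate local finite differences."""
    xs = sorted(x0 for x0, _ in data)
    # bin edges at empirical quantiles of the feature
    edges = [xs[min(i * len(xs) // n_bins, len(xs) - 1)] for i in range(n_bins + 1)]
    ale, acc = [], 0.0
    for b in range(n_bins):
        lo, hi = edges[b], edges[b + 1]
        members = [(x0, x1) for x0, x1 in data if lo <= x0 <= hi]
        # local effect: prediction change across the bin, other feature held as observed
        diffs = [model(hi, x1) - model(lo, x1) for _, x1 in members]
        acc += sum(diffs) / len(diffs) if diffs else 0.0
        ale.append(acc)
    mean_ale = sum(ale) / len(ale)
    return [a - mean_ale for a in ale]   # center so the curve averages to zero

ale0 = ale_first_order(data)
```

Because each difference is taken only for instances actually observed inside the bin, ALE never evaluates the model on unrealistic feature combinations, which is exactly the failure mode of PDPs under correlated inputs.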

Global Surrogate Models

Interpretable models (such as decision trees or linear regression) trained to mimic the global behavior of a complex black-box model, offering a simplified but understandable approximation.
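
A minimal sketch of the idea, assuming an invented black box and fitting the simplest possible "decision tree" (a depth-1 stump) to its outputs rather than to the original labels:

```python
import random

random.seed(3)

def black_box(x):
    # pretend complex model: fires above an (assumed) threshold of 0.6
    return 1.0 if x > 0.6 else 0.0

X = [random.random() for _ in range(300)]
y = [black_box(x) for x in X]   # the surrogate is trained on the black box's outputs

def fit_stump(X, y):
    """Depth-1 'decision tree': pick the split threshold minimizing squared error."""
    best = None
    for t in sorted(set(X)):
        left = [yi for xi, yi in zip(X, y) if xi <= t]
        right = [yi for xi, yi in zip(X, y) if xi > t]
        if not left or not right:
            continue
        ml, mr = sum(left) / len(left), sum(right) / len(right)
        err = sum((yi - ml) ** 2 for yi in left) + sum((yi - mr) ** 2 for yi in right)
        if best is None or err < best[0]:
            best = (err, t, ml, mr)
    return best

err, threshold, left_val, right_val = fit_stump(X, y)
# The stump recovers a readable rule of the form "if x > ~0.6 then 1.0 else 0.0"
```

The surrogate's fidelity (here, its squared error against the black box's own predictions) must always be reported alongside the explanation: a simple model that mimics the black box poorly explains nothing.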

Permutation Feature Importance

Model-agnostic method that evaluates a variable's importance by measuring the degradation in model performance when that feature's values are randomly permuted, revealing its global contribution.
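
The procedure can be sketched in a few lines (the toy model and data are assumptions for illustration): measure baseline error, shuffle one column, re-measure, and report the average increase:

```python
import random

random.seed(0)

# Toy data and a hand-written "black box": y depends strongly on x0, weakly on x1.
X = [[random.random(), random.random()] for _ in range(200)]
y = [3.0 * x0 + 0.1 * x1 for x0, x1 in X]

def model(row):
    return 3.0 * row[0] + 0.1 * row[1]

def mse(X, y):
    return sum((model(r) - t) ** 2 for r, t in zip(X, y)) / len(y)

def permutation_importance(X, y, feature, n_repeats=10):
    """Mean increase in MSE when the given feature's column is shuffled."""
    base = mse(X, y)
    increases = []
    for _ in range(n_repeats):
        col = [r[feature] for r in X]
        random.shuffle(col)                      # break the feature-target link
        Xp = [r[:feature] + [v] + r[feature + 1:] for r, v in zip(X, col)]
        increases.append(mse(Xp, y) - base)
    return sum(increases) / n_repeats

imp = [permutation_importance(X, y, j) for j in range(2)]
# imp[0] should dwarf imp[1], matching the 3.0 vs 0.1 coefficients
```

Because only inputs and predictions are touched, the same function works unchanged for any model; the known caveat is that permuting one of two correlated features creates unrealistic rows and can understate their joint importance.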

Model-Agnostic Methods

Interpretation approaches that work with any type of machine learning model without requiring access to internal structure, relying solely on input-output relationships to analyze global behavior.

Global Feature Effects

Comprehensive analysis of each variable's impact on the model's predictions across the entire data space, combining direction, magnitude, and shape of the effect for holistic understanding.

ICE Curves (Individual Conditional Expectation)

Visualization that plots individual model predictions for different values of a feature, allowing observation of effect heterogeneity and aggregating this information for global understanding.
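
A minimal sketch (the model and the three instances are invented): one curve per instance, varying only the feature of interest, with the pointwise mean of the curves recovering the PDP:

```python
def model(x0, x1):
    return x0 * x1   # x0's effect depends on x1, so ICE curves will diverge

# Assumed toy instances: same x0, different x1
instances = [(0.0, -1.0), (0.0, 0.0), (0.0, 1.0)]
grid = [0.0, 0.5, 1.0]

# One ICE curve per instance: sweep x0, keep that instance's x1 fixed.
ice = [[model(v, x1) for v in grid] for _, x1 in instances]

# Averaging the ICE curves pointwise yields the partial dependence curve.
pdp = [sum(curve[i] for curve in ice) / len(ice) for i in range(len(grid))]
```

Here the PDP is flat at zero even though every individual curve has a strong slope: exactly the heterogeneity (here, an interaction) that ICE curves reveal and averaged plots hide.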

Friedman's H-statistic

Quantitative measure that evaluates the strength of interactions between variables in machine learning models, enabling identification of non-linear dependencies that globally affect predictions.
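
A sketch of the pairwise H² statistic under assumed toy conditions (two-feature model with an explicit interaction term): compare the joint partial dependence against the sum of the one-feature partial dependences, all centered and evaluated at the observed points:

```python
import random

random.seed(4)

def model(x0, x1):
    return x0 + x1 + x0 * x1   # the x0*x1 term is the interaction to detect

data = [(random.uniform(-1, 1), random.uniform(-1, 1)) for _ in range(200)]

def center(vals):
    m = sum(vals) / len(vals)
    return [v - m for v in vals]

def pd_single(v, j):
    """One-feature partial dependence at value v."""
    if j == 0:
        return sum(model(v, x1) for _, x1 in data) / len(data)
    return sum(model(x0, v) for x0, _ in data) / len(data)

# With only two features, the two-feature partial dependence is the model itself.
pd01 = center([model(a, b) for a, b in data])
pd0 = center([pd_single(a, 0) for a, _ in data])
pd1 = center([pd_single(b, 1) for _, b in data])

num = sum((f - g - h) ** 2 for f, g, h in zip(pd01, pd0, pd1))
den = sum(f ** 2 for f in pd01)
H2 = num / den   # share of joint-effect variance not explained additively
```

H² lies between 0 (purely additive effects) and 1 (pure interaction); for this model the additive part dominates, so the statistic comes out positive but well below 1.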

Global Model Visualization

Set of visualization techniques that succinctly represent the global behavior of a model, including relationships between features, decision patterns, and confidence regions.

Global Feature Contribution

Quantification of the average contribution of each feature to the difference between model predictions and a reference baseline, revealing the global influence of variables on decisions.

Model-Specific Global Interpretation

Interpretation methods specifically designed for certain types of models (such as weights in neural networks or rules in decision trees) to explain their global behavior.
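
For the simplest model-specific case (the coefficients and feature spreads below are invented), a linear model's global explanation is built in, but raw weights must be scaled by the feature's spread before comparing them:

```python
# Assumed linear-model coefficients and feature standard deviations
coefs = [0.8, -2.5, 0.1]
stds = [1.0, 0.2, 5.0]

# |coefficient| * std = effect of a one-standard-deviation change in the feature
standardized = [abs(c * s) for c, s in zip(coefs, stds)]
# Ranking by raw |coefficient| (2.5 > 0.8 > 0.1) would mis-order the features
```

The same principle, reading global structure directly out of the fitted object, applies to split rules and impurity gains in decision trees or weight matrices in neural networks, at the cost of a different method per model family.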

Global Sensitivity Analysis

Systematic study of the variation in model outputs as a function of input variations across the entire input domain, identifying the most influential factors on global behavior.
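
A crude one-at-a-time (OAT) screening sketch, with an invented three-input model: sweep each input over its domain while holding the others fixed, and rank inputs by the output range they induce. Variance-based methods (e.g. Sobol indices) are the more rigorous global approach, but the screening version fits in a few lines:

```python
def model(x0, x1, x2):
    return 10.0 * x0 + 2.0 * x1 + 0.1 * x2   # assumed system under study

def oat_range(j, grid, base=(0.5, 0.5, 0.5)):
    """Output range when input j sweeps its domain, others held at `base`."""
    outs = []
    for v in grid:
        args = list(base)
        args[j] = v
        outs.append(model(*args))
    return max(outs) - min(outs)

grid = [i / 10 for i in range(11)]            # sweep each input over [0, 1]
sensitivity = [oat_range(j, grid) for j in range(3)]
# ranks x0 as the dominant input, mirroring the 10.0 / 2.0 / 0.1 coefficients
```

OAT misses interactions because only one input moves at a time, which is precisely what the "entire input domain" requirement in the definition, and sampling-based Sobol estimators, are meant to address.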

Global Rule Extraction

Process that generates a set of interpretable rules that capture the global behavior of a complex model, transforming automated predictions into explicit and generalizable knowledge.
