AI Glossary
A Complete Dictionary of Artificial Intelligence
SHAP (SHapley Additive exPlanations) Summary Plot
A visualization that combines feature importance with feature effects: for each feature, every observation is drawn as one point, positioned by that feature's impact on the prediction (its SHAP value) and typically colored by the feature's value.
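The per-observation quantities behind such a plot are Shapley values. As a minimal illustration (the two-feature model `f` and the zero baseline below are hypothetical, not from the source), they can be computed exactly for a small model by averaging each feature's marginal contribution over all feature orderings:

```python
from itertools import permutations

def f(x):
    # hypothetical black-box model with an interaction term
    return 2 * x[0] + 3 * x[1] + x[0] * x[1]

def shapley_values(f, x, baseline):
    """Exact Shapley values by enumerating feature orderings.

    Feasible only for a handful of features; SHAP libraries approximate
    this for real models."""
    n = len(x)
    phi = [0.0] * n
    orders = list(permutations(range(n)))
    for order in orders:
        current = list(baseline)
        prev = f(current)
        for i in order:
            current[i] = x[i]          # reveal feature i
            now = f(current)
            phi[i] += now - prev       # marginal contribution of i
            prev = now
    return [p / len(orders) for p in phi]

phi = shapley_values(f, [1.0, 1.0], [0.0, 0.0])
# efficiency property: the values sum to f(x) - f(baseline)
```

A summary plot aggregates one such value per feature per observation across the whole dataset, so dense clusters reveal how a feature's impact is distributed.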
LIME (Local Interpretable Model-agnostic Explanations) Visualization
A graphical representation of the features that most influence a specific prediction, obtained by fitting a simple, interpretable surrogate model to the black-box model in the neighborhood of that prediction.
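The core mechanism can be sketched as a weighted local linear fit around one point; the one-dimensional `black_box` and the deterministic perturbation offsets below are hypothetical stand-ins for a real model and random sampling:

```python
import math

def black_box(x):
    # hypothetical opaque model: quadratic in one feature
    return x * x

def lime_1d(f, x0, offsets, kernel_width=1.0):
    """Fit a locally weighted linear surrogate y ~ a + b*x around x0.

    Samples are weighted by a Gaussian proximity kernel, so the line
    explains the model near x0, not globally."""
    xs = [x0 + d for d in offsets]
    ys = [f(x) for x in xs]
    ws = [math.exp(-((x - x0) ** 2) / kernel_width ** 2) for x in xs]
    sw = sum(ws)
    mx = sum(w * x for w, x in zip(ws, xs)) / sw
    my = sum(w * y for w, y in zip(ws, ys)) / sw
    cov = sum(w * (x - mx) * (y - my) for w, x, y in zip(ws, xs, ys)) / sw
    var = sum(w * (x - mx) ** 2 for w, x in zip(ws, xs)) / sw
    b = cov / var            # local slope = the feature's local influence
    a = my - b * mx
    return a, b

a, b = lime_1d(black_box, 2.0, [-0.5, -0.25, 0.25, 0.5])
```

Near x0 = 2 the slope of x² is 4, and the fitted `b` recovers it: the bar chart a LIME visualization shows is exactly these local coefficients, one per feature.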
Anchors Visualization
A graphical representation of high-fidelity decision rules (anchors) that locally explain a model's prediction, identifying sufficient conditions for a given prediction.
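An anchor's fidelity is measured by its precision: the fraction of perturbed samples satisfying the rule that keep the original prediction. A toy sketch, assuming a hypothetical two-feature classifier and a deterministic perturbation grid in place of random sampling:

```python
def model(x):
    # hypothetical black box: predicts 1 when the feature sum exceeds 1
    return 1 if x[0] + x[1] > 1 else 0

def anchor_precision(model, instance, fixed, samples):
    """Precision of the rule 'hold the features in `fixed` at their
    instance values': fraction of perturbed samples that keep the
    original prediction."""
    target = model(instance)
    hits = 0
    for s in samples:
        z = [instance[i] if i in fixed else s[i] for i in range(len(instance))]
        hits += model(z) == target
    return hits / len(samples)

instance = [0.9, 0.8]
# deterministic perturbation grid over [0, 1] x [0, 1]
grid = [[i / 9, j / 9] for i in range(10) for j in range(10)]
p_anchor = anchor_precision(model, instance, {0}, grid)  # rule: x0 = 0.9
p_empty = anchor_precision(model, instance, set(), grid)  # no rule at all
```

A rule is accepted as an anchor when its precision clears a threshold (commonly 0.95); here fixing x0 raises precision well above the unconstrained baseline, which is what the visualization reports per rule.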
Counterfactual Explanation Visualization
A plot that shows the minimal changes needed to an observation's features to change its prediction, illustrating the model's decision boundary.
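A minimal sketch of the search behind such a plot, assuming a hypothetical linear scoring model and a greedy one-feature strategy (real counterfactual methods optimize over several features at once and in both directions):

```python
def model(x):
    # hypothetical scoring model: positive class when the weighted sum > 0.5
    return 1 if 0.3 * x[0] + 0.7 * x[1] > 0.5 else 0

def counterfactual(model, x, step=0.01, max_steps=200):
    """Smallest single-feature increase that flips the prediction.

    Greedy sketch: tries each feature alone, in small positive steps,
    and keeps the cheapest flip found."""
    best = None
    for i in range(len(x)):
        for k in range(1, max_steps + 1):
            z = list(x)
            z[i] += k * step
            if model(z) != model(x):
                if best is None or k * step < best[1]:
                    best = (i, k * step, z)
                break
    return best  # (feature index, change magnitude, counterfactual point)

x = [0.4, 0.4]                      # predicted class 0
feat, delta, z = counterfactual(model, x)
```

The search prefers feature 1 because its larger weight means a smaller change crosses the decision boundary, which is precisely what the visualization illustrates: the shortest path from the observation to the other side of the boundary.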
Morris Sensitivity Plot
A visualization from the Morris method that assesses the elementary effects of each input parameter on the model output, ranking parameters by overall influence and by non-linearity or interaction.
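A minimal sketch of the elementary-effect computation, with a hypothetical three-input model and deterministic base points instead of the method's random trajectories:

```python
def f(x):
    # hypothetical model: strong linear x0, nonlinear x1, inert x2
    return 3 * x[0] + x[1] ** 2

def elementary_effects(f, bases, delta=0.1):
    """Morris one-at-a-time screening: from each base point, perturb a
    single input by delta and record the scaled output change."""
    k = len(bases[0])
    effects = [[] for _ in range(k)]
    for b in bases:
        for i in range(k):
            z = list(b)
            z[i] += delta
            effects[i].append((f(z) - f(b)) / delta)
    # mu* (mean absolute effect) ranks overall influence; the spread of
    # effects across base points signals non-linearity or interaction
    mu_star = [sum(abs(e) for e in es) / len(es) for es in effects]
    return mu_star, effects

bases = [[0.1 * t, 0.1 * t, 0.1 * t] for t in range(5)]
mu_star, effects = elementary_effects(f, bases)
```

The usual plot places each parameter at (mu*, sigma): x0 lands far right with zero spread (linear), x1 shows spread (nonlinear), and x2 collapses to the origin (no influence).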
Sobol Index Visualization
A plot representing Sobol indices, which decompose the variance of the model output into contributions from each feature or their interactions.
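For a small model on a discrete grid, first-order Sobol indices can be computed by brute force as the variance of the conditional mean over the total variance; the additive two-input model below is a hypothetical example chosen so the indices sum to 1:

```python
def f(x1, x2):
    # hypothetical additive model: no interactions, so S1 + S2 = 1
    return x1 + 2 * x2

grid = [i / 9 for i in range(10)]   # uniform grid on [0, 1] per input
ys = [f(a, b) for a in grid for b in grid]
mean = sum(ys) / len(ys)
var_y = sum((y - mean) ** 2 for y in ys) / len(ys)

def first_order(index):
    """S_i = Var(E[Y | X_i]) / Var(Y), by enumerating the grid."""
    conds = []
    for v in grid:
        vals = [f(v, b) for b in grid] if index == 0 else [f(a, v) for a in grid]
        conds.append(sum(vals) / len(vals))   # E[Y | X_index = v]
    m = sum(conds) / len(conds)
    return sum((c - m) ** 2 for c in conds) / len(conds) / var_y

s1, s2 = first_order(0), first_order(1)
```

Because the second input enters with weight 2, it contributes 4 times the variance of the first, giving S1 = 0.2 and S2 = 0.8; the visualization typically shows these as bars, with the gap to 1 attributed to interactions.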
What-If Analysis Dashboard
An interactive tool for manually manipulating an observation's feature values and observing, in real time, the impact on the model's prediction.
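A non-interactive console stand-in for one panel of such a dashboard: sweep a single feature over candidate values and report the prediction at each (the two-feature `model` below is hypothetical):

```python
def model(x):
    # hypothetical model behind the dashboard
    return 1 if 0.3 * x[0] + 0.7 * x[1] > 0.5 else 0

def what_if(model, instance, feature, values):
    """Vary one feature of the instance and collect (value, prediction)
    pairs -- the data a what-if slider widget would display live."""
    rows = []
    for v in values:
        z = list(instance)
        z[feature] = v
        rows.append((v, model(z)))
    return rows

rows = what_if(model, [0.4, 0.4], 1, [0.2, 0.4, 0.6, 0.8])
for v, pred in rows:
    print(f"feature 1 = {v:.1f} -> prediction {pred}")
```

Reading the sweep reveals where the prediction flips (here between 0.4 and 0.6), which is the boundary a user would discover by dragging the slider.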
Surrogate Model Visualization
The graphical representation of a simple and interpretable model (such as a decision tree or linear regression) trained to mimic the behavior of a complex (black box) model.
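A minimal sketch of fitting a global linear surrogate to a black box and measuring its fidelity (the cubic `black_box` and the probe grid are hypothetical; real surrogates are often trained on the original dataset instead):

```python
def black_box(x):
    # hypothetical complex model (here simply a smooth nonlinear function)
    return x ** 3

xs = [i / 10 for i in range(-10, 11)]    # probe points in [-1, 1]
ys = [black_box(x) for x in xs]

# global linear surrogate y ~ a + b*x via ordinary least squares
n = len(xs)
mx = sum(xs) / n
my = sum(ys) / n
b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
     / sum((x - mx) ** 2 for x in xs))
a = my - b * mx

# fidelity: R^2 of the surrogate against the black box on the probe set
ss_res = sum((y - (a + b * x)) ** 2 for x, y in zip(xs, ys))
ss_tot = sum((y - my) ** 2 for y in ys)
r2 = 1 - ss_res / ss_tot
```

Visualizing the surrogate (its line, or a decision tree's splits) only explains the black box to the extent of this fidelity score, so the R² is usually reported alongside the plot; here the line captures the cubic's trend but misses its curvature.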