AI Terminology
A complete dictionary of Artificial Intelligence
Decision Tree Visualization
Tree-like graphical representation of a model's decisions showing decision nodes, condition branches, and result leaves to interpret the model's logical reasoning process.
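A minimal sketch, assuming scikit-learn is available: train a small tree and render its decision nodes, condition branches, and result leaves as indented text (a text-mode stand-in for graphical tools such as `plot_tree`).

```python
# Sketch assuming scikit-learn; export_text prints the tree's logic as text.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

iris = load_iris()
tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(iris.data, iris.target)

# Each line is a decision node ("feature <= threshold") or a leaf ("class: ...").
text_tree = export_text(tree, feature_names=list(iris.feature_names))
print(text_tree)
```

Reading the output top to bottom reproduces the model's reasoning for any input.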
Feature Importance Plot
Ordered bar chart quantifying the relative influence of each input variable on the model's predictions, facilitating the identification of determining factors.
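One model-agnostic way to produce the numbers behind such a chart is permutation importance. A minimal NumPy-only sketch, with a hypothetical least-squares model as the predictor and "importance" defined as the drop in R² after shuffling a column:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))
# Feature 0 dominates, feature 1 matters a little, feature 2 is pure noise.
y = 3.0 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.1, size=500)

def r2(y_true, y_pred):
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

# Fit a least-squares linear model as the (stand-in) model to explain.
w, *_ = np.linalg.lstsq(X, y, rcond=None)
predict = lambda M: M @ w

baseline = r2(y, predict(X))
importances = []
for j in range(X.shape[1]):
    Xp = X.copy()
    Xp[:, j] = rng.permutation(Xp[:, j])        # break the feature/target link
    importances.append(baseline - r2(y, predict(Xp)))

# Sorting gives the "ordered bar chart": feature 0 comes out on top.
order = np.argsort(importances)[::-1]
```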
SHAP Summary Plot
Graph combining SHAP values and feature importance to reveal the directional impact of each feature on predictions across the entire dataset.
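A minimal sketch of the quantity a summary plot displays. For a linear model with independent features, the exact SHAP value of feature i on instance x is w_i · (x_i − mean(x_i)); the summary plot scatters these per-feature values over the whole dataset. (The `shap` library automates this for arbitrary models; the data below is synthetic.)

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 3))
w = np.array([2.0, -1.0, 0.0])       # feature 2 has no effect
b = 0.5
predictions = X @ w + b

# SHAP values: one row per instance, one column per feature.
shap_values = w * (X - X.mean(axis=0))

# Local accuracy: base value + sum of contributions == prediction.
base_value = predictions.mean()
reconstructed = base_value + shap_values.sum(axis=1)
```

The per-feature columns of `shap_values` show both magnitude and direction of impact, which is exactly what the summary plot encodes with position and color.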
Class Activation Maps
Weighted visualizations of CNN feature maps indicating spatially where the model focuses to identify a specific class in an image.
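A minimal NumPy sketch of the CAM computation: given the last convolutional feature maps A_k (shape K×H×W) and the classifier weights w_k for one class, the map is the weighted sum over channels. The arrays here are synthetic stand-ins for real CNN activations.

```python
import numpy as np

rng = np.random.default_rng(0)
K, H, W = 4, 7, 7
feature_maps = rng.random((K, H, W))   # activations A_k from the last conv layer
class_weights = rng.random(K)          # classifier weights w_k for the target class

# CAM(h, w) = sum_k w_k * A_k(h, w)
cam = np.tensordot(class_weights, feature_maps, axes=1)   # shape (H, W)

# Normalize to [0, 1] before upsampling and overlaying on the input image.
cam = (cam - cam.min()) / (cam.max() - cam.min())
```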
Feature Interaction Plot
3D or heatmap visualization showing how the effect of one feature on the prediction changes based on the values of another feature.
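A minimal sketch of the grid behind such a heatmap: a two-feature partial-dependence surface computed with NumPy, for a hypothetical model whose prediction contains an explicit x0·x1 interaction term.

```python
import numpy as np

def model(X):
    # Hypothetical model: prediction depends on the *product* of features 0 and 1.
    return X[:, 0] * X[:, 1] + 0.5 * X[:, 2]

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 3))

grid0 = np.linspace(-2, 2, 5)
grid1 = np.linspace(-2, 2, 5)
pd_surface = np.empty((len(grid0), len(grid1)))

for i, v0 in enumerate(grid0):
    for j, v1 in enumerate(grid1):
        Xg = X.copy()
        Xg[:, 0] = v0               # clamp both features to the grid point
        Xg[:, 1] = v1
        pd_surface[i, j] = model(Xg).mean()   # average over the dataset

# In this surface, the effect of feature 0 flips sign as feature 1 changes:
# non-parallel rows/columns are the visual signature of an interaction.
```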
Counterfactual Visualization
Graphical representation of the minimal changes needed to input data to change the model's prediction, helping to understand decision thresholds.
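For a linear classifier the minimal change has a closed form, which makes a compact sketch: the smallest L2 perturbation that flips the decision is a step along the weight vector, just past the boundary. A NumPy sketch with a hypothetical two-feature classifier:

```python
import numpy as np

w = np.array([1.0, -2.0])
b = 0.5
decide = lambda x: (x @ w + b) > 0

x = np.array([2.0, 2.0])          # original instance, score = 2 - 4 + 0.5 = -1.5
margin = x @ w + b
eps = 1e-6                        # tiny overshoot to land on the other side
delta = -(margin / (w @ w)) * w * (1 + eps)   # minimal L2 perturbation
counterfactual = x + delta

flipped = decide(counterfactual) != decide(x)
```

Plotting `x`, `counterfactual`, and the boundary shows exactly how far the instance sits from the decision threshold.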
Model-Agnostic Visualizations
Universal visual techniques working with any type of model without requiring access to its internal parameters, such as PDP or LIME, for generalized interpretability.
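A minimal sketch of one such technique, a one-dimensional partial dependence curve (PDP), computed model-agnostically with NumPy: the model is treated as an opaque callable, and only its predictions are used, never its internal parameters.

```python
import numpy as np

def black_box(X):
    # Stand-in for any fitted model's predict(); its internals are never inspected.
    return np.sin(X[:, 0]) + 0.2 * X[:, 1]

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(400, 2))

grid = np.linspace(-3, 3, 25)
# For each grid value, clamp feature 0 and average predictions over the data.
pdp = np.array([black_box(np.column_stack([np.full(len(X), v), X[:, 1]])).mean()
                for v in grid])
# pdp[k] ~ sin(grid[k]) + const: the marginal effect of feature 0, recovered
# without touching the model's parameters.
```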
Attention Visualization
Graphs showing attention weights between different parts of input data (words in a sentence, regions in an image) to reveal relationships learned by the model.
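A minimal NumPy sketch of the weights such a plot displays: scaled dot-product attention for a toy sequence. Each row of `attn` shows how strongly one position attends to every other position; a heatmap of this matrix is the visualization.

```python
import numpy as np

rng = np.random.default_rng(0)
seq_len, d = 4, 8
Q = rng.normal(size=(seq_len, d))   # queries (synthetic stand-ins)
K = rng.normal(size=(seq_len, d))   # keys

scores = Q @ K.T / np.sqrt(d)       # similarity of each query to each key

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)   # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

attn = softmax(scores, axis=-1)     # each row sums to 1
```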
Gradient-Based Explanations
Visualizations using gradients of the output with respect to inputs to quantify and display the influence of each feature on the final prediction.
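A minimal sketch of a gradient-based saliency score, using central finite differences with NumPy so no autodiff framework is needed: the influence of feature i is |∂f/∂x_i| at the input point. The model `f` is a hypothetical stand-in.

```python
import numpy as np

def f(x):
    # Hypothetical scalar model output; feature 2 is deliberately irrelevant.
    return 3.0 * x[0] ** 2 - 2.0 * x[1] + 0.0 * x[2]

x = np.array([1.0, 1.0, 1.0])
h = 1e-5
grad = np.empty_like(x)
for i in range(len(x)):
    e = np.zeros_like(x)
    e[i] = h
    grad[i] = (f(x + e) - f(x - e)) / (2 * h)   # central difference

saliency = np.abs(grad)   # bar heights for the saliency display
```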
Decision Flow Charts
Flow diagrams sequentially representing the decision steps of a model with standardized symbols for processes, decisions, and outcomes.
Confidence Intervals Visualization
Graphs showing confidence intervals around predictions to represent model uncertainty and the reliability of its decisions.
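One common way to obtain such intervals is the bootstrap. A minimal NumPy sketch with a deliberately trivial mean-predictor: refit on resampled data many times, then take percentiles of the resulting predictions as the interval drawn around the point estimate.

```python
import numpy as np

rng = np.random.default_rng(0)
y = rng.normal(loc=10.0, scale=2.0, size=100)   # synthetic training targets

# Refit the (trivial) model on 2000 bootstrap resamples of the data.
boot_preds = np.array([rng.choice(y, size=len(y), replace=True).mean()
                       for _ in range(2000)])

lower, upper = np.percentile(boot_preds, [2.5, 97.5])   # 95% interval
point = y.mean()                                        # point prediction
```

The width of `[lower, upper]` is what the visualization shades around each prediction; a wider band signals lower reliability.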
Rule Extraction Visualization
Graphical representation of logical rules extracted from complex models (black boxes) in the form of interpretable diagrams or decision tables.
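A minimal sketch, assuming scikit-learn: traverse a fitted tree's internal arrays and emit each root-to-leaf path as an IF ... THEN rule, i.e. the decision-table form of the model. (Here the surrogate model is a small tree; for true black boxes, rules are typically extracted from a surrogate trained to mimic the black box.)

```python
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier

iris = load_iris()
clf = DecisionTreeClassifier(max_depth=2, random_state=0).fit(iris.data, iris.target)
t = clf.tree_

def extract_rules(node=0, conds=()):
    if t.children_left[node] == -1:               # leaf: emit one finished rule
        cls = iris.target_names[t.value[node][0].argmax()]
        return [f"IF {' AND '.join(conds) or 'TRUE'} THEN class = {cls}"]
    name = iris.feature_names[t.feature[node]]
    thr = t.threshold[node]
    return (extract_rules(t.children_left[node],  conds + (f"{name} <= {thr:.2f}",)) +
            extract_rules(t.children_right[node], conds + (f"{name} > {thr:.2f}",)))

rules = extract_rules()
```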
Instance-Based Explanations
Customized visualizations for a specific prediction showing influential features and similar examples that led to that particular decision.
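A minimal NumPy sketch of the "similar examples" half of such an explanation: retrieve the training points closest (in L2 distance) to the instance being explained, so they can be shown next to the prediction. The data and query are synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)
X_train = rng.normal(size=(100, 2))
y_train = (X_train[:, 0] > 0).astype(int)

x = np.array([2.0, 0.0])                 # instance whose prediction we explain
dists = np.linalg.norm(X_train - x, axis=1)
neighbor_idx = np.argsort(dists)[:5]     # 5 most similar training examples
neighbor_labels = y_train[neighbor_idx]
# If the neighbors share the model's predicted label, they support the
# decision: "this case resembles these known cases".
```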
Global vs Local Explanations
Visual comparison between global explanations (model's general behavior) and local explanations (individual predictions) to identify consistencies and anomalies.
Tree Interpreter
Visualization tool decomposing each decision tree prediction into contributions of individual features, showing how each variable affects the final outcome.
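A minimal sketch of the tree-interpreter decomposition on a tiny hand-coded regression tree (the `treeinterpreter` package applies the same idea to scikit-learn models): prediction = bias (root mean) + one contribution per split, where each contribution is the change in node mean credited to the feature split on.

```python
import numpy as np

# Internal node: feature, threshold, node mean, children; leaf: mean only.
tree = {"feature": 0, "threshold": 0.0, "value": 10.0,
        "left":  {"feature": 1, "threshold": 1.0, "value": 6.0,
                  "left": {"value": 4.0}, "right": {"value": 8.0}},
        "right": {"value": 14.0}}

def interpret(node, x):
    bias = node["value"]                          # mean at the root
    contributions = {}
    while "feature" in node:                      # walk down to a leaf
        child = node["left"] if x[node["feature"]] <= node["threshold"] else node["right"]
        f = node["feature"]
        contributions[f] = contributions.get(f, 0.0) + child["value"] - node["value"]
        node = child
    return bias, contributions, node["value"]     # prediction is the leaf value

bias, contribs, pred = interpret(tree, x=np.array([-1.0, 2.0]))
# By construction: bias + sum(contribs.values()) == pred
```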