
AI Glossary

A complete dictionary of Artificial Intelligence

162 categories · 2,032 subcategories · 23,060 terms

Anchors

A local explanation method that provides simple and sufficient decision rules (anchors) that probabilistically guarantee the same prediction for a neighborhood of the observation, offering more stable interpretation than LIME.

Shapley Values

The fundamental theoretical concept of SHAP, representing the average marginal contribution of a feature across all possible feature coalitions in a model, ensuring a fair distribution of importance.
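The coalition averaging described above can be made concrete with a brute-force sketch. The helper `shapley_values` below is illustrative, not a library API; it enumerates every coalition and, as a common simplification, "removes" a feature by substituting its baseline value:

```python
from itertools import combinations
from math import comb

def shapley_values(model, instance, baseline, n_features):
    """Exact Shapley values by enumerating all feature coalitions.

    Features absent from a coalition are replaced by their baseline
    value (one common way to 'remove' a feature from the model input).
    """
    values = []
    for i in range(n_features):
        others = [j for j in range(n_features) if j != i]
        phi = 0.0
        for size in range(n_features):
            for coalition in combinations(others, size):
                # Classic Shapley weight: |S|!(M-|S|-1)!/M! = 1/(M*C(M-1,|S|))
                weight = 1.0 / (n_features * comb(n_features - 1, size))
                with_i = [instance[j] if j in coalition or j == i else baseline[j]
                          for j in range(n_features)]
                without_i = [instance[j] if j in coalition else baseline[j]
                             for j in range(n_features)]
                phi += weight * (model(with_i) - model(without_i))
        values.append(phi)
    return values

# Toy linear model: for f(x) = 2*x0 + 3*x1, the Shapley values
# recover each feature's term exactly.
f = lambda x: 2 * x[0] + 3 * x[1]
print(shapley_values(f, [1.0, 1.0], [0.0, 0.0], 2))  # [2.0, 3.0]
```

The exact computation is exponential in the number of features, which is why practical tools approximate it (KernelSHAP) or exploit model structure (TreeSHAP).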

Local Surrogate Explanation

An approach that trains a simple and interpretable model (such as a decision tree or linear regression) to approximate the behavior of a complex model only in the restricted neighborhood of a specific prediction.
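A minimal LIME-style sketch of this idea, assuming a NumPy-only weighted least-squares fit (the helper name `local_surrogate`, the Gaussian proximity kernel, and the sampling width are illustrative choices, not a fixed algorithm):

```python
import numpy as np

def local_surrogate(model, instance, n_samples=500, width=0.3, seed=0):
    """Fit a weighted linear surrogate around one instance."""
    rng = np.random.default_rng(seed)
    # Sample perturbations in the neighborhood of the instance.
    X = instance + rng.normal(scale=width, size=(n_samples, instance.size))
    y = np.array([model(x) for x in X])
    # Proximity kernel: perturbations closer to the instance weigh more.
    dist = np.linalg.norm(X - instance, axis=1)
    w = np.exp(-(dist / width) ** 2)
    # Weighted least squares with an intercept column.
    A = np.column_stack([X, np.ones(n_samples)])
    sw = np.sqrt(w)
    coef, *_ = np.linalg.lstsq(A * sw[:, None], y * sw, rcond=None)
    return coef[:-1], coef[-1]

# Non-linear "black box": f(x) = x0^2 + x1.
f = lambda x: x[0] ** 2 + x[1]
slopes, _ = local_surrogate(f, np.array([1.0, 0.0]))
# Near x = [1, 0] the model behaves like 2*x0 + x1, so slopes ≈ [2, 1].
```

The surrogate's coefficients are only meaningful in the sampled neighborhood; far from the instance the quadratic term dominates and the linear approximation breaks down.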

Observation Neighborhood

The data space defined around a specific observation, used by local interpretation methods to generate variations and approximate the model's behavior in this restricted region.

Local Fidelity

A metric evaluating the accuracy with which a local explanation (such as a surrogate model) reproduces the original model's predictions in the neighborhood of the explained observation.

TreeSHAP

A variant of the SHAP algorithm optimized for decision tree-based models, capable of calculating exact Shapley values much faster by leveraging the intrinsic structure of these models.

KernelSHAP

A SHAP implementation that uses a weighting function (kernel) to estimate Shapley values approximately, making it applicable to any model in an agnostic manner but with higher computational cost.
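The weighting function referred to here is the Shapley kernel, which assigns weight to a coalition of size s out of M features as (M-1) / (C(M,s) · s · (M-s)). A small sketch of just the weight computation (the empty and full coalitions get infinite weight and are handled as constraints, not sampled):

```python
from math import comb

def shap_kernel_weight(M, s):
    """Shapley kernel weight for a coalition of size s out of M features.

    Nearly-empty and nearly-full coalitions receive the largest weight,
    because they are the most informative about individual features.
    """
    return (M - 1) / (comb(M, s) * s * (M - s))

# With M = 4 features, singleton coalitions outweigh half-full ones.
print(shap_kernel_weight(4, 1))  # 0.25
print(shap_kernel_weight(4, 2))  # 0.125
```

KernelSHAP then solves a weighted linear regression under these weights, whose solution estimates the Shapley values for any black-box model.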

DeepSHAP

An adaptation of SHAP specifically designed for deep learning models, which combines Shapley values with backpropagation techniques to efficiently compute feature attributions.

Ad-Hoc Explanation

A locally generated explanation specifically for a single instance, without claiming generalization, unlike global explanations that seek to describe the model's overall behavior.

Local Feature Influence

The measure of the impact of a specific feature on the prediction of a single observation, quantifying how varying this feature would change the model's outcome for this specific case.

Individual Prediction Diagnosis

The complete process of analyzing a single prediction using various local methods (LIME, SHAP, counterfactuals) to understand the underlying mechanisms, validate the decision, and identify potential biases.

Local Explanation Stability

The property of a local interpretation method to produce consistent explanations for very similar observations, a critical issue for the trust and reliability of individual diagnostics.

Integrated Gradients

A local attribution method for differentiable models that calculates the importance of a feature by integrating the gradient of the output with respect to that feature along a path from a baseline to the input.
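The path integral described above can be approximated with a Riemann sum along the straight line from the baseline to the input. A sketch assuming an analytic gradient function is available (in practice a deep learning framework would supply it via autodiff):

```python
import numpy as np

def integrated_gradients(grad_fn, x, baseline, steps=200):
    """Riemann-sum approximation of integrated gradients.

    grad_fn(p) must return the gradient of the model output at point p.
    """
    alphas = np.linspace(0.0, 1.0, steps)
    # Straight-line path from the baseline to the input.
    path = baseline + alphas[:, None] * (x - baseline)
    grads = np.array([grad_fn(p) for p in path])
    # Attribution = (x - baseline) * average gradient along the path.
    return (x - baseline) * grads.mean(axis=0)

# Toy model f(x) = x0 * x1, whose gradient is [x1, x0].
grad_f = lambda p: np.array([p[1], p[0]])
attr = integrated_gradients(grad_f, np.array([2.0, 3.0]), np.zeros(2))
print(attr)  # ≈ [3.0, 3.0]
```

A useful sanity check is the completeness property: the attributions sum to f(x) - f(baseline), here 6 - 0 = 6.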

Baseline

A reference point (often a zero vector or average instance) used in attribution methods like integrated gradients to measure the contribution of a feature relative to a neutral or expected state.
