
AI Glossary

A complete dictionary of Artificial Intelligence terms

162 categories · 2,032 subcategories · 23,060 terms

Anchors

A method that identifies simple and sufficient decision rules (anchors) that explain a model's prediction for a given instance with high fidelity.

Shapley Value

A concept from game theory that measures the average marginal contribution of a player (feature) across all possible coalitions, serving as the foundation for SHAP.
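This average-over-coalitions definition can be computed exactly for small games. A minimal self-contained sketch, using a hypothetical three-player game invented purely for illustration (it is not from any library):

```python
from itertools import combinations
from math import factorial

def shapley_values(players, value):
    """Exact Shapley values: for each player, average the weighted
    marginal contribution over all coalitions not containing them."""
    n = len(players)
    phi = {}
    for i in players:
        others = [p for p in players if p != i]
        total = 0.0
        for size in range(n):
            for coalition in combinations(others, size):
                S = set(coalition)
                # Coalition weight: |S|! * (n - |S| - 1)! / n!
                w = factorial(size) * factorial(n - size - 1) / factorial(n)
                total += w * (value(S | {i}) - value(S))
        phi[i] = total
    return phi

# Toy game: base worths 1/2/3, plus a +4 synergy when a and b cooperate.
def v(S):
    base = {"a": 1, "b": 2, "c": 3}
    bonus = 4 if {"a", "b"} <= S else 0
    return sum(base[p] for p in S) + bonus

phi = shapley_values(["a", "b", "c"], v)
# phi["a"] ≈ 3, phi["b"] ≈ 4, phi["c"] ≈ 3: the synergy is split evenly,
# and the values sum to v({a, b, c}) = 10 (the efficiency property).
```

SHAP applies the same idea with features as players and the model's prediction as the value function; since the number of coalitions grows as 2^n, practical implementations approximate rather than enumerate.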

Input Perturbation

The process of creating slight variations in the input data to observe the effect on the model's prediction, used by methods like LIME to build a local neighborhood.
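As a rough illustration, the sketch below (the stand-in black box, noise scale, and sample count are all invented for this example) draws Gaussian-perturbed copies of a tabular instance and queries an opaque model on each:

```python
import random

def perturb(instance, n_samples=200, scale=0.5, seed=0):
    """Generate perturbed copies of a numeric instance by adding
    Gaussian noise to each feature (a hypothetical toy setup)."""
    rng = random.Random(seed)
    return [[x + rng.gauss(0, scale) for x in instance]
            for _ in range(n_samples)]

def black_box(x):
    # Stand-in for an opaque model: a nonlinear decision function.
    return 1.0 if x[0] * x[1] > 1.0 else 0.0

instance = [1.2, 1.1]
neighborhood = perturb(instance)
preds = [black_box(z) for z in neighborhood]
# Both classes appear among the predictions: the perturbations have
# crossed the local decision boundary around the instance.
```

Methods like LIME then fit a simple model on these (perturbation, prediction) pairs to approximate the boundary locally.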

Fidelity

A metric evaluating how faithfully a local explanation (like LIME's simple model) mimics the behavior of the complex model in its neighborhood.

Model-Agnostic Explanation

An interpretability approach that treats the predictive model as a black box, interacting only with its inputs and outputs to generate explanations.

Saliency Map

A visualization that highlights the pixels or features of an input that most influenced a model's prediction, often obtained by computing the gradient.
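In practice these gradients come from backpropagation, but the idea can be sketched without autodiff using central finite differences (the scoring function below is a made-up stand-in, not a real model):

```python
def finite_diff_saliency(f, x, eps=1e-4):
    """Approximate |df/dx_i| for each input feature via central
    differences; a crude stand-in for backprop-based saliency."""
    sal = []
    for i in range(len(x)):
        hi, lo = x[:], x[:]
        hi[i] += eps
        lo[i] -= eps
        sal.append(abs((f(hi) - f(lo)) / (2 * eps)))
    return sal

# Hypothetical scorer: depends strongly on feature 0, weakly on feature 1.
score = lambda x: 3.0 * x[0] + 0.1 * x[1] ** 2
saliency = finite_diff_saliency(score, [1.0, 1.0])
# saliency[0] ≈ 3.0 dominates saliency[1] ≈ 0.2, so feature 0
# would be highlighted in the map.
```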

Kernel Neighborhood

In LIME, a function that defines the proximity between the original instance and the perturbed instances, weighting their influence in the local explanation model.

Explanation Rule

A simple logical condition (e.g., IF feature_A > X AND feature_B < Y) that captures the primary reason for a specific prediction, typical of methods like Anchors.

Post-hoc Interpretability

The analysis of a model after its training to understand its decisions, as opposed to intrinsically interpretable models.

SHAP Kernel Explainer

A SHAP implementation using kernel weighting to estimate Shapley values, making it model-agnostic but potentially slower.

SHAP Tree Explainer

An optimized SHAP algorithm that calculates exact Shapley values for tree-based models (like XGBoost, LightGBM) very efficiently.

Local Surrogate Explanation

The fundamental principle of LIME, consisting of training a simple and interpretable model (surrogate) to approximate the behavior of the complex model locally.
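This principle combines perturbation, kernel weighting, and a simple surrogate. A minimal LIME-style sketch for a single numeric feature (the black-box function, kernel width, and sample count are illustrative assumptions, not LIME's actual defaults):

```python
import math
import random

def local_surrogate_1d(f, x0, width=0.5, n=500, seed=1):
    """Fit a weighted linear surrogate to black-box f near x0:
    sample a neighborhood, weight samples by an exponential kernel,
    and solve weighted least squares in closed form."""
    rng = random.Random(seed)
    xs = [x0 + rng.uniform(-1, 1) for _ in range(n)]
    ys = [f(x) for x in xs]
    # Kernel neighborhood: closer perturbations get larger weights.
    ws = [math.exp(-((x - x0) ** 2) / width ** 2) for x in xs]
    sw = sum(ws)
    mx = sum(w * x for w, x in zip(ws, xs)) / sw
    my = sum(w * y for w, y in zip(ws, ys)) / sw
    cov = sum(w * (x - mx) * (y - my) for w, x, y in zip(ws, xs, ys)) / sw
    var = sum(w * (x - mx) ** 2 for w, x in zip(ws, xs)) / sw
    slope = cov / var
    return slope, my - slope * mx

black_box = lambda x: x ** 2              # nonlinear globally...
slope, intercept = local_surrogate_1d(black_box, x0=2.0)
# ...but near x0 = 2 the surrogate's slope approximates f'(2) = 4,
# giving a faithful local explanation of the black box.
```

The same recipe extends to many features (the surrogate becomes a sparse linear model) and to text or images (perturbation toggles words or superpixels on and off).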
