
AI Glossary

The complete AI glossary

162 categories · 2,032 subcategories · 23,060 terms

Anchors

A method that identifies simple and sufficient decision rules (anchors) that explain a model's prediction for a given instance with high fidelity.
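
A minimal sketch of the quantity Anchors optimizes: a candidate rule's precision, estimated by sampling perturbations that satisfy the rule and checking whether the model's prediction stays the same. `predict_fn`, `rule`, and `sample_fn` are hypothetical placeholders, not a real library API.

```python
import numpy as np

def rule_precision(predict_fn, x, rule, sample_fn, n_samples=1000):
    """Estimate a candidate anchor rule's precision: the fraction of perturbed
    instances satisfying the rule for which the model's prediction is unchanged.
    `rule` is a predicate over feature vectors and `sample_fn` draws
    perturbations of x (both are hypothetical helpers)."""
    target = predict_fn(x.reshape(1, -1))[0]      # prediction being explained
    samples = np.array([s for s in sample_fn(x, n_samples) if rule(s)])
    if len(samples) == 0:
        return 0.0
    return float(np.mean(predict_fn(samples) == target))  # near 1 = good anchor
```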

Shapley Value

A concept from game theory that measures the average marginal contribution of a player (feature) across all possible coalitions, serving as the foundation for SHAP.
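
For a player set N and value function v, the Shapley value of player i averages its marginal contribution over all coalitions; in SHAP, the players are features and v(S) is, roughly, the expected model output when only the features in S are known:

```latex
\phi_i(v) = \sum_{S \subseteq N \setminus \{i\}}
  \frac{|S|!\,(|N| - |S| - 1)!}{|N|!}
  \left[ v(S \cup \{i\}) - v(S) \right]
```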

Input Perturbation

The process of creating slight variations in the input data to observe the effect on the model's prediction, used by methods like LIME to build a local neighborhood.
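
A minimal sketch, assuming a 1-D NumPy instance `x` and a black-box `predict_fn`; Gaussian noise is just one common perturbation scheme (LIME's tabular variant, for instance, resamples features from training statistics).

```python
import numpy as np

def perturb_and_predict(predict_fn, x, n_samples=500, scale=0.1, seed=0):
    """Create slight variations of instance x and record the black-box
    prediction for each; together the pairs form the local neighborhood
    that methods like LIME fit their surrogate on."""
    rng = np.random.default_rng(seed)
    perturbed = x + rng.normal(0.0, scale, size=(n_samples, x.shape[0]))
    return perturbed, predict_fn(perturbed)
```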

Fidelity

A metric evaluating how faithfully a local explanation (like LIME's simple model) mimics the behavior of the complex model in its neighborhood.
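
One common way to score fidelity is a proximity-weighted R² between the two models' outputs on the perturbed neighborhood, similar to the score LIME reports for its linear surrogate:

```python
from sklearn.metrics import r2_score

def local_fidelity(black_box_preds, surrogate_preds, weights=None):
    """Fidelity as (optionally proximity-weighted) R^2 between the complex
    model's outputs and the surrogate's outputs on the same neighborhood;
    values near 1 mean the explanation is locally faithful."""
    return r2_score(black_box_preds, surrogate_preds, sample_weight=weights)
```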

Model-Agnostic Explanation

An interpretability approach that treats the predictive model as a black box, interacting only with its inputs and outputs to generate explanations.
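
Permutation importance is one example of the approach: the sketch below (a hypothetical helper, NumPy inputs assumed) touches only `predict_fn`'s inputs and outputs, so it works for any black box.

```python
import numpy as np

def permutation_effect(predict_fn, X, feature, seed=0):
    """Shuffle one feature column and measure how much the predictions move.
    The model's internals are never inspected: only inputs and outputs."""
    rng = np.random.default_rng(seed)
    X_shuffled = X.copy()
    X_shuffled[:, feature] = rng.permutation(X_shuffled[:, feature])
    return float(np.mean(np.abs(predict_fn(X_shuffled) - predict_fn(X))))
```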

Saliency Map

A visualization that highlights the pixels or features of an input that most influenced a model's prediction, often obtained by computing the gradient.
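
A minimal vanilla-gradient sketch in PyTorch, assuming `model` is an image classifier and `image` a (C, H, W) tensor; many refinements (SmoothGrad, Grad-CAM) build on this idea.

```python
import torch

def saliency_map(model, image):
    """Back-propagate the top class score to the input pixels and take the
    absolute gradient: bright values mark pixels whose small changes most
    affect the prediction. Returns an (H, W) heat map."""
    model.eval()
    x = image.detach().clone().unsqueeze(0)  # add batch dimension
    x.requires_grad_(True)
    score = model(x)[0].max()                # score of the predicted class
    score.backward()                         # d(score) / d(input)
    return x.grad.abs().squeeze(0).max(dim=0).values  # max over channels
```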

Kernel Neighborhood

In LIME, a function that defines the proximity between the original instance and the perturbed instances, weighting their influence in the local explanation model.
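
A sketch of the weighting step: a kernel of this exponential shape, applied to the distance between the original and each perturbed instance, is similar to LIME's default choice.

```python
import numpy as np

def exponential_kernel(distances, kernel_width):
    """Map distances to proximity weights: perturbed instances close to the
    original get weight near 1, distant ones decay toward 0."""
    return np.sqrt(np.exp(-(distances ** 2) / kernel_width ** 2))
```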

Explanation Rule

A simple logical condition (e.g., IF feature_A > X AND feature_B < Y) that captures the primary reason for a specific prediction, typical of methods like Anchors.

Post-hoc Interpretability

The analysis of a model after its training to understand its decisions, as opposed to intrinsically interpretable models.

SHAP Kernel Explainer

A SHAP implementation that uses kernel weighting to estimate Shapley values, making it model-agnostic but typically slower than the model-specific explainers.
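
A usage sketch with the `shap` package (assuming `shap` and scikit-learn are installed); the background sample stands in for "absent" features when coalitions are evaluated.

```python
import shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor

X, y = load_diabetes(return_X_y=True)
model = RandomForestRegressor(n_estimators=50, random_state=0).fit(X, y)

background = shap.sample(X, 50)                              # keep estimation tractable
explainer = shap.KernelExplainer(model.predict, background)  # only needs predict
shap_values = explainer.shap_values(X[:5])                   # cost grows quickly with data
```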

SHAP Tree Explainer

An optimized SHAP algorithm that calculates exact Shapley values for tree-based models (like XGBoost, LightGBM) very efficiently.
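
A usage sketch (assuming `shap` and `xgboost` are installed); unlike the kernel variant, no background sampling or coalition enumeration is needed.

```python
import shap
import xgboost
from sklearn.datasets import load_diabetes

X, y = load_diabetes(return_X_y=True)
model = xgboost.XGBRegressor(n_estimators=100).fit(X, y)

explainer = shap.TreeExplainer(model)   # exploits the tree structure directly
shap_values = explainer.shap_values(X)  # exact Shapley values, fast on the full set
```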

Local Surrogate Explanation

The fundamental principle of LIME: training a simple, interpretable model (the surrogate) to approximate the behavior of the complex model locally.
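
A condensed end-to-end sketch of the principle, assuming a regression-style `predict_fn` and a 1-D NumPy instance; Gaussian perturbation and Ridge (LIME's default regressor) are illustrative choices.

```python
import numpy as np
from sklearn.linear_model import Ridge

def local_surrogate(predict_fn, x, n_samples=1000, scale=0.1, width=0.75, seed=0):
    """Perturb x, weight the samples by proximity to x, and fit a weighted
    linear surrogate; its coefficients are the local per-feature explanation."""
    rng = np.random.default_rng(seed)
    Z = x + rng.normal(0.0, scale, size=(n_samples, x.shape[0]))  # neighborhood
    d = np.linalg.norm(Z - x, axis=1)
    w = np.exp(-(d ** 2) / width ** 2)                            # proximity weights
    surrogate = Ridge(alpha=1.0).fit(Z, predict_fn(Z), sample_weight=w)
    return surrogate.coef_
```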
