
AI Glossary

A complete dictionary of Artificial Intelligence

162 categories · 2,032 subcategories · 23,060 terms

Anchors

Local explanation approach that identifies high-precision IF-THEN rules (anchors) which lock a prediction in place locally: as long as the anchor's conditions hold, changes to the features it does not cover leave the prediction unchanged with high probability.


Counterfactual Explanations

Method generating minimal hypothetical scenarios that show how to change input features to obtain a different prediction, thus explaining the model's decision boundaries.
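A minimal sketch of this idea, assuming a toy logistic model with fixed weights as the black box and a greedy one-feature-at-a-time search (the weights `w`, `b`, the step size, and the search strategy are illustrative assumptions, not a reference implementation):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy black-box classifier: a logistic model with fixed weights
# (an assumption for illustration; any score function would do).
w = np.array([1.5, -2.0, 0.5])
b = -0.25

def predict_proba(x):
    return sigmoid(x @ w + b)

def counterfactual(x, step=0.05, max_iter=1000):
    """Greedily nudge the single most influential feature until the
    predicted class flips, yielding a sparse counterfactual."""
    x_cf = x.astype(float).copy()
    orig_class = predict_proba(x) >= 0.5
    direction = -1.0 if orig_class else 1.0   # push the score the other way
    for _ in range(max_iter):
        if (predict_proba(x_cf) >= 0.5) != orig_class:
            break
        p = predict_proba(x_cf)
        grad = p * (1 - p) * w                # dp/dx for this toy model
        j = int(np.argmax(np.abs(grad)))      # most influential feature
        x_cf[j] += direction * step * np.sign(grad[j])
    return x_cf

x = np.array([1.0, 0.0, 0.0])
x_cf = counterfactual(x)
```

Because only one feature is moved per step, the resulting counterfactual changes as few features as possible, which is what makes it readable as "what would have to differ for a different decision".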


Local Surrogate Models

Simplified models trained to approximate the behavior of a complex model only in the neighborhood of a specific prediction, providing locally interpretable explanations.
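A LIME-style sketch of this recipe, assuming an arbitrary non-linear function as the black box (the function, sampling scale, and kernel width are illustrative assumptions): sample around the instance, weight by proximity, fit a weighted linear surrogate.

```python
import numpy as np

rng = np.random.default_rng(0)

# Non-linear black box (illustrative stand-in for any model's predict)
def black_box(X):
    return np.sin(X[:, 0]) + X[:, 1] ** 2

x0 = np.array([0.5, 1.0])                     # instance to explain

# 1. Sample perturbations in the neighborhood of the instance
X_pert = x0 + rng.normal(scale=0.1, size=(500, 2))
y_pert = black_box(X_pert)

# 2. Weight samples by proximity (Gaussian kernel)
d2 = ((X_pert - x0) ** 2).sum(axis=1)
weights = np.exp(-d2 / 0.02)

# 3. Fit a weighted linear surrogate (weighted least squares)
A = np.hstack([X_pert, np.ones((len(X_pert), 1))])   # add intercept column
W = np.sqrt(weights)[:, None]
coef, *_ = np.linalg.lstsq(W * A, W[:, 0] * y_pert, rcond=None)
local_slopes = coef[:2]   # interpretable local effect of each feature
```

The fitted slopes approximate the black box's local gradient at `x0` (here roughly cos(0.5) for the first feature and 2·1.0 for the second), even though the surrogate knows nothing about the model's internals.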


LRP (Layer-wise Relevance Propagation)

Backpropagation technique that redistributes the final prediction of the neural network through its layers down to the input features, quantifying their individual contribution.
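A sketch of the layer-by-layer redistribution using the LRP epsilon rule on a tiny two-layer ReLU network with fixed weights and no biases (the weights and the bias-free setup are assumptions chosen so the conservation property is easy to verify):

```python
import numpy as np

# Tiny 2-layer ReLU network, hand-set weights, no biases (assumption)
W1 = np.array([[1.0, -0.5],
               [0.5,  1.0],
               [-1.0, 0.5]])   # hidden x input
w2 = np.array([1.0, 0.5, -0.5])
eps = 1e-9

def stabilize(d):
    # epsilon stabilizer to avoid division by zero
    return d + eps * np.where(d >= 0, 1.0, -1.0)

def forward(x):
    a1 = np.maximum(0.0, W1 @ x)
    return a1, w2 @ a1

def lrp(x):
    """Redistribute the output score through the layers (epsilon rule)."""
    a1, y = forward(x)
    # Output -> hidden: relevance proportional to contribution z_j = a1_j * w2_j
    z = a1 * w2
    R1 = z / stabilize(z.sum()) * y
    # Hidden -> input: same proportional rule per hidden unit
    Z = W1 * x[None, :]                        # z_{j,i} = W1[j,i] * x_i
    R0 = (Z / stabilize(Z.sum(axis=1, keepdims=True)) * R1[:, None]).sum(axis=0)
    return R0, y

R0, y = lrp(np.array([1.0, 2.0]))
```

With no biases, the epsilon rule conserves relevance almost exactly: the input attributions `R0` sum back to the network output `y`.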


DeepLIFT

Relevance attribution method for deep neural networks that compares each neuron's activation to its reference state, calculating contributions by difference rather than gradients.


Integrated Gradients

Attribution technique that integrates gradients along a path from a baseline reference to the current input, ensuring axiomatic properties like sensitivity and implementation invariance.
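A numerical sketch, assuming a differentiable toy model `f(x) = sigmoid(w·x)` whose gradient is known in closed form (the weights and the midpoint Riemann approximation are illustrative choices):

```python
import numpy as np

w = np.array([0.8, -1.2, 2.0])   # fixed toy weights (assumption)

def f(x):
    return 1.0 / (1.0 + np.exp(-(x @ w)))

def grad_f(x):
    p = f(x)
    return p * (1 - p) * w       # closed-form gradient of this model

def integrated_gradients(x, baseline, steps=300):
    """Midpoint Riemann approximation of the path integral of gradients
    along the straight line from `baseline` to `x`."""
    alphas = (np.arange(steps) + 0.5) / steps
    total = np.zeros_like(x, dtype=float)
    for a in alphas:
        total += grad_f(baseline + a * (x - baseline))
    return (x - baseline) * total / steps

x = np.array([1.0, 0.5, -0.5])
baseline = np.zeros(3)
ig = integrated_gradients(x, baseline)
# Completeness axiom: attributions sum to f(x) - f(baseline)
gap = ig.sum() - (f(x) - f(baseline))
```

The completeness check is the practical payoff of the axiomatic properties: with enough steps, the attributions account for the full change in output relative to the baseline.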


Occlusion Sensitivity

Local explanation approach that systematically masks regions of the input and observes the impact on prediction, identifying critical areas for the model's decision.
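A minimal sketch with a sliding mask, assuming a toy black box that scores an 8x8 "image" by looking only at one known 2x2 patch (an assumption that makes the expected hotspot obvious):

```python
import numpy as np

# Toy black box: only the 2x2 patch at rows/cols 3..4 matters (assumption)
def score(img):
    return img[3:5, 3:5].sum()

img = np.ones((8, 8))
base = score(img)

patch = 2
sens = np.zeros((8 - patch + 1, 8 - patch + 1))
for i in range(sens.shape[0]):
    for j in range(sens.shape[1]):
        occluded = img.copy()
        occluded[i:i + patch, j:j + patch] = 0.0   # mask one window
        sens[i, j] = base - score(occluded)        # drop in score
```

The sensitivity map peaks exactly where the mask covers the region the model actually uses, which is the whole point of the technique: no gradients, just masking and re-scoring.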


Influence Functions

Analytical technique estimating how the model's parameters and predictions would change if a specific training point were removed or reweighted, identifying the training data most influential for a given prediction.
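The quantity influence functions approximate can be computed exactly by brute force: retrain without each training point (leave-one-out) and measure the change in a test prediction. A sketch on a synthetic 1-D regression with one planted outlier (the data, the least-squares fit, and the outlier placement are illustrative assumptions):

```python
import numpy as np

# Points on the line y = 2x, plus one planted outlier at index 4
X = np.linspace(0, 1, 9)
y = 2.0 * X
y[4] += 3.0            # planted outlier (assumption for illustration)
x_test = 0.5

def fit_predict(X_tr, y_tr, x):
    # ordinary least-squares line fit (slope + intercept)
    A = np.vstack([X_tr, np.ones_like(X_tr)]).T
    coef, *_ = np.linalg.lstsq(A, y_tr, rcond=None)
    return coef[0] * x + coef[1]

full_pred = fit_predict(X, y, x_test)
# Leave-one-out: how much does removing each point move the test prediction?
influence = np.array([
    fit_predict(np.delete(X, i), np.delete(y, i), x_test) - full_pred
    for i in range(len(X))
])
most_influential = int(np.argmax(np.abs(influence)))
```

Retraining n times is exactly what influence functions avoid: they estimate the same per-point effect analytically from gradients and the Hessian of the trained model.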


ICE (Individual Conditional Expectation)

Method visualizing how a model's prediction changes for an individual observation when the value of a feature varies, revealing heterogeneous effects and interactions.
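A sketch of computing ICE curves, assuming a toy black box with an explicit interaction between its two features (an illustrative stand-in for any fitted model's predict function):

```python
import numpy as np

# Black box with an interaction: the effect of x0 depends on x1 (assumption)
def model(X):
    return X[:, 0] * X[:, 1]

def ice_curve(x_instance, feature, grid):
    """Predictions for one instance as a single feature sweeps a grid,
    with all other features held fixed at the instance's values."""
    X = np.tile(x_instance, (len(grid), 1)).astype(float)
    X[:, feature] = grid
    return model(X)

grid = np.linspace(-1, 1, 21)
curve_a = ice_curve(np.array([0.0,  1.0]), feature=0, grid=grid)
curve_b = ice_curve(np.array([0.0, -1.0]), feature=0, grid=grid)
```

The two individuals' curves have opposite slopes: exactly the heterogeneous effect that an averaged partial-dependence plot would flatten to zero, and the reason ICE plots exist.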


Kernel SHAP

SHAP variant using a kernel approach to estimate Shapley values without enumerating all coalitions, applicable to any machine learning model.
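To see what Kernel SHAP estimates without enumeration, the Shapley values can be computed exactly for a small number of features. A sketch with d = 3 and mean-imputation of "absent" features (the toy model, background values, and value function are illustrative assumptions; Kernel SHAP targets the same quantity via weighted sampling):

```python
import numpy as np
from itertools import combinations
from math import factorial

background = np.array([0.0, 0.0, 0.0])   # reference for "absent" features
x = np.array([1.0, 2.0, 3.0])            # instance to explain

def f(v):
    return v[0] + 2 * v[1] * v[2]        # toy model with an interaction

def value(S):
    """Model output with features in S taken from x, the rest from background."""
    v = background.copy()
    for i in S:
        v[i] = x[i]
    return f(v)

d = 3
phi = np.zeros(d)
for i in range(d):
    others = [j for j in range(d) if j != i]
    for k in range(d):
        for S in combinations(others, k):
            # classic Shapley weight for a coalition of size k
            weight = factorial(k) * factorial(d - k - 1) / factorial(d)
            phi[i] += weight * (value(S + (i,)) - value(S))
```

The efficiency property holds by construction: the attributions sum to f(x) minus the background prediction, and the interaction between features 1 and 2 is split evenly between them.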


Tree SHAP

Optimized implementation of SHAP specifically designed for tree-based models, calculating exact Shapley values in polynomial time thanks to the tree structure.


Local Fidelity

Measure evaluating the accuracy with which a local explanation model reproduces the predictions of the original model in the neighborhood of a specific instance, crucial for trust in the explanation.
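One common way to score local fidelity is a proximity-weighted R² between the surrogate and the black box in the instance's neighborhood. A sketch comparing a tangent-line surrogate against a deliberately bad one (the black box, kernel, and neighborhood scale are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)

def black_box(X):
    return np.sin(3 * X[:, 0])

x0 = np.array([0.2])

def local_fidelity(slope, intercept, scale):
    """Weighted R^2 of a linear surrogate vs the black box near x0."""
    X = x0 + rng.normal(scale=scale, size=(400, 1))
    y = black_box(X)
    y_hat = slope * X[:, 0] + intercept
    w = np.exp(-((X[:, 0] - x0[0]) ** 2) / (2 * scale ** 2))  # proximity weights
    resid = np.sum(w * (y - y_hat) ** 2)
    total = np.sum(w * (y - np.average(y, weights=w)) ** 2)
    return 1.0 - resid / total

# Tangent line of sin(3x) at x0 vs a surrogate with the wrong slope
good = local_fidelity(3 * np.cos(0.6), np.sin(0.6) - 0.2 * 3 * np.cos(0.6), 0.05)
bad = local_fidelity(-1.0, 0.8, 0.05)
```

A surrogate can look plausible globally yet score poorly here; checking fidelity in the neighborhood is what justifies trusting the explanation for this particular instance.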


Feature Perturbation

Local analysis technique systematically modifying input features to observe prediction changes, identifying the most sensitive features for a given decision.


Local Feature Attribution

Process quantifying the contribution of each feature to a specific prediction, in contrast to global importance, which considers the entire dataset.


Decision Boundary Visualization

Graphical method showing how the model separates classes around a specific prediction, helping to understand local decision mechanisms and prediction robustness.


Local Linear Approximation

Technique locally approximating a non-linear model with a simple linear model around a prediction point, facilitating interpretation of complex decisions.
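A model-agnostic sketch: build the local linear model from central finite differences at the prediction point (the black-box function and step size are illustrative assumptions).

```python
import numpy as np

# Non-linear black box (assumption for illustration)
def f(x):
    return np.exp(x[0]) * np.sin(x[1])

def local_linear(f, x0, h=1e-5):
    """Return (intercept, slopes) of the first-order approximation
    f(x) ~ f(x0) + slopes . (x - x0), via central finite differences."""
    slopes = np.zeros_like(x0, dtype=float)
    for i in range(len(x0)):
        e = np.zeros_like(x0, dtype=float)
        e[i] = h
        slopes[i] = (f(x0 + e) - f(x0 - e)) / (2 * h)
    return f(x0), slopes

x0 = np.array([0.0, np.pi / 2])
y0, slopes = local_linear(f, x0)
```

At this point the true gradient is (1, 0), so the linear model both matches the gradient and predicts well nearby, while saying nothing about the model far from `x0`: the defining trade-off of local approximations.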


Local Permutation Importance

Variant of permutation importance that evaluates the impact of randomizing each feature on a specific prediction rather than on the entire dataset.
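A sketch of the local variant, assuming a toy model that ignores one of its two features (the model, background sample, and instance are illustrative assumptions): instead of permuting a column over the whole dataset, repeatedly swap in background values for one feature of a single instance and average the prediction change.

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy black box that ignores feature 1 entirely (assumption)
def model(X):
    return 3.0 * X[:, 0] + 0.0 * X[:, 1]

X_background = rng.normal(size=(200, 2))   # reference data for replacements
x0 = np.array([1.0, 1.0])                  # the single prediction to explain

def local_permutation_importance(x, n_repeats=100):
    base = model(x[None, :])[0]
    imps = np.zeros(x.shape[0])
    for j in range(x.shape[0]):
        diffs = []
        for _ in range(n_repeats):
            x_perm = x.copy()
            # replace feature j with a random background value
            x_perm[j] = X_background[rng.integers(len(X_background)), j]
            diffs.append(abs(model(x_perm[None, :])[0] - base))
        imps[j] = np.mean(diffs)
    return imps

imps = local_permutation_importance(x0)
```

The ignored feature scores exactly zero for this prediction, while the used feature scores high: the same signal dataset-level permutation importance gives, but attributed to one decision.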
