
AI Glossary

The complete glossary of Artificial Intelligence

162 categories · 2,032 subcategories · 23,060 terms
📖 Terms

Anchors

Local explanation approach that identifies high-precision IF-THEN rules (anchors) that anchor the prediction locally, ensuring that modifications to uncovered features do not affect the prediction.
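A minimal sketch of the idea, assuming a hypothetical black-box classifier in which only the first two features matter: fixing them at the instance's values (the anchor) keeps the prediction stable no matter how the uncovered feature is perturbed, while anchoring only an irrelevant feature does not.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical black-box classifier: predicts 1 when both of the
# first two features exceed 0.5, ignoring the third feature.
def model(X):
    return ((X[:, 0] > 0.5) & (X[:, 1] > 0.5)).astype(int)

x = np.array([0.8, 0.9, 0.1])          # instance to explain
pred = model(x[None, :])[0]            # prediction to anchor

def anchor_precision(anchor_idx, n_samples=2000):
    """Sample perturbations that keep the anchored features fixed
    and draw the remaining features uniformly from [0, 1]."""
    X = rng.uniform(0, 1, size=(n_samples, 3))
    X[:, anchor_idx] = x[anchor_idx]   # hold anchor features at their values
    return np.mean(model(X) == pred)

# The rule IF x0 > 0.5 AND x1 > 0.5 anchors the prediction with high
# precision; fixing only the irrelevant feature x2 does not.
prec_good = anchor_precision([0, 1])
prec_bad = anchor_precision([2])
```

The real Anchors algorithm searches the space of candidate rules with a bandit strategy; this sketch only evaluates the precision of two hand-picked rules.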


Counterfactual Explanations

Method generating minimal hypothetical scenarios that show how to change input features to obtain a different prediction, thus explaining the model's decision boundaries.
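As an illustrative sketch, assume a hypothetical linear scoring model that rejects an applicant; a counterfactual explanation is the smallest single-feature change that flips the decision, found here by solving for where each feature crosses the decision boundary.

```python
import numpy as np

# Hypothetical scoring model: approve (1) when the linear score is positive.
w = np.array([2.0, -1.0, 0.5])
def predict(x):
    return int(x @ w > 0)

x = np.array([-0.2, 0.5, 0.3])   # rejected instance: score = -0.75

# For each feature, the smallest shift that flips the prediction;
# keep the candidate with the smallest absolute change.
best = None
for i in range(len(x)):
    if w[i] == 0:
        continue
    delta = -(x @ w) / w[i]          # shift that moves the score to zero
    delta += np.sign(delta) * 1e-6   # tiny margin to cross the boundary
    cand = x.copy()
    cand[i] += delta
    if predict(cand) == 1 and (best is None or abs(delta) < best[1]):
        best = (i, delta)

feature, change = best   # the minimal counterfactual edit
```

Practical counterfactual methods add constraints (plausibility, sparsity, actionability); this sketch only shows the "minimal change that alters the outcome" core.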


Local Surrogate Models

Simplified models trained to approximate the behavior of a complex model only in the neighborhood of a specific prediction, providing locally interpretable explanations.
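A minimal sketch of the approach (the idea behind LIME), assuming a hypothetical nonlinear black box: sample a neighborhood around the instance, weight samples by proximity, and fit a weighted linear model whose slopes serve as the local explanation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical black-box model, nonlinear in both features.
def black_box(X):
    return np.sin(X[:, 0]) + X[:, 1] ** 2

x = np.array([1.0, 2.0])   # instance to explain

# Sample a neighborhood around x and weight samples by proximity.
Z = x + rng.normal(scale=0.1, size=(500, 2))
y = black_box(Z)
weights = np.exp(-np.sum((Z - x) ** 2, axis=1) / 0.02)

# Weighted least squares: fit an interpretable linear surrogate locally.
A = np.hstack([Z - x, np.ones((len(Z), 1))])   # centered features + intercept
sw = np.sqrt(weights)
coef, *_ = np.linalg.lstsq(A * sw[:, None], y * sw, rcond=None)

# The surrogate's slopes approximate the black box's local gradient:
# d/dx0 sin(x0) = cos(1) ≈ 0.54 and d/dx1 x1^2 = 2*2 = 4 at this point.
local_slopes = coef[:2]
```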


LRP (Layer-wise Relevance Propagation)

Backpropagation technique that redistributes the final prediction of the neural network through its layers down to the input features, quantifying their individual contribution.
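A sketch of the epsilon rule on a tiny hand-built two-layer ReLU network (weights are illustrative values, not from any trained model): each unit's relevance is shared among its inputs in proportion to their contributions, so the input relevances sum back to the network's output (conservation).

```python
import numpy as np

# Tiny two-layer ReLU network with hand-picked, illustrative weights.
W1 = np.array([[1.0, -0.5],
               [0.5,  1.0],
               [-1.0, 0.5]])
W2 = np.array([2.0, 1.0])

x = np.array([1.0, 2.0, 0.5])
h = np.maximum(x @ W1, 0.0)   # hidden activations
y = h @ W2                    # output to be redistributed as relevance

def lrp_layer(inputs, weights, relevance, eps=1e-9):
    """Epsilon rule: share each unit's relevance among its inputs in
    proportion to their contributions z_jk = a_j * w_jk."""
    z = inputs[:, None] * weights        # per-connection contributions
    s = z.sum(axis=0)
    s = s + eps * np.sign(s)             # stabilizer against division by zero
    return (z / s * relevance).sum(axis=1)

R_hidden = lrp_layer(h, W2[:, None], np.array([y]))
R_input = lrp_layer(x, W1, R_hidden)
# Conservation: R_input sums (up to the stabilizer) to the output y.
```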


DeepLIFT

Relevance attribution method for deep neural networks that compares each neuron's activation to its reference state, calculating contributions by difference rather than gradients.


Integrated Gradients

Attribution technique that integrates gradients along a path from a baseline reference to the current input, ensuring axiomatic properties like sensitivity and implementation invariance.
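A minimal sketch for a simple differentiable model with a known gradient (both assumed for illustration): average the gradient along the straight path from a baseline to the input, scale by the input difference, and verify the completeness axiom (attributions sum to the change in output).

```python
import numpy as np

# Simple differentiable model and its analytic gradient (illustrative).
def f(x):
    return x[0] ** 2 + 2 * x[1]

def grad_f(x):
    return np.array([2 * x[0], 2.0])

def integrated_gradients(x, baseline, steps=100):
    """Midpoint Riemann-sum approximation of the path integral of
    gradients from the baseline to the input."""
    alphas = (np.arange(steps) + 0.5) / steps
    grads = np.array([grad_f(baseline + a * (x - baseline)) for a in alphas])
    return (x - baseline) * grads.mean(axis=0)

x = np.array([1.0, 2.0])
baseline = np.zeros(2)
attributions = integrated_gradients(x, baseline)
# Completeness: attributions sum to f(x) - f(baseline).
```

In deep learning frameworks the analytic gradient is replaced by automatic differentiation; the path integral itself is the same.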


Occlusion Sensitivity

Local explanation approach that systematically masks regions of the input and observes the impact on prediction, identifying critical areas for the model's decision.
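A minimal 1-D sketch, assuming a hypothetical scorer that only looks at positions 4-7 of its input: sliding a zero-mask across the signal and recording the prediction drop reveals exactly that region as critical.

```python
import numpy as np

# Hypothetical scorer for a 1-D signal: only positions 4-7 matter.
w = np.zeros(12)
w[4:8] = 1.0
def model(x):
    return float(x @ w)

x = np.ones(12)
base = model(x)   # unoccluded prediction

# Slide a window of zeros across the input; record the prediction drop.
window = 4
drops = []
for start in range(0, len(x) - window + 1):
    occluded = x.copy()
    occluded[start:start + window] = 0.0   # mask this region
    drops.append(base - model(occluded))
drops = np.array(drops)

most_critical = int(drops.argmax())   # window position with largest impact
```

For images the same loop runs over 2-D patches, producing a sensitivity heatmap.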


Influence Functions

Analytical technique estimating how the model's parameters and predictions would change if a specific training point were modified, identifying the training data most influential for a given prediction.


ICE (Individual Conditional Expectation)

Method visualizing how a model's prediction changes for an individual observation when the value of a feature varies, revealing heterogeneous effects and interactions.
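A minimal sketch with a hypothetical interaction model f(x0, x1) = x0·x1: the two observations' ICE curves have opposite slopes, a heterogeneity that the averaged PDP curve (flat here) completely hides.

```python
import numpy as np

# Hypothetical model with an interaction: the effect of feature 0
# depends on feature 1, which a PDP average would hide.
def model(X):
    return X[:, 0] * X[:, 1]

X_obs = np.array([[0.0,  1.0],
                  [0.0, -1.0]])   # two observations with opposite x1

grid = np.linspace(-1, 1, 21)     # values to sweep for feature 0

def ice_curve(x_row):
    """Prediction as feature 0 varies, other features held fixed."""
    X = np.tile(x_row, (len(grid), 1))
    X[:, 0] = grid
    return model(X)

curves = np.array([ice_curve(row) for row in X_obs])
pdp = curves.mean(axis=0)   # the PDP is the average of ICE curves: flat here
```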


Kernel SHAP

SHAP variant using a kernel approach to estimate Shapley values without enumerating all coalitions, applicable to any machine learning model.
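To make concrete what Kernel SHAP estimates, this sketch computes the Shapley values exactly by enumerating all coalitions for a hypothetical three-feature model ("missing" features are set to a zero baseline, a common simplification of the conditional expectation). Kernel SHAP approximates the same quantities via a weighted linear regression over sampled coalitions, which is what makes it tractable for many features.

```python
import numpy as np
from itertools import combinations
from math import factorial

# Hypothetical model; features absent from a coalition take baseline values.
baseline = np.zeros(3)
x = np.array([1.0, 1.0, 1.0])

def f(z):
    return z[0] + 2 * z[1] * z[2]

def value(S):
    """Model output with features in coalition S taken from x, rest from baseline."""
    z = baseline.copy()
    z[list(S)] = x[list(S)]
    return f(z)

n = len(x)
phi = np.zeros(n)
for i in range(n):
    others = [j for j in range(n) if j != i]
    for size in range(n):
        for S in combinations(others, size):
            # Shapley weight for a coalition of this size
            w = factorial(size) * factorial(n - size - 1) / factorial(n)
            phi[i] += w * (value(S + (i,)) - value(S))
# Efficiency: Shapley values sum to f(x) - f(baseline).
```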


Tree SHAP

Optimized implementation of SHAP specifically designed for tree-based models, calculating exact Shapley values in polynomial time thanks to the tree structure.


Local Fidelity

Measure evaluating the accuracy with which a local explanation model reproduces the predictions of the original model in the neighborhood of a specific instance, crucial for trust in the explanation.
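A minimal sketch of the measurement, using the black box sin(x) and its tangent line at x0 = 1 as the local explanation: the surrogate's R² against the black box is near-perfect in a tight neighborhood and collapses as the neighborhood widens, which is exactly what local fidelity quantifies.

```python
import numpy as np

rng = np.random.default_rng(0)

# Black box and a candidate local linear explanation (its tangent at x0).
def black_box(X):
    return np.sin(X[:, 0])

x0 = 1.0
def surrogate(X):
    return np.sin(x0) + np.cos(x0) * (X[:, 0] - x0)   # first-order Taylor

def local_fidelity(radius, n=2000):
    """R^2 of the surrogate against the black box within the neighborhood."""
    Z = rng.uniform(x0 - radius, x0 + radius, size=(n, 1))
    y, y_hat = black_box(Z), surrogate(Z)
    ss_res = np.sum((y - y_hat) ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    return 1 - ss_res / ss_tot

fidelity_near = local_fidelity(0.1)   # tangent fits well close to x0
fidelity_far = local_fidelity(2.0)    # fit degrades over a wide neighborhood
```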


Feature Perturbation

Local analysis technique systematically modifying input features to observe prediction changes, identifying the most sensitive features for a given decision.
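A minimal sketch, assuming a hypothetical model in which feature 0 dominates and feature 2 is ignored: perturbing one feature at a time around the instance and measuring the spread of the outputs ranks the features by local sensitivity.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical model: feature 0 dominates, feature 2 is ignored.
def model(X):
    return 3.0 * X[:, 0] + 0.1 * X[:, 1]

x = np.array([0.5, 0.5, 0.5])
base = model(x[None, :])[0]

# Perturb one feature at a time and measure the spread of the outputs.
sensitivity = np.zeros(3)
for i in range(3):
    X = np.tile(x, (1000, 1))
    X[:, i] += rng.normal(scale=0.1, size=1000)   # local perturbation
    sensitivity[i] = np.std(model(X) - base)

ranking = np.argsort(sensitivity)[::-1]   # most sensitive feature first
```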


Local Feature Attribution

Process quantifying the contribution of each feature to a specific prediction, different from global importance which considers the entire dataset.


Decision Boundary Visualization

Graphical method showing how the model separates classes around a specific prediction, helping to understand local decision mechanisms and prediction robustness.


Local Linear Approximation

Technique locally approximating a non-linear model with a simple linear model around a prediction point, facilitating interpretation of complex decisions.
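A minimal sketch for a hypothetical nonlinear model: estimate the gradient at the prediction point by central finite differences (no access to model internals needed) and use the resulting tangent plane as the interpretable local approximation.

```python
import numpy as np

# Hypothetical nonlinear model approximated by its tangent plane at a point.
def model(x):
    return x[0] ** 2 + np.exp(x[1])

x0 = np.array([1.0, 0.0])

def finite_diff_grad(f, x, h=1e-5):
    """Central finite differences: treats the model as a black box."""
    g = np.zeros_like(x)
    for i in range(len(x)):
        e = np.zeros_like(x)
        e[i] = h
        g[i] = (f(x + e) - f(x - e)) / (2 * h)
    return g

g = finite_diff_grad(model, x0)

def linear_approx(x):
    return model(x0) + g @ (x - x0)

# Near x0 the linear model tracks the complex model closely.
nearby = np.array([1.05, 0.05])
err = abs(model(nearby) - linear_approx(nearby))
```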


Local Permutation Importance

Variant of permutation importance that evaluates the impact of randomizing each feature on a specific prediction rather than on the entire dataset.
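A minimal sketch, assuming a hypothetical model and a background dataset to draw replacement values from: each feature of the single instance is replaced with permuted background values, and the average change of this one prediction, rather than a dataset-wide error, gives its local importance.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical model and background data for replacement values.
def model(X):
    return X[:, 0] * 2.0 + X[:, 1] * 0.1

background = rng.uniform(-1, 1, size=(500, 3))
x = np.array([0.8, 0.3, 0.5])
base = model(x[None, :])[0]

# For one instance: replace each feature with shuffled background values
# and measure the average change of this single prediction.
importance = np.zeros(3)
for i in range(3):
    X = np.tile(x, (len(background), 1))
    X[:, i] = rng.permutation(background[:, i])   # randomize feature i
    importance[i] = np.mean(np.abs(model(X) - base))
```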
