AI Glossary
The complete dictionary of Artificial Intelligence
LIME (Local Interpretable Model-agnostic Explanations)
Local explanation technique that approximates the behavior of a complex model using a simple interpretable model around a specific prediction.
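The idea can be sketched in a few lines: perturb the instance, weight the perturbations by proximity, and fit a weighted linear surrogate. This is a minimal illustration, not the `lime` library's API; `black_box` is a hypothetical stand-in for any complex model.

```python
# Minimal LIME-style sketch. `black_box` is a toy stand-in for a
# complex model; all names here are illustrative.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

def black_box(X):
    # Nonlinear scoring function playing the role of the complex model.
    return np.sin(X[:, 0]) + X[:, 1] ** 2

def lime_explain(x, n_samples=500, width=0.75):
    # 1. Perturb the instance with Gaussian noise.
    Z = x + rng.normal(scale=0.5, size=(n_samples, x.size))
    # 2. Weight each sample by its proximity to x (RBF kernel).
    w = np.exp(-np.sum((Z - x) ** 2, axis=1) / width ** 2)
    # 3. Fit a weighted linear surrogate to the black box locally.
    surrogate = Ridge(alpha=1.0).fit(Z, black_box(Z), sample_weight=w)
    return surrogate.coef_  # local feature importances

coefs = lime_explain(np.array([0.0, 1.0]))
print(coefs)  # gradient-like weights around the instance
```

Around the point (0, 1) the surrogate's coefficients approximate the local gradient of the black box: roughly 1 for the sine term and 2 for the squared term.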
SHAP Values
Game theory-based approach to quantify the impact of each feature on the model's final prediction.
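For a handful of features, Shapley values can be computed exactly by enumerating every coalition. The sketch below uses a toy linear model (a hypothetical stand-in, not the `shap` library) so the result can be checked by hand: for a linear model, each feature's Shapley value equals its coefficient times its deviation from the baseline.

```python
# Exact Shapley values by coalition enumeration (tractable only for
# a few features; `model` is a toy stand-in).
import itertools, math
import numpy as np

def model(x, baseline, subset):
    # Value function: features outside `subset` are replaced by baseline.
    z = baseline.copy()
    z[list(subset)] = x[list(subset)]
    return 3 * z[0] + 2 * z[1] + z[2]  # simple linear model

def shapley_values(x, baseline):
    n = len(x)
    phi = np.zeros(n)
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for r in range(n):
            for S in itertools.combinations(others, r):
                # Classic Shapley weight |S|! (n - |S| - 1)! / n!
                w = math.factorial(len(S)) * math.factorial(n - len(S) - 1) / math.factorial(n)
                # Marginal contribution of feature i to coalition S.
                phi[i] += w * (model(x, baseline, S + (i,)) - model(x, baseline, S))
    return phi

phi = shapley_values(np.array([1.0, 1.0, 1.0]), np.zeros(3))
print(phi)  # [3. 2. 1.] for this linear model
```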
Feature Attribution
Set of techniques that assign an importance score to each input variable to explain its contribution to the model's decision.
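One simple attribution technique is permutation importance: shuffle one feature at a time and record how much the model's accuracy drops. A sketch on synthetic data (the dataset and model are illustrative choices):

```python
# Permutation-based feature attribution sketch on synthetic data.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=400, n_features=4, n_informative=2,
                           n_redundant=0, random_state=0)
clf = LogisticRegression().fit(X, y)
base_acc = clf.score(X, y)

scores = []
for j in range(X.shape[1]):
    Xp = X.copy()
    rng.shuffle(Xp[:, j])                        # destroy feature j's information
    scores.append(base_acc - clf.score(Xp, y))   # importance = accuracy drop
print(scores)  # informative features show the largest drops
```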
Counterfactual Explanations
Methods that generate modified scenarios showing which minimal conditions must change to alter the model's prediction.
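A minimal counterfactual search can be sketched as a greedy walk toward the decision boundary: keep nudging the input until the predicted class flips, then report the changed input as the counterfactual. The linear classifier and step size here are illustrative assumptions.

```python
# Greedy counterfactual search sketch against a linear classifier.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=200, n_features=2, n_informative=2,
                           n_redundant=0, random_state=1)
clf = LogisticRegression().fit(X, y)

x = X[0].copy()
original = clf.predict([x])[0]
# Unit normal of the decision boundary; walk toward the other class.
direction = clf.coef_[0] / np.linalg.norm(clf.coef_[0])
if original == 1:
    direction = -direction

steps = 0
while clf.predict([x])[0] == original and steps < 1000:
    x += 0.01 * direction   # small nudge toward the boundary
    steps += 1

print(original, clf.predict([x])[0], steps)  # class flips after `steps` nudges
```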
Attention Maps
Visualization techniques that show the areas or elements on which a deep learning model focuses to make a decision.
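At their core, attention maps are just the softmax weights of scaled dot-product attention: row i shows how much position i attends to every other position. A toy numpy sketch (random queries and keys, purely illustrative):

```python
# Toy scaled dot-product attention: the softmax weight matrix itself
# is the "attention map" over input positions.
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

rng = np.random.default_rng(0)
seq_len, d = 4, 8
Q = rng.normal(size=(seq_len, d))   # queries
K = rng.normal(size=(seq_len, d))   # keys

attn = softmax(Q @ K.T / np.sqrt(d))  # row i: where position i looks
print(attn.round(2))                  # each row sums to 1
```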
Surrogate Models
Simple, interpretable models trained to mimic the behavior of a complex black-box model so that its decisions become understandable.
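The key trick is to train the surrogate on the black box's predictions rather than on the true labels. A sketch, with a random forest standing in for the black box (both model choices are illustrative):

```python
# Global surrogate sketch: a shallow decision tree mimics a random forest.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=500, random_state=0)
black_box = RandomForestClassifier(random_state=0).fit(X, y)

# Train the surrogate on the black box's labels, not the true labels.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

# Fidelity: how often the surrogate agrees with the black box.
fidelity = surrogate.score(X, black_box.predict(X))
print(f"fidelity: {fidelity:.2f}")
```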
Rule Extraction
Process that converts decisions from complex models into sets of logical rules easily interpretable by humans.
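For tree-based models this conversion is direct: every root-to-leaf path is an if-then rule. A sketch that walks a fitted scikit-learn tree (the Iris dataset and shallow depth are illustrative choices):

```python
# Rule extraction sketch: print each root-to-leaf path of a decision
# tree as an if-then rule.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier

data = load_iris()
tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(data.data, data.target)
t = tree.tree_

def extract_rules(node=0, conds=()):
    if t.children_left[node] == -1:  # leaf: emit the accumulated rule
        label = data.target_names[t.value[node].argmax()]
        rules.append("IF " + " AND ".join(conds or ("True",)) + f" THEN {label}")
        return
    name = data.feature_names[t.feature[node]]
    thr = t.threshold[node]
    extract_rules(t.children_left[node], conds + (f"{name} <= {thr:.2f}",))
    extract_rules(t.children_right[node], conds + (f"{name} > {thr:.2f}",))

rules = []
extract_rules()
print("\n".join(rules))
```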
Causal Interpretation
Methods that distinguish causal relationships from mere correlations to provide deeper and more reliable explanations.
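The correlation/causation gap can be shown with a tiny simulation: a hidden confounder Z drives both X and Y, so they correlate strongly even though X has no effect on Y; intervening on X (setting it by hand, breaking the Z → X link) makes the association vanish. The coefficients below are arbitrary illustrative values.

```python
# Correlation vs. intervention sketch with a hidden confounder.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
Z = rng.normal(size=n)                  # hidden confounder
X = Z + 0.1 * rng.normal(size=n)        # X caused by Z
Y = 2 * Z + 0.1 * rng.normal(size=n)    # Y caused by Z, NOT by X

observational = np.corrcoef(X, Y)[0, 1]   # strong spurious correlation

# Intervention do(X := x): assign X directly, severing the Z -> X link.
X_do = rng.normal(size=n)
Y_do = 2 * Z + 0.1 * rng.normal(size=n)
interventional = np.corrcoef(X_do, Y_do)[0, 1]  # near zero

print(round(observational, 2), round(interventional, 2))
```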
Sensitivity Analysis
Systematic evaluation of the impact of input variable variations on model predictions.
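A one-at-a-time variant is the simplest form: sweep each input over a range while holding the others fixed, and record the spread of the output. The toy model below is an illustrative assumption.

```python
# One-at-a-time sensitivity analysis sketch.
import numpy as np

def model(x0, x1, x2):
    # Toy model: strongly sensitive to x0, mildly to x1, ignores x2.
    return 5 * x0 + 0.5 * np.sin(x1) + 0 * x2

base = {"x0": 0.0, "x1": 0.0, "x2": 0.0}
grid = np.linspace(-1, 1, 21)

sensitivity = {}
for name in base:
    # Vary one input over the grid, holding the others at their base values.
    outputs = [model(**{**base, name: v}) for v in grid]
    sensitivity[name] = max(outputs) - min(outputs)  # output range

print(sensitivity)  # x0 dominates; x2 has zero effect
```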
Explainability Evaluation Metrics
Quantitative indicators to measure the quality, fidelity, and usefulness of explanations generated by AI models.
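One common family is faithfulness metrics: correlate each feature's attribution score with the output change actually observed when that feature is removed. The linear model and hand-picked attributions below are illustrative, chosen so the expected correlation is exactly 1.

```python
# Faithfulness-style metric sketch: do attributions match observed effects?
import numpy as np

def model(x):
    return 3 * x[0] + 2 * x[1] + 1 * x[2]

x = np.array([1.0, 1.0, 1.0])
attributions = np.array([3.0, 2.0, 1.0])  # e.g. from SHAP or LIME

drops = []
for j in range(len(x)):
    x_masked = x.copy()
    x_masked[j] = 0.0                       # remove feature j
    drops.append(model(x) - model(x_masked))  # observed output change

faithfulness = np.corrcoef(attributions, drops)[0, 1]
print(round(faithfulness, 2))  # 1.0: perfect agreement
```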
Explanations by Prototypes
Approach that explains predictions by identifying the most representative examples or prototypes in the data space.
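A minimal prototype is the class medoid: the training example closest to all others of the same class. A sketch on the Iris dataset (an illustrative choice):

```python
# Prototype sketch: the class medoid is the most representative example.
import numpy as np
from sklearn.datasets import load_iris

data = load_iris()
X, y = data.data, data.target

def class_medoid(label):
    members = X[y == label]
    # Pairwise distances within the class; the medoid minimizes the sum.
    d = np.linalg.norm(members[:, None, :] - members[None, :, :], axis=-1)
    return members[d.sum(axis=1).argmin()]

proto = class_medoid(0)
print(proto)  # the most representative setosa flower
```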
Decision Visualization
Graphical techniques that visually represent the model's decision-making process to facilitate human understanding.
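Even without a plotting library, the idea can be illustrated with a text rendering of feature importances as horizontal bars (the feature names and values are hypothetical; real tooling would produce richer graphics):

```python
# Minimal text-based decision visualization: importances as bars.
importances = {"age": 0.45, "income": 0.30, "tenure": 0.15, "region": 0.10}

lines = []
for name, imp in sorted(importances.items(), key=lambda kv: -kv[1]):
    # Scale each importance to a bar of '#' characters.
    lines.append(f"{name:>8} | {'#' * int(imp * 40)} {imp:.2f}")

print("\n".join(lines))
```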