AI Glossary
The Complete Dictionary of Artificial Intelligence
Fidelity
Metric quantifying the ability of an explanation to faithfully represent the actual behavior of the underlying model, evaluated by the correlation between the explanation's predictions and those of the original model.
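As a minimal sketch, fidelity can be estimated as the Pearson correlation between the original model's outputs and the outputs reproduced by the explanation (e.g., a surrogate model). The function name and sample values below are illustrative, not from any specific library.

```python
import numpy as np

def fidelity(model_preds, explanation_preds):
    # Pearson correlation between the original model's predictions
    # and the predictions reproduced by the explanation (surrogate).
    model_preds = np.asarray(model_preds, dtype=float)
    explanation_preds = np.asarray(explanation_preds, dtype=float)
    return float(np.corrcoef(model_preds, explanation_preds)[0, 1])

# Illustrative values: a surrogate that tracks the model closely scores near 1.0
model_out = [0.1, 0.4, 0.35, 0.8, 0.95]
surrogate_out = [0.12, 0.38, 0.40, 0.75, 0.90]
score = fidelity(model_out, surrogate_out)
```

A score near 1 indicates the explanation's predictions move in lockstep with the model's; correlation is one common choice, with R² or agreement rate as alternatives.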
Stability
Indicator measuring the consistency of explanations generated for similar or slightly perturbed inputs, ensuring the robustness of interpretations in the face of minor data variations.
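One simple way to operationalize stability, sketched below under the assumption of Gaussian input noise, is the largest distance between the attribution for an input and the attributions for slightly perturbed copies of it (lower is more stable). `explain_fn` is a hypothetical attribution function, not a real API.

```python
import numpy as np

def stability(explain_fn, x, n_perturbations=20, eps=0.01, seed=0):
    # Worst-case L2 distance between the attribution for x and the
    # attributions for n slightly perturbed copies of x; 0 = perfectly stable.
    rng = np.random.default_rng(seed)
    base = explain_fn(x)
    worst = 0.0
    for _ in range(n_perturbations):
        x_pert = x + rng.normal(scale=eps, size=x.shape)
        worst = max(worst, float(np.linalg.norm(explain_fn(x_pert) - base)))
    return worst

# Toy explainer: a linear model's gradient is constant, so it is perfectly stable
w = np.array([0.5, -1.0, 2.0])
explain = lambda x: w  # attribution independent of the input
result = stability(explain, np.zeros(3))  # 0.0
```

Averaging over perturbations instead of taking the maximum yields a softer variant of the same idea.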
Comprehensibility
Metric evaluating the ease with which a human can interpret and understand an explanation, often measured by the linguistic or structural complexity of the explanatory representation.
Relevance
Score quantifying how pertinent the features highlighted in the explanation are to the model's prediction, evaluated by the importance attributed to the influential variables.
Completeness
Measure assessing whether an explanation captures all relevant factors that contributed to the model's decision, without omitting crucial information for complete interpretation.
Consistency
Indicator quantifying the logical consistency between different explanations generated by the same model, ensuring the absence of contradictions in interpretations of similar predictions.
Sensitivity
Score evaluating how explanations vary based on changes in input features, measuring the responsiveness of explanatory methods to data modifications.
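A rough sketch of sensitivity, under the assumption of a finite-difference probe: perturb each input feature by a small step and average the resulting change in the attribution vector per unit of input change. `explain_fn` is again a hypothetical attribution function introduced for illustration.

```python
import numpy as np

def sensitivity(explain_fn, x, delta=0.1):
    # Average change in the attribution vector per unit change
    # in each input feature (finite-difference estimate).
    base = explain_fn(x)
    scores = []
    for i in range(x.size):
        x_mod = x.copy()
        x_mod[i] += delta
        scores.append(np.linalg.norm(explain_fn(x_mod) - base) / delta)
    return float(np.mean(scores))

# Toy explainer whose attribution equals the input: shifting one feature
# by delta shifts the attribution by exactly delta, giving sensitivity 1.0
result = sensitivity(lambda x: x, np.zeros(3))
```

Note the tension with stability: some sensitivity is desirable (explanations should respond to meaningful input changes), while excessive sensitivity to tiny perturbations signals unreliable explanations.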
Robustness
Measure quantifying the resistance of explanations to malicious perturbations or adversarial attacks, ensuring the reliability of interpretations under degraded conditions.
Confidence Score
Numerical indicator assessing the degree of certainty associated with an explanation, reflecting the reliability of the interpretation provided by the AI model.
Explanation Depth
Metric measuring the level of detail and granularity of an explanation, quantifying the depth to which the model's internal mechanisms are revealed.
Coverage
Score evaluating the proportion of model instances or features that are effectively explained by the explainability method, measuring the scope of applicability.
Computation Time
Quantitative metric measuring the time required to generate explanations, which directly impacts the practicality and scalability of explanatory methods.
Explanatory Accuracy
Indicator assessing the quantitative correctness of explanations, measuring the adequacy between the assigned importance weights and the actual influence of features on the prediction.
Granularity
Metric defining the level of detail of explanations, ranging from global model interpretations to local explanations specific to each individual prediction.
Local Fidelity
Specific metric measuring the accuracy of an explanation for an individual prediction, evaluating the correspondence between the model's local behavior and its explanatory approximation.
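A minimal sketch of a LIME-style local fidelity check, assuming Gaussian sampling around the instance: fit a linear surrogate on a neighborhood of the input and report its R² against the model. The sampling scheme and toy model below are illustrative assumptions.

```python
import numpy as np

def local_fidelity(model_fn, x, n_samples=200, scale=0.1, seed=0):
    # R^2 of a local linear surrogate fitted on a Gaussian neighborhood
    # of x: how well a linear explanation mimics the model near x.
    rng = np.random.default_rng(seed)
    X = x + rng.normal(scale=scale, size=(n_samples, x.size))
    y = model_fn(X)
    A = np.hstack([X, np.ones((n_samples, 1))])  # design matrix with intercept
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ coef
    ss_res = float(resid @ resid)
    ss_tot = float(((y - y.mean()) ** 2).sum())
    return 1.0 - ss_res / ss_tot

# A model that is linear near x is captured almost perfectly (R^2 close to 1)
model = lambda X: X @ np.array([1.0, -2.0]) + 3.0
score = local_fidelity(model, np.array([0.5, 0.5]))
```

For nonlinear models the score drops as the neighborhood widens, which is why local fidelity is always reported relative to a sampling scale.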
Global Fidelity
Indicator quantifying the ability of an explanation to faithfully represent the model's global behavior across all predictions, measuring the accuracy of the global approximation.