AI Glossary
The complete glossary of Artificial Intelligence
Feature Interaction Strength
Metric quantifying the intensity of interaction effects between two or more features in a predictive model. Interaction is measured by the difference between the combined effect and the sum of individual feature effects.
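A minimal sketch of this difference-based definition, using a hypothetical two-feature model with a known multiplicative term (the model, baseline values, and function names are illustrative, not from any library):

```python
import numpy as np

# Hypothetical model with a known multiplicative interaction between x1 and x2.
def model(x1, x2):
    return 2.0 * x1 + 3.0 * x2 + 4.0 * x1 * x2

def interaction_strength(f, a, b, base1=0.0, base2=0.0):
    """Combined effect at (a, b) minus the sum of the two individual effects,
    each measured against a fixed baseline point."""
    combined = f(a, b) - f(base1, base2)
    effect1 = f(a, base2) - f(base1, base2)   # move x1 alone
    effect2 = f(base1, b) - f(base1, base2)   # move x2 alone
    return combined - (effect1 + effect2)

print(interaction_strength(model, 1.0, 1.0))               # 4.0: the x1*x2 term
print(interaction_strength(lambda a, b: a + b, 1.0, 1.0))  # 0.0: purely additive
```

For an additive model the combined effect equals the sum of individual effects, so the metric is exactly zero; anything nonzero is attributable to interaction.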
Friedman's H-statistic
Quantitative measure of feature interaction based on the partial variance of model predictions. The H-statistic ranges from 0 (no interaction) to 1 (strong interaction) and can be calculated for pairwise or higher-order interactions.
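A toy numpy sketch of the pairwise H² computation for a two-feature model (with only two features, the joint partial dependence reduces to the model itself; the data and models here are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, size=(200, 2))

def h_statistic(f, X):
    """Pairwise Friedman H^2 for the two features of a two-feature model f."""
    x1, x2 = X[:, 0], X[:, 1]
    # Joint partial dependence: with only two features, it is f itself.
    pd_12 = f(x1, x2)
    # One-dimensional partial dependences: average out the other feature.
    pd_1 = np.array([f(np.full_like(x2, a), x2).mean() for a in x1])
    pd_2 = np.array([f(x1, np.full_like(x1, b)).mean() for b in x2])
    # Center each partial-dependence function before comparing.
    pd_12, pd_1, pd_2 = (p - p.mean() for p in (pd_12, pd_1, pd_2))
    # Share of the joint effect's variance not explained additively.
    return np.sum((pd_12 - pd_1 - pd_2) ** 2) / np.sum(pd_12 ** 2)

print(h_statistic(lambda a, b: a + b, X))  # ≈ 0: no interaction
print(h_statistic(lambda a, b: a * b, X))  # ≈ 1: pure interaction
```

The statistic is the fraction of the joint effect's variance that the two one-dimensional partial dependences fail to explain, which is why it lands at the 0 and 1 endpoints for purely additive and purely multiplicative models.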
Stability Metric
Indicator measuring the consistency of explanations generated by an interpretation method in the face of slight variations in input data. Good stability ensures that explanations do not vary erratically for similar instances.
Comprehensibility Index
Composite score evaluating how easily a human can understand an explanation or model, based on factors such as syntactic complexity, vocabulary size, and logical structure. This index combines objective and subjective readability metrics.
Interpretability-Accuracy Trade-off
Inverse relationship between a model's ability to be interpreted by a human and its raw predictive performance. This trade-off is quantified by various metrics so that an optimal balance can be struck for the requirements of a given application domain.
Post-hoc Explainability Score
Quantitative evaluation of the quality of explanations generated after model training, combining fidelity, stability, and comprehensibility. This composite score allows different explanation techniques to be compared on the same model.
Intrinsic Interpretability Measure
Metric evaluating the degree of inherent interpretability of a model based on its algorithmic structure rather than external explanations. This measure considers factors such as linearity, monotonicity, and model sparsity.
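Two of the listed structural factors can be checked directly, as in this hypothetical sketch (the coefficient vector and grid are illustrative): sparsity as the fraction of features a linear model ignores, and monotonicity verified empirically on a grid.

```python
import numpy as np

# Hypothetical linear model: sparsity = fraction of zero coefficients.
coef = np.array([0.0, 1.2, 0.0, -0.4, 0.0])
sparsity = np.mean(coef == 0.0)
print(sparsity)  # 0.6: the model ignores 3 of 5 features

def is_monotone_increasing(f, grid):
    """Empirical monotonicity check of a 1-D response on a sorted grid."""
    return bool(np.all(np.diff(f(grid)) >= 0))

grid = np.linspace(-2.0, 2.0, 100)
print(is_monotone_increasing(lambda x: 1.2 * x, grid))  # True
print(is_monotone_increasing(lambda x: x ** 2, grid))   # False
```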
Local Fidelity Metric
Indicator measuring the accuracy of a local explanation in its ability to faithfully represent model behavior in the immediate neighborhood of an instance. This metric evaluates the validity of local approximations used in methods such as LIME or Anchors.
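A LIME-style sketch of this idea, under hypothetical choices (uniform sampling in a box around the instance, an unweighted least-squares surrogate, R² as the fidelity score): fit a linear surrogate on perturbed samples around the instance and measure how much of the black-box's local variance it explains.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical black-box model, nonlinear in both features.
def black_box(X):
    return np.sin(X[:, 0]) + X[:, 1] ** 2

def local_fidelity(f, x0, radius=0.1, n_samples=500):
    """R^2 of a local linear surrogate fitted in a box of the given
    radius around x0 (1.0 = the surrogate matches f perfectly there)."""
    Z = x0 + rng.uniform(-radius, radius, size=(n_samples, len(x0)))
    y = f(Z)
    A = np.hstack([Z, np.ones((n_samples, 1))])  # linear model + intercept
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ coef
    return 1.0 - resid.var() / y.var()

x0 = np.array([0.5, 0.5])
print(local_fidelity(black_box, x0, radius=0.05))  # near 1: locally almost linear
print(local_fidelity(black_box, x0, radius=2.0))   # lower: curvature shows up
```

Shrinking the neighborhood makes almost any smooth model look linear, so the radius at which fidelity degrades is itself informative about how local the explanation really is.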
Explanation Coverage
Proportion of the dataset for which an explanation method can generate valid and consistent interpretations. Coverage measures the generalizability of an interpretation technique and its applicability to different regions of the feature space.
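For a rule-style explanation, coverage is simply the fraction of instances the rule applies to; a hypothetical anchor-like rule makes this concrete (the rule and data are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.uniform(0.0, 1.0, size=(1000, 2))

# Hypothetical anchor-style rule: "IF x0 > 0.5 AND x1 > 0.5 THEN class A".
def rule_applies(x):
    return x[0] > 0.5 and x[1] > 0.5

coverage = np.mean([rule_applies(x) for x in X])
print(coverage)  # ≈ 0.25: the rule covers about a quarter of the feature space
```

A rule with near-zero coverage may be perfectly precise yet useless in practice, which is why coverage is reported alongside precision in anchor-style methods.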
Rule-based Interpretability Score
Metric specific to rule-based models that evaluates explanation quality based on the number of rules, their average length, and their overlap. This score favors rule sets that are concise, non-redundant, and easy to understand.
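One hypothetical way to combine the three factors into a single score (the weights and the inverse-penalty form are illustrative choices, not a standard formula):

```python
# Hypothetical scoring: penalize many rules, long rules, and overlapping rules.
# Each rule is represented as a set of condition strings.
def rule_score(rules, overlap, w_count=0.1, w_len=0.05, w_overlap=1.0):
    n_rules = len(rules)
    avg_len = sum(len(r) for r in rules) / n_rules
    penalty = w_count * n_rules + w_len * avg_len + w_overlap * overlap
    return 1.0 / (1.0 + penalty)  # higher = more interpretable

concise = [{"x0 > 0.5"}, {"x1 <= 0.2"}]
verbose = [{"x0 > 0.5", "x1 <= 0.2", "x2 > 0.9"}] * 10
print(rule_score(concise, overlap=0.0) > rule_score(verbose, overlap=0.4))  # True
```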
Consistency Measure
Indicator assessing whether similar explanations are generated for instances with identical or similar predictions. Consistency is crucial for maintaining trust in explanations across different regions of the decision space.
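A minimal sketch of such a check, under a hypothetical setup where explanations are the gradients of a linear model (constant everywhere, so the measure should report perfect consistency):

```python
import numpy as np

rng = np.random.default_rng(7)
X = rng.uniform(0.0, 1.0, size=(50, 2))

# Hypothetical setup: linear model, gradient attributions as explanations.
w = np.array([2.0, -1.0])
predict = lambda X: X @ w
explain = lambda X: np.tile(w, (len(X), 1))  # gradients of a linear model

def consistency(X, pred_tol=0.1):
    """Mean pairwise explanation distance among instances whose predictions
    differ by less than pred_tol (0.0 = perfectly consistent)."""
    preds, expl = predict(X), explain(X)
    dists = [np.linalg.norm(expl[i] - expl[j])
             for i in range(len(X)) for j in range(i + 1, len(X))
             if abs(preds[i] - preds[j]) < pred_tol]
    return float(np.mean(dists)) if dists else 0.0

print(consistency(X))  # 0.0: constant gradients are maximally consistent
```

A nonlinear explainer would yield nonzero distances here; large values for near-identical predictions are the inconsistency this measure is designed to expose.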