
AI Glossary

The complete dictionary of artificial intelligence

162 categories · 2,032 subcategories · 23,060 terms

Explanation Fidelity

Metric quantifying the correspondence between the black-box model's predictions and those of the interpretable model used to generate explanations. High fidelity indicates that the explanation faithfully represents the original model's behavior in the local region under consideration.
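As an illustration, a minimal sketch of how local fidelity could be estimated, assuming a toy black-box rule and a hand-picked linear surrogate (both hypothetical): sample points in a small neighborhood of an instance and count how often the two models agree.

```python
import random

def black_box(x):
    # stand-in for an opaque nonlinear classifier (assumption)
    return 1 if x[0] * x[0] + x[1] > 1.0 else 0

def surrogate(x):
    # linear rule that, by assumption, approximates black_box near [1.0, 0.0]
    return 1 if 2.0 * x[0] + x[1] > 2.0 else 0

def local_fidelity(instance, n=1000, radius=0.1, seed=0):
    """Fraction of neighborhood samples on which surrogate == black box."""
    rng = random.Random(seed)
    agree = 0
    for _ in range(n):
        x = [xi + rng.uniform(-radius, radius) for xi in instance]
        agree += black_box(x) == surrogate(x)
    return agree / n

print(local_fidelity([1.0, 0.0]))  # near 1.0 when the surrogate is locally faithful
```

The surrogate here is the tangent-line approximation of the black box's decision boundary at the instance, so agreement is high close to that point and degrades as the sampling radius grows.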

Explanation Stability

Indicator assessing the consistency of explanations generated for similar instances or for the same instance with slight variations. Stability ensures that explanations do not vary erratically in response to minor changes in input data.
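One way to make this concrete, under the assumption of a toy differentiable model with gradient-based attributions: perturb the input slightly and report the worst-case distance between the resulting explanations.

```python
import random

def model(x):
    # toy score function (assumption): a fixed weighted sum
    return 3.0 * x[0] - 2.0 * x[1] + 0.5 * x[2]

def attribution(x, eps=1e-5):
    """Finite-difference gradient as a simple feature-attribution explanation."""
    base = model(x)
    grads = []
    for i in range(len(x)):
        xp = list(x)
        xp[i] += eps
        grads.append((model(xp) - base) / eps)
    return grads

def stability(x, n=20, noise=0.01, seed=0):
    """Max L2 distance between explanations of slightly perturbed inputs."""
    rng = random.Random(seed)
    ref = attribution(x)
    worst = 0.0
    for _ in range(n):
        xp = [xi + rng.uniform(-noise, noise) for xi in x]
        d = sum((a - b) ** 2 for a, b in zip(ref, attribution(xp))) ** 0.5
        worst = max(worst, d)
    return worst

print(stability([1.0, 2.0, 3.0]))  # ~0 for a linear model: explanations are stable
```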

Explanatory Completeness

Metric measuring an explanation's ability to capture all relevant factors influencing the model's decision. A complete explanation should integrate all significant features without omitting crucial elements.

Semantic Relevance

Evaluation of the consistency between the generated explanation and domain knowledge or expected human reasoning. This metric quantifies whether the produced explanations align with domain experts' logic and intuition.

Explanation Compactness

Measure of an explanation's conciseness, assessing the ratio between the amount of information provided and its structural complexity. A compact explanation prioritizes the most relevant elements while minimizing informational redundancy.
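A simple sparsity-based reading of compactness, assuming the explanation is a feature-attribution vector (a hypothetical example): the share of features the explanation effectively ignores.

```python
def compactness(attributions, tol=1e-6):
    """Share of features with negligible attribution (higher = more compact)."""
    used = sum(1 for a in attributions if abs(a) > tol)
    return 1.0 - used / len(attributions)

# Hypothetical attribution vector: only 2 of 8 features carry weight.
print(compactness([0.7, 0.0, 0.0, 0.3, 0.0, 0.0, 0.0, 0.0]))  # 0.75
```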

Explanatory Robustness

Ability of an explanation to maintain its validity in the face of perturbations or adversarial attacks on input data. This metric evaluates how well explanations resist malicious manipulations designed to mislead users.

Granularity Level

Level of detail at which an explanation operates, ranging from global explanations (entire model) to local explanations (specific instance). Granularity determines the precision and specificity of the provided interpretation.

Inter-explanation Consistency

Metric evaluating the logical consistency between different explanations generated for varied but semantically similar instances. This metric ensures that explanations follow reasonable and non-contradictory patterns.

Perceived Usability

Qualitative indicator measuring the ease with which users can understand, interpret, and apply the generated explanations. Perceived usability assesses how well the explanation's technical complexity matches users' cognitive abilities.

Explanatory Verifiability

Ability to independently confirm or refute the validity of the explanations provided by the model. Verifiability allows users to validate the consistency of explanations against external knowledge or empirical tests.

Explanation Gap

Quantitative difference between the inherent complexity of the model and the simplicity of its explanation. A high gap may indicate significant information loss during the explanatory simplification process.

Causal Specificity

Measure assessing whether an explanation correctly identifies cause-effect relationships rather than mere correlations. Causal specificity distinguishes factors that actually influence the decision from those that are merely co-occurring.
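The correlation-versus-causation distinction can be sketched with an interventional test, assuming a toy model whose output depends only on one of two features even though the features co-occur in the training data (a hypothetical setup):

```python
def model(x1, x2):
    # toy model (assumption): output depends only on x1, even if x2 was
    # highly correlated with x1 in the data it was trained on
    return 2.0 * x1

def interventional_effect(f, base, feature, delta=1.0):
    """Change in output under do(feature += delta), other inputs held fixed."""
    x1, x2 = base
    if feature == 0:
        return f(x1 + delta, x2) - f(x1, x2)
    return f(x1, x2 + delta) - f(x1, x2)

print(interventional_effect(model, (1.0, 1.0), 0))  # x1 is genuinely causal
print(interventional_effect(model, (1.0, 1.0), 1))  # x2 merely co-occurs
```

A causally specific explanation should attribute the decision to x1 (nonzero interventional effect) and not to the merely correlated x2 (zero effect).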

Explanatory Generalization

Ability of a local explanation to apply consistently to other similar instances in the dataset. This metric evaluates whether the identified explanatory patterns can be extrapolated beyond the specific case studied.

Explanatory Confidence

Quantified level of certainty associated with an explanation, indicating the probability that the explanation is correct. Explanatory confidence allows users to assess the reliability of the interpretations provided by the system.

Explanation Fairness

Metric evaluating whether the generated explanations treat different demographic groups or subpopulations fairly. Explanation fairness ensures the absence of discriminatory bias in how decisions are justified.

Explanatory Coverage

Proportion of the feature space or instances for which the model can generate valid explanations. High coverage ensures that the explanation system can operate on the majority of cases encountered in practice.
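Coverage reduces to a simple proportion. A minimal sketch, assuming a hypothetical explainer that only supports instances inside its validated input range:

```python
def try_explain(x):
    # stand-in explainer (assumption): returns None when the instance
    # falls outside the range it can validly explain
    if not all(0.0 <= xi <= 1.0 for xi in x):
        return None
    return {"weights": [0.5 for _ in x]}

def coverage(instances):
    """Proportion of instances for which a valid explanation is produced."""
    explained = sum(1 for x in instances if try_explain(x) is not None)
    return explained / len(instances)

data = [[0.2, 0.3], [0.9, 0.1], [1.5, 0.4], [0.5, 0.5]]
print(coverage(data))  # 3 of 4 instances covered -> 0.75
```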

Explanatory Latency

Computational time required to generate an explanation after the model has produced its prediction. This metric is crucial for real-time applications where explanations must be provided quickly.
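Measuring latency is a matter of timing the explanation step alone, after the prediction is available. A sketch using a monotonic clock, with a hypothetical `explain` routine whose work is simulated by a short sleep:

```python
import time

def explain(prediction):
    # stand-in explanation routine (assumption): simulate some work
    time.sleep(0.005)
    return {"top_feature": "x0", "weight": 0.8}

start = time.perf_counter()
_ = explain(1)
latency_s = time.perf_counter() - start
print(f"explanation latency: {latency_s * 1000:.1f} ms")
```

`time.perf_counter()` is preferred over `time.time()` here because it is monotonic and has the highest available resolution for interval timing.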

Counterfactual Fidelity

Specific measure evaluating the quality of counterfactual explanations in terms of the minimality of required changes and the plausibility of generated scenarios. This metric ensures that the proposed counterfactuals are realistic and actionable.
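The minimality criterion can be sketched as a search for the smallest single-feature change that flips the decision, assuming a hypothetical integer-valued loan model (real counterfactual methods also check plausibility against the data distribution):

```python
def black_box(x):
    # hypothetical loan model (assumption): approve (1) when
    # income units + credit units exceed 100
    return 1 if x[0] + x[1] > 100 else 0

def minimal_counterfactual(x, step=5, max_steps=40):
    """Smallest single-feature increase (L1 cost) that flips the decision."""
    original = black_box(x)
    best = None
    for i in range(len(x)):
        for k in range(1, max_steps + 1):
            cand = list(x)
            cand[i] += k * step
            if black_box(cand) != original:
                cost = k * step
                if best is None or cost < best[1]:
                    best = (cand, cost)
                break  # first flip along this feature is the cheapest
    return best

cf, cost = minimal_counterfactual([40, 40])
print(cf, cost)  # a minimal change of 25 units flips the decision
```

A high-fidelity counterfactual explainer would prefer such small, actionable edits over large or implausible ones (e.g. negative income).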
