AI Glossary
The Complete Dictionary of Artificial Intelligence
Context-Sensitive Interpretability
Approach to AI model explainability that adapts the nature, level of detail, and format of explanations to the specific application domain and characteristics of the target audience.
Contingent Explanation
Method for generating explanations whose content and presentation vary dynamically with the usage context, the user's prior knowledge, and the regulatory constraints of the industry.
Semantic Personalization
Technique for adapting the vocabulary and concepts used in a model's explanations to align with the terminology and reference framework specific to an expertise domain.
Interpretability Profiles
Profiles defining the preferences and comprehension capabilities of different user types (experts, novices, regulators) in order to calibrate the explanations generated by an AI system.
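A minimal Python sketch of how such profiles might be represented in code; the field names and example profiles are illustrative assumptions, not a standard schema.

from dataclasses import dataclass

@dataclass(frozen=True)
class InterpretabilityProfile:
    """Comprehension preferences for one class of users (illustrative fields)."""
    audience: str           # e.g. "expert", "novice", "regulator"
    max_factors: int        # how many influencing factors to show
    technical_terms: bool   # whether raw feature names are acceptable
    show_uncertainty: bool  # whether to surface confidence information

PROFILES = {
    "expert":    InterpretabilityProfile("expert",    max_factors=10, technical_terms=True,  show_uncertainty=True),
    "novice":    InterpretabilityProfile("novice",    max_factors=3,  technical_terms=False, show_uncertainty=False),
    "regulator": InterpretabilityProfile("regulator", max_factors=5,  technical_terms=True,  show_uncertainty=True),
}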
Domain-Specific Analogy
Explanation strategy that uses metaphors and comparisons drawn from the application domain to make complex AI model mechanisms intelligible to a non-technical audience.
Contextual Explanation Window
Delimitation of an explanation's scope to focus only on the variables and interactions relevant to a given decision, based on the operational context.
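As a rough Python sketch, the window can be implemented as a filter over a feature-attribution map; the feature names and attribution values below are hypothetical.

def explanation_window(attributions: dict[str, float],
                       relevant_features: set[str]) -> dict[str, float]:
    """Restrict an attribution map to the features deemed relevant
    in the current operational context (all names illustrative)."""
    return {f: v for f, v in attributions.items() if f in relevant_features}

# Example: in a credit-scoring context, only financial features are in scope.
attributions = {"income": 0.42, "age": 0.05, "zip_code": 0.11, "debt_ratio": -0.30}
window = explanation_window(attributions, relevant_features={"income", "debt_ratio"})
print(window)  # {'income': 0.42, 'debt_ratio': -0.3}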
Explainability Ontology
Formal knowledge structure that maps AI model concepts to entities and relationships of a specific domain, facilitating the generation of consistent and relevant explanations.
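A toy sketch of such an ontology as (subject, relation, object) triples; a production system would typically use an ontology language such as OWL/RDF, and every entity below is invented for illustration.

# Triples linking model features to domain entities and relationships.
ONTOLOGY = [
    ("feature:hba1c", "measures",  "concept:BloodGlucoseControl"),
    ("feature:bmi",   "indicates", "concept:BodyComposition"),
    ("concept:BloodGlucoseControl", "relevant_to", "diagnosis:Type2Diabetes"),
]

def domain_concepts(feature: str) -> list[str]:
    """Follow ontology links from a model feature to domain concepts."""
    return [obj for subj, _, obj in ONTOLOGY if subj == feature]

print(domain_concepts("feature:hba1c"))  # ['concept:BloodGlucoseControl']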
Adaptive Abstraction Levels
Capability of an explanation system to modulate the granularity of details provided, shifting from a macroscopic view of the model's functioning to a microscopic analysis of its components according to user needs.
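A minimal sketch of level dispatch in Python; the three levels ("macro", "meso", "micro") and the wording are assumptions made for illustration.

def explain(prediction: float, attributions: dict[str, float], level: str) -> str:
    """Render the same explanation at different abstraction levels."""
    if level == "macro":                     # one-line summary of the outcome
        return f"The model's score is {prediction:.2f}."
    if level == "meso":                      # top driver only
        top = max(attributions, key=lambda f: abs(attributions[f]))
        return f"Score {prediction:.2f}, driven mainly by '{top}'."
    # "micro": full per-feature breakdown
    details = ", ".join(f"{f}: {v:+.2f}" for f, v in attributions.items())
    return f"Score {prediction:.2f}; contributions -> {details}"

print(explain(0.82, {"income": 0.4, "debt_ratio": -0.1}, level="meso"))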
Multi-Audience Explanation
Simultaneous generation of multiple versions of the same explanation, each tailored for a distinct audience type (clinician, patient, administrator) while maintaining semantic consistency.
Terminological Anchoring
Process of linking a model's technical characteristics (features, weights) to concrete and familiar concepts and terms from the application domain to improve the readability of explanations.
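A minimal sketch, assuming a hand-maintained lexicon from raw feature names to domain terms; both the lexicon entries and the feature names are hypothetical.

# Illustrative lexicon linking raw feature names to clinician-facing terms.
LEXICON = {
    "hr_var_idx": "heart-rate variability",
    "qtc_ms":     "corrected QT interval (ms)",
}

def anchor_terms(attributions: dict[str, float]) -> dict[str, float]:
    """Re-key an attribution map with familiar domain terminology,
    leaving unmapped features unchanged."""
    return {LEXICON.get(f, f): v for f, v in attributions.items()}

print(anchor_terms({"hr_var_idx": 0.31, "qtc_ms": -0.12}))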
Explanation Scenarization
Method that structures explanations in the form of a narrative or scenario adapted to the typical workflow and decision-making processes of the target application domain.
Contextual Relevance Filter
Mechanism that evaluates and selects the most significant influencing factors for a specific prediction, based on relevance criteria defined by the business context.
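One plausible implementation scores each factor by its attribution magnitude scaled by a business-defined weight; the weighting scheme and all names below are assumptions, not a standard method.

def relevance_filter(attributions: dict[str, float],
                     business_weight: dict[str, float],
                     top_k: int = 3) -> list[tuple[str, float]]:
    """Rank factors by |attribution| scaled by a business-defined weight,
    then keep the top_k most relevant ones."""
    scored = {f: abs(v) * business_weight.get(f, 1.0)
              for f, v in attributions.items()}
    ranked = sorted(scored.items(), key=lambda kv: kv[1], reverse=True)
    return ranked[:top_k]

# E.g. zero out a proxy variable the business context rules out.
weights = {"zip_code": 0.0, "income": 1.0, "debt_ratio": 1.5}
print(relevance_filter({"income": 0.4, "zip_code": 0.3, "debt_ratio": -0.2}, weights))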
Conditional Explanation Generator
System that produces explanations whose form and content are conditioned by business rules, ethical constraints, and the level of risk associated with the model's decision.
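A simplified sketch of such conditioning in Python; the risk levels and the rule set are illustrative, and a real system would draw them from governance policy rather than hard-code them.

def generate_explanation(decision: str, risk: str, factors: list[str]) -> str:
    """Condition the explanation's depth on the decision's risk level."""
    if risk == "high":
        # High-risk decisions get full factor disclosure and an escalation note.
        return (f"Decision: {decision}. Factors: {', '.join(factors)}. "
                "This decision requires human review before action.")
    if risk == "medium":
        return f"Decision: {decision}. Main factor: {factors[0]}."
    return f"Decision: {decision}."          # low risk: outcome only

print(generate_explanation("loan denied", "high", ["debt_ratio", "payment_history"]))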
Concept Mapping for AI
Visualization tool that represents the relationships between a model's input variables and key domain concepts, enabling intuitive interpretation by business experts.
Role-Guided Explanation
Approach where the content and purpose of the explanation are determined by the user's functional role within their organization (e.g., validation, audit, corrective action).
Pragmatic Explanation Adaptation
Adjustment of explanations so that they are not only understandable but also directly usable in the actions and decisions specific to the application domain.
Domain-Specific Explanation Language (DSL)
A formal or informal language, with its own syntax and grammar, designed to express the reasoning of an AI model in a way that is both natural and precise for practitioners in a specialized field.
Contextual Confidence Calibration
A method for adjusting the presentation of uncertainties and confidence levels of a model based on risk thresholds and accepted standards of evidence in a given domain.
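A minimal sketch, assuming confidence is reported as qualitative bands whose boundaries differ by domain; all thresholds and labels below are invented for illustration.

def present_confidence(p: float, thresholds: list[tuple[float, str]]) -> str:
    """Translate a raw model confidence into the qualitative bands
    accepted in a given domain (first matching cutoff wins)."""
    for cutoff, label in thresholds:
        if p >= cutoff:
            return label
    return "insufficient evidence"

# A cautious clinical scale vs. a looser marketing scale.
CLINICAL  = [(0.95, "high confidence"), (0.80, "moderate confidence")]
MARKETING = [(0.70, "likely"),          (0.50, "possible")]
print(present_confidence(0.85, CLINICAL))   # moderate confidence
print(present_confidence(0.85, MARKETING))  # likely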