AI Glossary
A complete glossary of artificial intelligence
Model-agnostic methods
Interpretation techniques that can be applied to any type of AI model without requiring knowledge of its internal architecture.
Model-specific methods
Interpretation approaches designed for specific types of models, leveraging their internal structure to provide more accurate explanations.
Input perturbation
Technique involving slight modifications to input data to observe the impact on model predictions and identify the most influential features.
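A minimal sketch of this idea, assuming a hypothetical black-box model (here a fixed linear scorer, invented for illustration): nudge one feature at a time and record how much the prediction moves.

```python
def model(x):
    # Hypothetical black-box model: a fixed linear scorer (illustration only).
    return 3.0 * x[0] + 0.5 * x[1] - 1.0 * x[2]

def perturbation_effect(model, x, i, eps=0.1):
    """Perturb feature i by eps and return the shift in the prediction."""
    xp = list(x)
    xp[i] += eps
    return model(xp) - model(x)

x = [1.0, 2.0, 3.0]
effects = [perturbation_effect(model, x, i) for i in range(3)]
# The largest absolute shift flags the most influential feature for this input.
```

Because only the model's inputs and outputs are used, the same probe works on any predictor, which is what makes perturbation a model-agnostic tool.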
Feature attribution
Process of assigning importance scores to each input variable to quantify its contribution to a specific model prediction.
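One simple attribution scheme is occlusion: replace each feature with a baseline value and credit the feature with the resulting drop in the prediction. A sketch, assuming a hypothetical linear model and a zero baseline (both invented for illustration):

```python
def model(x):
    # Hypothetical black-box model (illustration only).
    return 2.0 * x[0] + 1.0 * x[1] + 0.5 * x[2]

def attribute(model, x, baseline):
    """Occlusion attribution: feature i's score is the drop in the
    prediction when x[i] is replaced by its baseline value."""
    full = model(x)
    scores = []
    for i in range(len(x)):
        occluded = list(x)
        occluded[i] = baseline[i]
        scores.append(full - model(occluded))
    return scores

scores = attribute(model, [1.0, 1.0, 1.0], [0.0, 0.0, 0.0])
# scores == [2.0, 1.0, 0.5]: each feature's contribution to this one prediction
```

Note the scores explain a single prediction; aggregating them across a dataset leads to global feature importance, defined later in this glossary.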
Partial dependence plots
Visualization showing the marginal effect of one or two features on the model's prediction, obtained by averaging out the effects of the remaining variables over the dataset.
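The averaging step can be sketched directly. For each value on a grid, the chosen feature is fixed to that value in every row of the dataset and the predictions are averaged; the model and data below are hypothetical placeholders:

```python
def model(x):
    # Hypothetical black-box model (illustration only).
    return 2.0 * x[0] + x[1]

def partial_dependence(model, data, feature, grid):
    """For each grid value v, set the feature to v in every row and
    average the predictions, marginalizing the other variables."""
    curve = []
    for v in grid:
        preds = []
        for row in data:
            r = list(row)
            r[feature] = v
            preds.append(model(r))
        curve.append(sum(preds) / len(preds))
    return curve

data = [[0.0, 1.0], [0.0, 3.0]]
curve = partial_dependence(model, data, 0, [0.0, 1.0, 2.0])
# curve == [2.0, 4.0, 6.0]: the averaged prediction as feature 0 varies
```

Plotting `grid` against `curve` gives the partial dependence plot itself.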
Saliency maps
Visual representations highlighting the most influential regions or pixels in input data (especially images) for a given prediction.
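In practice saliency is usually computed from gradients; a model-agnostic stand-in is a finite-difference estimate of how sensitive the prediction is to each pixel. A sketch on a hypothetical 2x2 "image" scorer (invented for illustration):

```python
def model(img):
    # Hypothetical image scorer: weights a 2x2 "image" by a fixed kernel.
    w = [[1.0, 0.0], [0.0, 2.0]]
    return sum(img[r][c] * w[r][c] for r in range(2) for c in range(2))

def saliency(model, img, eps=1e-3):
    """Finite-difference saliency: |d prediction / d pixel| per pixel."""
    base = model(img)
    out = [[0.0, 0.0], [0.0, 0.0]]
    for r in range(2):
        for c in range(2):
            bumped = [row[:] for row in img]
            bumped[r][c] += eps
            out[r][c] = abs(model(bumped) - base) / eps
    return out

sal = saliency(model, [[0.5, 0.5], [0.5, 0.5]])
# sal ≈ [[1.0, 0.0], [0.0, 2.0]]: pixel (1, 1) influences the score most
```

Rendering `sal` as a heatmap over the input produces the saliency map.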
Rule extraction
Technique for extracting understandable logical rules from a complex model, approximating its behavior with simple if-then conditions.
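A toy version of this, assuming a hypothetical black-box classifier: probe the model on sample points and search for the single threshold rule "predict 1 if x > t" that best reproduces its answers.

```python
def black_box(x):
    # Hypothetical complex model, reduced to a 1-D classifier for the sketch.
    return 1 if 0.7 * x + 0.1 > 0.5 else 0

def extract_rule(model, samples):
    """Try each sample value as a threshold and keep the 'x > t' rule
    that agrees with the black box most often."""
    best_t, best_acc = None, -1.0
    for t in samples:
        acc = sum((1 if x > t else 0) == model(x) for x in samples) / len(samples)
        if acc > best_acc:
            best_t, best_acc = t, acc
    return best_t, best_acc

samples = [i / 10 for i in range(11)]
t, acc = extract_rule(black_box, samples)
# yields a human-readable rule of the form "predict 1 if x > t"
```

Real rule extraction searches over conjunctions of many such conditions, but the principle is the same: trade a little fidelity for rules a person can read.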
Global feature importance
Aggregated measure of each variable's influence on all model predictions, enabling identification of overall determining factors.
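A common way to compute this is permutation importance: permute one feature column across the whole dataset and measure how much the model's error rises. A sketch with a hypothetical model and a deterministic rotation standing in for a random shuffle:

```python
def model(x):
    # Hypothetical black-box model (illustration only).
    return 4.0 * x[0] + 1.0 * x[1]

def permutation_importance(model, X, y, feature):
    """Permute one feature column (here: rotate it, a deterministic
    stand-in for a random shuffle) and return the rise in mean squared error."""
    def mse(rows):
        return sum((model(r) - t) ** 2 for r, t in zip(rows, y)) / len(y)
    base = mse(X)
    col = [r[feature] for r in X]
    col = col[1:] + col[:1]  # deterministic permutation of the column
    Xp = [list(r) for r in X]
    for r, v in zip(Xp, col):
        r[feature] = v
    return mse(Xp) - base

X = [[0.0, 0.0], [1.0, 1.0]]
y = [model(r) for r in X]
imps = [permutation_importance(model, X, y, f) for f in (0, 1)]
# Permuting the heavily weighted feature 0 degrades the error far more.
```

Unlike the per-prediction attribution above, this score is aggregated over the whole dataset, which is what makes it a global measure.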
Critical path analysis
Identification of decision or feature sequences that lead to specific predictions, particularly useful in deep neural networks.
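For tree-based models the idea is concrete: record the sequence of feature tests an input crosses on its way to a prediction. A sketch over a hypothetical tiny decision tree (the structure and labels are invented for illustration):

```python
# Hypothetical tiny decision tree: each internal node tests one feature.
tree = {
    "feature": 0, "threshold": 0.5,
    "left": {"leaf": "reject"},
    "right": {
        "feature": 1, "threshold": 2.0,
        "left": {"leaf": "review"},
        "right": {"leaf": "approve"},
    },
}

def decision_path(node, x, path=None):
    """Follow the tree for input x, recording each test taken."""
    path = [] if path is None else path
    if "leaf" in node:
        return node["leaf"], path
    branch = "right" if x[node["feature"]] > node["threshold"] else "left"
    path.append((node["feature"], node["threshold"], branch))
    return decision_path(node[branch], x, path)

label, path = decision_path(tree, [0.9, 3.0])
# path lists the tests crossed: feature 0 > 0.5, then feature 1 > 2.0
```

In deep neural networks the analogous analysis traces which neurons or layers carried the signal, but the output is the same kind of artifact: a path explaining one decision.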
Proxy models
Simple, interpretable models (also called surrogate models) trained to imitate the behavior of a complex model, serving as an approximation that explains its decisions.
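The training target of a proxy is the complex model's own predictions, not the original labels. A sketch, assuming a hypothetical nonlinear black box and fitting a one-variable linear proxy by ordinary least squares:

```python
def complex_model(x):
    # Hypothetical black box; here a smooth nonlinear function.
    return x ** 2

def fit_proxy(model, xs):
    """Fit a linear proxy y ~ a*x + b to the black box's own predictions
    by ordinary least squares."""
    ys = [model(x) for x in xs]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    b = my - a * mx
    return a, b

a, b = fit_proxy(complex_model, [0.0, 1.0, 2.0, 3.0])
# The line a*x + b approximates the black box and is easy to explain.
```

The proxy's coefficients are then read as an explanation of the black box, with the caveat that the explanation is only as faithful as the proxy's fit.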