AI Glossary
The complete dictionary of Artificial Intelligence
Local Interpretable Model-agnostic Explanations (LIME)
An explanation technique that approximates a model locally with a simple interpretable surrogate in order to understand how the model makes a specific, individual prediction.
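The core idea can be sketched in a few lines: sample perturbations around the instance, weight them by proximity, and fit a weighted linear surrogate. This is a minimal one-dimensional sketch, not the actual LIME library; the model `black_box` and all parameter values are illustrative assumptions.

```python
import math
import random

def black_box(x):
    # Hypothetical non-linear model we want to explain locally.
    return x * x

def lime_1d(model, x0, n_samples=500, sigma=0.1, kernel_width=0.25):
    """Fit a locally weighted linear surrogate around x0 (LIME's core idea)."""
    random.seed(0)
    xs = [x0 + random.gauss(0.0, sigma) for _ in range(n_samples)]
    ys = [model(x) for x in xs]
    # Proximity kernel: perturbations closer to x0 get more weight.
    ws = [math.exp(-((x - x0) ** 2) / kernel_width ** 2) for x in xs]
    sw = sum(ws)
    mx = sum(w * x for w, x in zip(ws, xs)) / sw
    my = sum(w * y for w, y in zip(ws, ys)) / sw
    num = sum(w * (x - mx) * (y - my) for w, x, y in zip(ws, xs, ys))
    den = sum(w * (x - mx) ** 2 for w, x in zip(ws, xs))
    slope = num / den
    return slope, my - slope * mx

slope, intercept = lime_1d(black_box, x0=3.0)
# The local slope approximates the derivative of x^2 at x0=3, i.e. roughly 6.
```

The slope of the surrogate is the "explanation": it tells us how the black box responds to this feature in the neighborhood of the instance being explained.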
SHAP (SHapley Additive exPlanations)
A game-theoretic approach that quantifies the contribution of each feature to a model's prediction, attributing the output fairly across features via Shapley values.
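For very few features, Shapley values can be computed exactly by averaging each feature's marginal contribution over all orderings. This sketch uses a hypothetical linear model (not the shap library, which approximates this efficiently for real models).

```python
from itertools import permutations

def shapley_values(model, x, baseline):
    """Exact Shapley values: average each feature's marginal contribution
    over all orderings of the features (tractable only for few features)."""
    n = len(x)
    phi = [0.0] * n
    perms = list(permutations(range(n)))
    for order in perms:
        current = list(baseline)   # start from the baseline input
        prev = model(current)
        for i in order:
            current[i] = x[i]      # reveal feature i
            now = model(current)
            phi[i] += now - prev   # marginal contribution of feature i
            prev = now
    return [p / len(perms) for p in phi]

# Hypothetical linear scoring model: each attribution should equal the
# coefficient times the feature's shift away from the baseline.
model = lambda v: 2.0 * v[0] + 3.0 * v[1]
phi = shapley_values(model, x=[1.0, 1.0], baseline=[0.0, 0.0])
# phi is [2.0, 3.0]: attributions sum exactly to the prediction minus baseline
```

The additivity property shown in the last comment (attributions sum to the difference between prediction and baseline) is what makes SHAP explanations internally consistent.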
Feature Analysis
Systematic study of the importance, relevance, and impact of input variables on model predictions.
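One standard way to measure a feature's importance is permutation importance: shuffle one input column and record how much the model's accuracy drops. A minimal sketch, with a hypothetical dataset where feature 0 fully determines the label and feature 1 is noise:

```python
import random

def accuracy(model, X, y):
    return sum(model(row) == label for row, label in zip(X, y)) / len(y)

def permutation_importance(model, X, y, feature, seed=0):
    """Importance of a feature = accuracy drop after shuffling that column."""
    base = accuracy(model, X, y)
    col = [row[feature] for row in X]
    random.Random(seed).shuffle(col)
    X_perm = [row[:feature] + [v] + row[feature + 1:] for row, v in zip(X, col)]
    return base - accuracy(model, X_perm, y)

# Hypothetical data: feature 0 carries all the signal, feature 1 is random.
rng = random.Random(42)
X = [[rng.random(), rng.random()] for _ in range(200)]
y = [int(row[0] > 0.5) for row in X]
model = lambda row: int(row[0] > 0.5)

imp0 = permutation_importance(model, X, y, feature=0)
imp1 = permutation_importance(model, X, y, feature=1)
# imp0 is large (shuffling destroys the signal); imp1 is exactly 0.0
```

Because the model never reads feature 1, permuting it cannot change any prediction, so its importance is exactly zero; shuffling feature 0 collapses accuracy toward chance.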
Model Visualization
A set of graphical techniques for visually representing the internal behavior and decisions of AI models.
Counterfactual Analysis
Generating alternative scenarios to understand which minimal changes to inputs would change the model's prediction.
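A toy counterfactual search can be written as a line search over one input: find the smallest change that flips the decision. The credit model, the applicant values, and the step size below are all illustrative assumptions.

```python
def approve(income, debt):
    # Hypothetical credit model: approve when income exceeds debt by 10.
    return income - debt > 10

def counterfactual_income(income, debt, step=0.5, max_steps=1000):
    """Smallest income increase (in `step` increments) that flips the decision."""
    delta = 0.0
    for _ in range(max_steps):
        if approve(income + delta, debt):
            return delta
        delta += step
    return None  # no counterfactual found within the search range

delta = counterfactual_income(income=50.0, debt=45.0)
# The applicant is rejected (50 - 45 = 5); an income increase of 5.5
# is the smallest 0.5-step change that flips the decision.
```

The returned delta is the counterfactual explanation: "had your income been 5.5 higher, the decision would have been approval." Real counterfactual methods search over all features while minimizing the size of the change.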
Interpretation Rules
Extraction of simple and understandable logical rules from complex models such as neural networks.
Post-hoc interpretability
Methods applied after training to explain the decisions of initially non-interpretable models.
Interpretability by design
Approaches where transparency is integrated from the model's design, creating naturally explainable algorithms.
Causal Explanations
Analysis of cause-and-effect relationships in model decisions to go beyond simple correlations.
Algorithmic Bias Analysis
Detection, quantification, and explanation of systematic biases in AI model predictions.
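One common quantification is the demographic parity gap: the difference in positive-decision rates between groups. A minimal sketch over hypothetical predictions for two groups:

```python
def demographic_parity_gap(decisions, groups):
    """Difference between the highest and lowest positive-decision
    rates across groups (0.0 means equal treatment on this metric)."""
    rates = {}
    for g in set(groups):
        idx = [i for i, grp in enumerate(groups) if grp == g]
        rates[g] = sum(decisions[i] for i in idx) / len(idx)
    vals = sorted(rates.values())
    return vals[-1] - vals[0]

# Hypothetical binary decisions for members of two groups, A and B.
decisions = [1, 1, 1, 0, 1, 0, 0, 0, 0, 1]
groups    = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]
gap = demographic_parity_gap(decisions, groups)
# Group A's positive rate is 0.8, group B's is 0.2, so the gap is 0.6
```

Demographic parity is only one of several fairness metrics (others condition on the true label, e.g. equalized odds); a full bias analysis reports several and explains where any gap originates.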
Explanation fidelity metrics
Set of measures to evaluate the quality and accuracy of explanations generated by interpretability techniques.
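A basic fidelity metric is the agreement rate between the interpretable surrogate and the black box over a set of inputs. Both models below are hypothetical stand-ins:

```python
def fidelity(black_box, surrogate, X):
    """Fraction of inputs where the surrogate agrees with the black box."""
    agree = sum(black_box(x) == surrogate(x) for x in X)
    return agree / len(X)

# Hypothetical models: the surrogate mimics a quadratic decision rule with
# a linear threshold, so it misses the negative branch of the black box.
black_box = lambda x: int(x * x > 4)   # positive when |x| > 2
surrogate = lambda x: int(x > 2)       # only captures the positive side
X = [-3, -1, 0, 1, 2, 3]
score = fidelity(black_box, surrogate, X)
# The two models disagree only at x = -3, so fidelity is 5/6
```

A low fidelity score warns that the explanation describes the surrogate, not the model actually being deployed.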
Interpretable decision trees
Tree structures specifically designed to provide transparent and easily understandable decisions.
Saliency Maps
Visualization techniques that highlight the most influential regions or features in the input data.
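For a differentiable model, saliency is typically the magnitude of the output's gradient with respect to each input. A sketch using central finite differences instead of autodiff, with a hypothetical linear scoring function:

```python
def saliency(model, x, eps=1e-4):
    """Approximate |df/dx_i| for each input via central finite differences."""
    grads = []
    for i in range(len(x)):
        up = list(x); up[i] += eps
        dn = list(x); dn[i] -= eps
        grads.append(abs(model(up) - model(dn)) / (2 * eps))
    return grads

# Hypothetical scoring function: feature 1 has the strongest influence.
model = lambda v: 0.5 * v[0] + 4.0 * v[1] + 1.0 * v[2]
s = saliency(model, [0.3, 0.3, 0.3])
# s is approximately [0.5, 4.0, 1.0]: feature 1 is the most salient input
```

For image models the same quantity is computed per pixel and rendered as a heat map over the input, which is where the name "saliency map" comes from.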
Multimodal explanations
Interpretation approaches tailored to models processing multiple data types simultaneously (text, image, audio).
Transparency audit
Systematic and independent evaluation of the transparency, fairness, and reliability of AI models.