AI Glossary
The complete dictionary of Artificial Intelligence
Model Drift
Gradual degradation of an AI model's performance in production caused by changes in the input data or in the relationships between variables. Drift requires continuous monitoring and, potentially, model retraining.
Data Drift
Statistical change in the distribution of a model's input data compared to the original training data. This phenomenon can negatively affect predictions and requires proactive detection.
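As a minimal sketch of one common detection approach, the example below uses a two-sample Kolmogorov-Smirnov test from SciPy to compare a single feature's training distribution against recent production values; the 0.05 significance level and the synthetic data are illustrative assumptions, not a standard.

```python
import numpy as np
from scipy.stats import ks_2samp

def detect_data_drift(train_values, prod_values, alpha: float = 0.05) -> bool:
    """Flag drift on one numeric feature with a two-sample KS test."""
    result = ks_2samp(train_values, prod_values)
    # A small p-value means the two samples are unlikely to come from
    # the same distribution, i.e. the feature has drifted.
    return result.pvalue < alpha

# Example: training-time values vs. recent traffic with a shifted mean
rng = np.random.default_rng(0)
train = rng.normal(loc=0.0, scale=1.0, size=5_000)
recent = rng.normal(loc=0.4, scale=1.0, size=5_000)
print(detect_data_drift(train, recent))  # True -> drift detected
```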
Performance Monitoring
Continuous monitoring of model performance metrics in production, including accuracy, precision, recall, and other relevant KPIs. This monitoring allows for quick detection of performance anomalies.
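A hedged sketch of recomputing core classification KPIs on a labeled sample of production predictions with scikit-learn; the 0.90 minimum accuracy and the toy labels are illustrative assumptions.

```python
from sklearn.metrics import accuracy_score, precision_score, recall_score

def compute_production_metrics(y_true, y_pred) -> dict:
    """Recompute core classification KPIs on a labeled production sample."""
    return {
        "accuracy": accuracy_score(y_true, y_pred),
        "precision": precision_score(y_true, y_pred, zero_division=0),
        "recall": recall_score(y_true, y_pred, zero_division=0),
    }

# Illustrative check against an assumed minimum acceptable accuracy of 0.90
metrics = compute_production_metrics([1, 0, 1, 1, 0], [1, 0, 0, 1, 0])
if metrics["accuracy"] < 0.90:
    print("Performance below threshold:", metrics)
```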
Model Explainability
The ability to understand and interpret AI model decisions, essential for trust and regulatory compliance. Techniques like SHAP or LIME enable explanation of individual predictions.
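As a sketch of per-prediction explanation, the shap library can attribute an individual prediction to input features; the model, dataset, and single-row selection below are placeholders assuming a fitted tree-based classifier.

```python
import shap
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

# Train a simple model (placeholder for any fitted tree-based estimator)
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

# Explain one prediction: each SHAP value is that feature's contribution
# to pushing the model output away from the baseline expectation.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X.iloc[[0]])
print(shap_values)
```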
Feature Importance Tracking
Continuous monitoring of the relative importance of features used by the model for its predictions. This monitoring helps identify changes in the model's decision patterns.
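A brief sketch comparing feature importances recomputed in production against those recorded at training time; the feature names, values, and 0.1 shift threshold are all illustrative assumptions.

```python
def importance_shift(baseline: dict, current: dict, threshold: float = 0.1) -> dict:
    """Return features whose relative importance moved by more than `threshold`."""
    return {
        name: current.get(name, 0.0) - baseline[name]
        for name in baseline
        if abs(current.get(name, 0.0) - baseline[name]) > threshold
    }

baseline = {"age": 0.40, "income": 0.35, "tenure": 0.25}   # recorded at training
current = {"age": 0.18, "income": 0.52, "tenure": 0.30}    # recomputed in production
print(importance_shift(baseline, current))  # roughly {'age': -0.22, 'income': 0.17}
```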
Prediction Confidence Score
Quantitative metric indicating the model's level of certainty about each individual prediction. Low confidence scores can signal risky predictions requiring human intervention.
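A minimal illustration using scikit-learn's predict_proba to route low-confidence cases to human review; the 0.7 cutoff, the toy dataset, and the logistic regression model are assumptions made only for the example.

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

def route_by_confidence(model, X, threshold: float = 0.7):
    """Split predictions into auto-accepted and human-review buckets."""
    probabilities = model.predict_proba(X)
    confidence = probabilities.max(axis=1)        # certainty of the top class
    return probabilities.argmax(axis=1), confidence < threshold

# Illustrative model and data; the 0.7 cutoff is an assumption, not a standard
X, y = make_classification(n_samples=500, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X, y)
predictions, needs_review = route_by_confidence(model, X)
print(f"{needs_review.sum()} of {len(X)} predictions routed to human review")
```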
Model Degradation
Gradual loss of effectiveness of a model in production due to various factors such as data aging or an evolving business context. Degradation requires proactive model maintenance.
Real-time Inference Monitoring
Instant monitoring of predictions and performance metrics during real-time inference. This monitoring enables immediate detection of anomalies and system failures.
Alerting System
Automated infrastructure generating notifications when model metrics exceed predefined thresholds. Alerts enable rapid intervention before degradation significantly impacts the business.
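A toy sketch of threshold-based alerting; the metric names, threshold values, and the notify function are hypothetical placeholders for whatever notification channel (email, Slack, PagerDuty) is actually used.

```python
# Hypothetical thresholds; real values come from the model's baseline metrics
THRESHOLDS = {"accuracy": 0.85, "p95_latency_ms": 250.0}

def notify(message: str) -> None:
    # Placeholder: replace with an email, Slack, or PagerDuty integration
    print(f"[ALERT] {message}")

def check_metrics(current: dict) -> None:
    """Emit an alert for every metric that violates its threshold."""
    if current["accuracy"] < THRESHOLDS["accuracy"]:
        notify(f"Accuracy dropped to {current['accuracy']:.3f}")
    if current["p95_latency_ms"] > THRESHOLDS["p95_latency_ms"]:
        notify(f"p95 latency rose to {current['p95_latency_ms']:.0f} ms")

check_metrics({"accuracy": 0.81, "p95_latency_ms": 310.0})
```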
Baseline Metrics
Performance benchmarks established during model validation serving as a comparison point for production monitoring. These baselines allow objective quantification of performance degradation.
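A tiny sketch of quantifying degradation relative to a stored baseline; the accuracy figures and the 5% tolerance are illustrative assumptions.

```python
def relative_degradation(baseline: float, current: float) -> float:
    """Fractional drop of a metric relative to its validation-time baseline."""
    return (baseline - current) / baseline

baseline_accuracy = 0.92           # recorded during model validation
current_accuracy = 0.86            # measured on recent production data
drop = relative_degradation(baseline_accuracy, current_accuracy)
print(f"Degradation: {drop:.1%}")  # roughly 6.5%
if drop > 0.05:                    # illustrative 5% tolerance
    print("Exceeds tolerance: investigate or retrain")
```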
Canary Deployment
Progressive deployment strategy where the new model is tested on a small percentage of traffic before full deployment. This method minimizes risks associated with new model versions.
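A simplified sketch of deterministically routing a small share of traffic to the candidate model; real systems usually handle this at the load-balancer or serving-platform level, so the model names and 5% share here are illustrative only.

```python
import hashlib

def pick_model(request_id: str, canary_share: float = 0.05) -> str:
    """Deterministically route ~5% of requests to the canary model."""
    digest = hashlib.sha256(request_id.encode()).hexdigest()
    bucket = int(digest, 16) % 100                 # stable bucket in [0, 100)
    return "canary_model" if bucket < canary_share * 100 else "stable_model"

print(pick_model("user-42-session-7"))
```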
Observability Pipeline
Infrastructure for collecting, processing, and storing logs, metrics, and traces of models in production. This pipeline provides complete visibility into system behavior.
Drift Detection Algorithm
A statistical or machine learning algorithm that automatically identifies changes in data distributions or model performance. These tools enable proactive drift detection.
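One widely used statistic is the Population Stability Index (PSI); the sketch below bins the reference and production samples of a feature and sums each bin's contribution. The 0.2 alert level is a common rule of thumb, not a hard standard, and the synthetic data is illustrative.

```python
import numpy as np

def population_stability_index(reference, production, bins: int = 10) -> float:
    """PSI between a reference and a production sample of one feature."""
    edges = np.histogram_bin_edges(reference, bins=bins)
    ref_pct = np.histogram(reference, bins=edges)[0] / len(reference)
    prod_pct = np.histogram(production, bins=edges)[0] / len(production)
    # Avoid division by zero and log(0) in empty bins
    ref_pct = np.clip(ref_pct, 1e-6, None)
    prod_pct = np.clip(prod_pct, 1e-6, None)
    return float(np.sum((prod_pct - ref_pct) * np.log(prod_pct / ref_pct)))

rng = np.random.default_rng(1)
psi = population_stability_index(rng.normal(size=10_000),
                                 rng.normal(0.3, 1.0, 10_000))
print(psi > 0.2)  # rule of thumb: PSI > 0.2 suggests significant drift
```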
Model Health Dashboard
Centralized visual interface displaying key performance metrics, alerts, and the overall health status of models in production. This tool facilitates decision-making for MLOps teams.
Anomaly Detection
Process of automatically identifying unusual behaviors or aberrant predictions in model outputs. This detection allows isolation of cases requiring in-depth investigation.
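A minimal sketch using scikit-learn's IsolationForest to flag unusual prediction records for investigation; the record layout, the injected outliers, and the 1% contamination rate are illustrative assumptions.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Each row could hold, e.g., the predicted probability plus a few input features
rng = np.random.default_rng(2)
records = rng.normal(size=(1_000, 3))
records[:5] += 6.0                          # inject a few aberrant records

detector = IsolationForest(contamination=0.01, random_state=0).fit(records)
labels = detector.predict(records)          # -1 = anomaly, 1 = normal
print("Records to investigate:", np.where(labels == -1)[0])
```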
Performance Regression
Measurable decrease in a model's performance compared to its initial reference metrics. Regression can be gradual or sudden and requires root cause analysis.
Model Governance
Set of policies, procedures, and controls ensuring compliance, traceability, and auditability of models throughout their lifecycle. Governance ensures the reliability and ethics of AI systems.
Latency Monitoring
Monitoring of model prediction response time in production, critical for real-time applications. Continuous monitoring ensures SLA compliance and a responsive user experience.
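A small sketch measuring per-prediction wall-clock latency and comparing the 95th percentile against an assumed SLA budget; the predict placeholder and the 200 ms budget are illustrative assumptions.

```python
import time
import numpy as np

def predict(x):
    """Placeholder for the real model call."""
    time.sleep(0.01)
    return 0

# Record per-prediction wall-clock latency over a window of requests
latencies_ms = []
for request in range(100):
    start = time.perf_counter()
    predict(request)
    latencies_ms.append((time.perf_counter() - start) * 1_000)

# Compare the 95th percentile against the assumed 200 ms SLA budget
p95 = float(np.percentile(latencies_ms, 95))
print(f"p95 = {p95:.1f} ms ->", "SLA breached" if p95 > 200.0 else "within SLA")
```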
Throughput Tracking
Measurement of the volume of predictions processed per unit of time, essential for evaluating the system's load capacity. Throughput tracking helps to size infrastructure resources.
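A sketch of a simple sliding-window counter for predictions per second; production systems usually delegate this to a metrics stack such as Prometheus, so the class below is illustrative only.

```python
import time
from collections import deque

class ThroughputTracker:
    """Count predictions observed within the most recent time window."""
    def __init__(self, window_seconds: float = 60.0):
        self.window = window_seconds
        self.timestamps = deque()

    def record(self) -> None:
        self.timestamps.append(time.monotonic())

    def per_second(self) -> float:
        cutoff = time.monotonic() - self.window
        while self.timestamps and self.timestamps[0] < cutoff:
            self.timestamps.popleft()       # drop events outside the window
        return len(self.timestamps) / self.window

tracker = ThroughputTracker(window_seconds=10.0)
for _ in range(50):
    tracker.record()
print(f"{tracker.per_second():.1f} predictions/s over the last 10 s")
```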