AI Glossary
A Complete Dictionary of Artificial Intelligence
Data Drift
Change in the statistical distribution of input data in production compared to the training data, which can degrade model performance. Detecting it early is crucial to keep predictions relevant.
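As a minimal illustration (the function name is hypothetical), a two-sample Kolmogorov-Smirnov statistic can quantify the gap between the training distribution and the production distribution of a single feature:

```python
import bisect

def ks_statistic(reference, current):
    """Two-sample Kolmogorov-Smirnov statistic: the maximum gap between
    the empirical CDFs of the reference (training) and current samples.
    0 means identical distributions; values near 1 indicate strong drift."""
    ref = sorted(reference)
    cur = sorted(current)

    def ecdf(sample, x):
        # Fraction of the sample that is <= x.
        return bisect.bisect_right(sample, x) / len(sample)

    return max(abs(ecdf(ref, x) - ecdf(cur, x))
               for x in sorted(set(ref + cur)))
```

In practice the statistic would be computed per feature on a schedule and compared against a threshold chosen for the use case.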
Concept Drift
Evolution of the relationship between input variables and the target variable, where the meaning or context of the problem changes. This type of drift is more insidious because the input distributions can remain stable.
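Because input distributions can stay stable, concept drift is usually caught through the model's error rate once ground truth arrives. A simple window-based heuristic (class name and thresholds are illustrative, not a standard algorithm):

```python
from collections import deque

class ErrorRateDriftDetector:
    """Flags possible concept drift when the error rate over a sliding
    window of recent labeled predictions exceeds the baseline error rate
    (measured at deployment time) by a fixed margin."""

    def __init__(self, baseline_error, window=100, margin=0.1):
        self.baseline = baseline_error
        self.margin = margin
        self.window = deque(maxlen=window)

    def update(self, was_correct):
        """Record one labeled outcome; return True if drift is suspected."""
        self.window.append(0 if was_correct else 1)
        error_rate = sum(self.window) / len(self.window)
        return error_rate > self.baseline + self.margin
```

Dedicated detectors such as DDM or ADWIN refine this idea with statistical guarantees.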
Performance Monitoring
Continuous tracking of model evaluation metrics (accuracy, recall, F1-score, etc.) on real data to identify any degradation. It enables timely alerts and retraining actions.
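The core metrics can be computed from labeled production samples; a self-contained sketch for binary classification (the function name is illustrative):

```python
def classification_metrics(y_true, y_pred):
    """Accuracy, precision, recall, and F1 for binary labels (0/1)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    accuracy = sum(1 for t, p in zip(y_true, y_pred) if t == p) / len(y_true)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return {"accuracy": accuracy, "precision": precision,
            "recall": recall, "f1": f1}
```

In production these values would be computed per time window and per model version, then fed to the dashboard and alerting layers.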
ML Dashboard
Centralized visualization interface aggregating key monitoring metrics, drift alerts, and the health status of models in production. It facilitates decision-making for MLOps teams.
Automated Alerting
Notification system triggered by predefined thresholds on performance metrics or drift indicators. It ensures rapid responsiveness to model behavioral anomalies.
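A minimal threshold-check sketch (names and the threshold format are assumptions, not a specific tool's API):

```python
def check_alerts(metrics, thresholds):
    """Return alert messages for metrics that cross their configured limit.

    thresholds maps metric name -> (direction, limit), where direction is
    "min" (alert when the value drops below the limit, e.g. accuracy) or
    "max" (alert when it rises above, e.g. a drift score)."""
    alerts = []
    for name, (direction, limit) in thresholds.items():
        value = metrics.get(name)
        if value is None:
            continue  # Metric not reported this cycle.
        if direction == "min" and value < limit:
            alerts.append(f"{name}={value} below minimum {limit}")
        elif direction == "max" and value > limit:
            alerts.append(f"{name}={value} above maximum {limit}")
    return alerts
```

The returned messages would typically be routed to a paging or chat integration.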
Stability Metric
Indicator quantifying the similarity between the distribution of current data and the reference (training) data. Metrics like Kullback-Leibler Divergence or Population Stability Index are commonly used.
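The Population Stability Index is simple enough to sketch directly: it compares the per-bin proportions of the reference and current distributions.

```python
import math

def population_stability_index(expected_pct, actual_pct, eps=1e-4):
    """PSI = sum((actual - expected) * ln(actual / expected)) over bins.

    expected_pct and actual_pct are per-bin proportions, each summing to 1;
    eps guards against empty bins (log of zero). A common rule of thumb:
    PSI < 0.1 is stable, PSI > 0.25 indicates a significant shift."""
    psi = 0.0
    for e, a in zip(expected_pct, actual_pct):
        e = max(e, eps)
        a = max(a, eps)
        psi += (a - e) * math.log(a / e)
    return psi
```

KL divergence is computed similarly but is asymmetric; PSI is symmetric in the two distributions, which makes it convenient for monitoring.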
Feature Importance Analysis
Monitoring the evolution of the impact of each input variable on the model's predictions. A sudden change can indicate data drift or a change in the model's behavior.
Explainability in Production
Monitoring the explanations of predictions (e.g., SHAP, LIME) to ensure the model still uses the same logic and features. This is essential for trust and auditability of critical systems.
Prediction Anomaly Detection
Identification of outlier predictions or abnormally low confidence, which can signal model degradation or the presence of data outside its known distribution. It's a safety layer for automation.
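One common safety check is flagging low-confidence outputs for review; a minimal sketch (the prediction format and function name are assumptions):

```python
def flag_anomalous_predictions(predictions, min_confidence=0.6):
    """Return the indices of predictions whose confidence falls below the
    threshold: candidates for human review or a rule-based fallback path.

    Each prediction is a dict with at least a "confidence" key in [0, 1]."""
    return [i for i, p in enumerate(predictions)
            if p["confidence"] < min_confidence]
```

A fuller implementation would also flag out-of-distribution inputs, for example via density estimation or an isolation forest on the feature vectors.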
Prediction Latency
Metric measuring the time elapsed between receiving a request and the model returning the prediction. Its monitoring is vital for real-time applications where high latency impacts the user experience.
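Latency is usually tracked through percentiles (p50, p95, p99) rather than the mean, since tail latency is what users feel. A nearest-rank percentile sketch:

```python
import math

def latency_percentile(samples_ms, q):
    """Nearest-rank percentile over observed latencies in milliseconds,
    e.g. q=0.95 for the p95 latency."""
    ordered = sorted(samples_ms)
    k = max(0, math.ceil(q * len(ordered)) - 1)
    return ordered[k]
```

At scale, streaming approximations such as t-digest replace sorting the full sample.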
Production Bias
Continuous monitoring of the model's fairness and bias indicators on real-world data to ensure it does not discriminate against certain populations. Monitoring is necessary because bias can emerge or be amplified with drift.
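One widely used fairness indicator is demographic parity: the positive-prediction rate should be similar across groups. A minimal sketch (the function name is illustrative):

```python
def demographic_parity_difference(predictions, groups):
    """Difference between the highest and lowest positive-prediction rate
    across groups. 0 means equal rates (demographic parity holds);
    larger values indicate a disparity worth investigating.

    predictions are binary (0/1); groups are the sensitive-attribute
    values aligned with the predictions."""
    rates = {}
    for pred, group in zip(predictions, groups):
        totals = rates.setdefault(group, [0, 0])  # [positives, count]
        totals[0] += pred
        totals[1] += 1
    group_rates = [pos / n for pos, n in rates.values()]
    return max(group_rates) - min(group_rates)
```

Other criteria (equalized odds, calibration by group) require ground-truth labels and are typically tracked alongside this one.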
Structured Logging
Recording of inputs, predictions, metadata, and performance metrics in a structured format (e.g., JSON). This facilitates post-mortem analysis, debugging, and feeding monitoring pipelines.
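A minimal sketch of emitting one prediction event as a JSON line (field names are illustrative conventions, not a standard schema):

```python
import json
import time

def log_prediction(model_version, features, prediction, latency_ms):
    """Serialize one prediction event as a JSON line, ready to be written
    to stdout or a log file and ingested by a monitoring pipeline."""
    entry = {
        "timestamp": time.time(),
        "model_version": model_version,
        "features": features,
        "prediction": prediction,
        "latency_ms": latency_ms,
    }
    return json.dumps(entry)
```

Because every record carries the model version and latency, the same log stream can feed version-level performance dashboards and drift analysis.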
Model Versioning
Tracking and managing different versions of a trained model, often via a Model Registry. Monitoring must be able to distinguish the performance of each deployed version.
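The registry idea can be sketched as a tiny in-memory structure (real registries such as MLflow add persistence, stages, and lineage; this class is purely illustrative):

```python
class ModelRegistry:
    """Minimal in-memory model registry: stores versioned models with
    their evaluation metrics and tracks which version serves production."""

    def __init__(self):
        self._models = {}
        self.production_version = None

    def register(self, version, model, metrics=None):
        self._models[version] = {"model": model, "metrics": metrics or {}}

    def promote(self, version):
        """Mark a registered version as the one serving production."""
        if version not in self._models:
            raise KeyError(f"unknown version: {version}")
        self.production_version = version

    def get_production_model(self):
        return self._models[self.production_version]["model"]
```

Monitoring then tags every logged prediction with `production_version`, so metrics can be split per deployed version.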
Feedback Loop
Process of collecting feedback on the model's predictions (corrections, annotations) to feed future training cycles. Monitoring the quality and volume of this feedback is a system health indicator.