
AI Glossary

The Complete Artificial Intelligence Dictionary

162 Categories · 2,032 Subcategories · 23,060 Terms

Data Drift

A change in the statistical distribution of input data in production compared to the training data, which can degrade the model's performance. Detecting it early is crucial to keeping predictions relevant.
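As a minimal illustration (a pure-Python sketch with invented data, not a production detector), the two-sample Kolmogorov-Smirnov statistic quantifies the gap between a training and a production distribution:

```python
def ks_statistic(reference, current):
    """Two-sample Kolmogorov-Smirnov statistic: largest gap between ECDFs."""
    def ecdf(sample, x):
        # Fraction of the sample at or below x.
        return sum(1 for v in sample if v <= x) / len(sample)
    points = sorted(set(reference) | set(current))
    return max(abs(ecdf(reference, x) - ecdf(current, x)) for x in points)

train = [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8]    # reference window
prod_stable = [0.15, 0.25, 0.35, 0.55, 0.65, 0.75]  # similar distribution
prod_shifted = [1.1, 1.2, 1.3, 1.4, 1.5, 1.6]       # drifted distribution
```

A statistic near 0 indicates similar distributions; near 1, nearly disjoint ones. In practice a library implementation with a p-value (e.g., `scipy.stats.ks_2samp`) would be used instead.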

Concept Drift

An evolution of the relationship between the input variables and the target variable, where the meaning or context of the problem changes. This type of drift is more insidious because the input distributions can remain stable while the model's predictions become increasingly wrong.

Performance Monitoring

Continuous tracking of model evaluation metrics (accuracy, recall, F1-score, etc.) on real data to identify any degradation. It allows for triggering alerts and retraining actions.
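These metrics can be recomputed from scratch on labeled production data; the sketch below (invented labels, binary classification only) shows the underlying arithmetic:

```python
def f1_report(y_true, y_pred, positive=1):
    """Accuracy, precision, recall, and F1 for one positive class."""
    pairs = list(zip(y_true, y_pred))
    tp = sum(1 for t, p in pairs if t == p == positive)
    fp = sum(1 for t, p in pairs if t != positive and p == positive)
    fn = sum(1 for t, p in pairs if t == positive and p != positive)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    accuracy = sum(1 for t, p in pairs if t == p) / len(pairs)
    return {"accuracy": accuracy, "precision": precision,
            "recall": recall, "f1": f1}

# Hypothetical ground-truth labels collected from production feedback:
report = f1_report([1, 1, 1, 0, 0, 1], [1, 1, 0, 0, 1, 1])
```

Real pipelines typically rely on a metrics library (e.g., `sklearn.metrics`) and track these values over time rather than as one snapshot.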

ML Dashboard

Centralized visualization interface aggregating key monitoring metrics, drift alerts, and the health status of models in production. It facilitates decision-making for MLOps teams.

Automated Alerting

Notification system triggered when performance metrics or drift indicators cross predefined thresholds. It ensures a rapid response to anomalies in model behavior.
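A threshold check of this kind can be sketched as follows (the metric names and limits are invented for the example; a real system would route alerts to a pager or chat channel):

```python
# Illustrative limits: quality metrics alert when too LOW,
# latency-style metrics (suffix "_ms") alert when too HIGH.
THRESHOLDS = {"accuracy": 0.90, "latency_p95_ms": 250.0}

def check_alerts(metrics, thresholds=THRESHOLDS):
    """Return the names of metrics that breached their threshold."""
    alerts = []
    for name, limit in thresholds.items():
        value = metrics.get(name)
        if value is None:
            continue  # metric not reported in this batch
        breached = value > limit if name.endswith("_ms") else value < limit
        if breached:
            alerts.append(name)
    return alerts
```
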

Stability Metric

Indicator quantifying the similarity between the distribution of current data and the reference (training) data. Metrics like Kullback-Leibler Divergence or Population Stability Index are commonly used.
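The Population Stability Index mentioned above can be computed by binning both samples and comparing bin proportions; this is a simplified sketch (fixed equal-width bins, invented data):

```python
import math

def psi(expected, actual, bins=10):
    """Population Stability Index between a reference and a current sample."""
    lo, hi = min(expected), max(expected)
    width = (hi - lo) / bins
    eps = 1e-4  # floor on proportions, avoids log(0) for empty bins

    def proportions(sample):
        counts = [0] * bins
        for v in sample:
            i = int((v - lo) / width) if width else 0
            counts[max(0, min(i, bins - 1))] += 1  # clamp out-of-range values
        return [max(c / len(sample), eps) for c in counts]

    e, a = proportions(expected), proportions(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

reference = [float(v) for v in range(100)]
shifted = [v + 50.0 for v in reference]
```

A common rule of thumb treats PSI below 0.1 as stable and above 0.25 as a significant shift, though thresholds should be tuned per feature.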

Feature Importance Analysis

Monitoring the evolution of the impact of each input variable on the model's predictions. A sudden change can indicate data drift or a change in the model's behavior.
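One simple way to operationalize this (a sketch; the importance scores below are invented and would come from the model or an explainer upstream) is to compare each feature's share of total importance across two snapshots:

```python
def importance_shift(baseline, current, tol=0.10):
    """Flag features whose share of total importance moved by more than `tol`."""
    def normalize(importances):
        total = sum(importances.values())
        return {name: value / total for name, value in importances.items()}
    b, c = normalize(baseline), normalize(current)
    return {name for name in b if abs(b[name] - c.get(name, 0.0)) > tol}

# Hypothetical importance snapshots for the same model, a month apart:
baseline = {"age": 0.5, "income": 0.3, "region": 0.2}
current = {"age": 0.2, "income": 0.3, "region": 0.5}  # age/region swapped weight
```
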

Explainability in Production

Monitoring the explanations of predictions (e.g., SHAP, LIME) to ensure the model still uses the same logic and features. This is essential for trust and auditability of critical systems.
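Assuming per-feature attribution profiles (e.g., mean |SHAP| values) are already computed upstream by an explainability library, their drift can be tracked with a simple similarity measure; the vectors below are invented for illustration:

```python
import math

def cosine(u, v):
    """Cosine similarity between two attribution profiles."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Hypothetical mean |attribution| per feature, reference vs. two later windows:
reference = [0.40, 0.35, 0.15, 0.10]
today = [0.38, 0.36, 0.16, 0.10]      # model uses the same logic
anomalous = [0.05, 0.10, 0.45, 0.40]  # model now leans on different features
```

A similarity close to 1 suggests the model's reasoning is unchanged; a sharp drop warrants investigation even if accuracy has not yet moved.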

Prediction Anomaly Detection

Identification of outlier predictions or abnormally low confidence, which can signal model degradation or the presence of data outside its known distribution. It's a safety layer for automation.
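The low-confidence half of this check reduces to flagging predictions whose top-class probability falls below a cutoff (a sketch; the 0.6 threshold and probabilities are invented):

```python
def flag_low_confidence(probability_batches, threshold=0.6):
    """True for each prediction whose top-class probability is below `threshold`.

    Flagged predictions would typically be routed to human review
    instead of being acted on automatically.
    """
    return [max(probs) < threshold for probs in probability_batches]

flags = flag_low_confidence([[0.9, 0.1], [0.5, 0.5], [0.55, 0.45]])
```
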

Prediction Latency

Metric measuring the time elapsed between receiving a request and the model returning the prediction. Its monitoring is vital for real-time applications where high latency impacts the user experience.
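Latency is usually reported as a percentile over many requests rather than a mean; a minimal sketch of both the measurement and a nearest-rank percentile:

```python
import math
import time

def timed(fn, *args):
    """Return (result, elapsed milliseconds) for one call."""
    start = time.perf_counter()
    result = fn(*args)
    return result, (time.perf_counter() - start) * 1000.0

def percentile(samples, q):
    """Nearest-rank percentile, e.g. q=95 for p95 latency."""
    ordered = sorted(samples)
    idx = max(0, math.ceil(q / 100 * len(ordered)) - 1)
    return ordered[idx]
```
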

Production Bias

Continuous monitoring of the model's fairness and bias indicators on real-world data to ensure it does not discriminate against certain populations. Monitoring is necessary because bias can emerge or be amplified with drift.
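One common fairness indicator is the demographic parity gap: the spread in positive-prediction rates across groups. A minimal sketch with invented data:

```python
def demographic_parity_gap(predictions, groups):
    """Largest difference in positive-prediction rate between any two groups."""
    tallies = {}  # group -> (positive count, total count)
    for pred, group in zip(predictions, groups):
        pos, total = tallies.get(group, (0, 0))
        tallies[group] = (pos + (pred == 1), total + 1)
    rates = [pos / total for pos, total in tallies.values()]
    return max(rates) - min(rates)

# Group "a" gets positive predictions at 50%, group "b" at 25%:
gap = demographic_parity_gap(
    [1, 1, 0, 0, 1, 0, 0, 0],
    ["a", "a", "a", "a", "b", "b", "b", "b"],
)
```

A gap of 0 means equal treatment under this metric; which fairness metric is appropriate depends on the application.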

Structured Logging

Recording of inputs, predictions, metadata, and performance metrics in a structured format (e.g., JSON). This facilitates post-mortem analysis, debugging, and feeding monitoring pipelines.
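A structured log record of this kind might look as follows (a sketch; the field names are illustrative, and a real system would write the line to a log sink rather than return it):

```python
import json
import time
import uuid

def log_prediction(features, prediction, model_version, confidence):
    """Serialize one prediction event as a single JSON log line."""
    record = {
        "timestamp": time.time(),
        "request_id": str(uuid.uuid4()),  # correlates logs across services
        "model_version": model_version,
        "features": features,
        "prediction": prediction,
        "confidence": confidence,
    }
    return json.dumps(record, sort_keys=True)

line = log_prediction({"age": 42}, "approve", "v1.3.0", 0.91)
```
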

Model Versioning

Tracking and managing different versions of a trained model, often via a Model Registry. Monitoring must be able to distinguish the performance of each deployed version.
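The per-version bookkeeping can be sketched as a minimal in-memory registry (real registries such as those in MLflow or similar tools persist artifacts and metadata durably; this only illustrates the keying by version):

```python
class ModelRegistry:
    """Minimal in-memory sketch: models and metric snapshots keyed by version."""

    def __init__(self):
        self._models = {}   # version -> model object
        self._metrics = {}  # version -> list of metric snapshots

    def register(self, version, model):
        self._models[version] = model
        self._metrics[version] = []

    def record(self, version, **metrics):
        """Append one monitoring snapshot for a deployed version."""
        self._metrics[version].append(metrics)

    def latest_metrics(self, version):
        snapshots = self._metrics[version]
        return snapshots[-1] if snapshots else None
```
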

Feedback Loop

Process of collecting feedback on the model's predictions (corrections, annotations) to feed future training cycles. Monitoring the quality and volume of this feedback is a system health indicator.
