
AI Glossary

The complete AI glossary

162 categories · 2,032 subcategories · 23,060 terms

Data Drift

Change in the statistical distribution of input data in production compared to the training data, which can degrade the model's performance. Detecting it early is crucial to maintaining the relevance of predictions.
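As a minimal sketch of such a drift check, the following pure-Python two-sample Kolmogorov-Smirnov statistic compares a reference (training-time) sample of one feature with a production sample; the function name and sample values are illustrative.

```python
import bisect

def ks_statistic(reference, current):
    """Two-sample Kolmogorov-Smirnov statistic: the largest gap
    between the two empirical CDFs (0 = identical, 1 = disjoint)."""
    ref, cur = sorted(reference), sorted(current)

    def ecdf(sample, x):
        # fraction of the sample that is <= x
        return bisect.bisect_right(sample, x) / len(sample)

    return max(abs(ecdf(ref, x) - ecdf(cur, x))
               for x in set(ref + cur))

# training-time snapshot vs. a clearly shifted production sample
reference = [0.1, 0.2, 0.3, 0.4, 0.5]
drifted = [1.1, 1.2, 1.3, 1.4, 1.5]
```

In practice the statistic would be compared against a critical value or used in a hypothesis test (e.g. `scipy.stats.ks_2samp`) rather than eyeballed.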

Concept Drift

Evolution of the relationship between input variables and the target variable, where the meaning or context of the problem changes. This type of drift is more insidious because the input distributions can remain stable.

Performance Monitoring

Continuous tracking of model evaluation metrics (accuracy, recall, F1-score, etc.) on real data to identify any degradation. It allows alerts and retraining actions to be triggered.
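A sliding-window metric is one common way to track this continuously. The sketch below is a hypothetical `RollingAccuracy` helper; in real systems ground-truth labels often arrive with a delay, which this simplification ignores.

```python
from collections import deque

class RollingAccuracy:
    """Accuracy over the last `window` labeled predictions."""

    def __init__(self, window=1000):
        self.hits = deque(maxlen=window)  # True/False per prediction

    def update(self, prediction, ground_truth):
        self.hits.append(prediction == ground_truth)

    @property
    def accuracy(self):
        return sum(self.hits) / len(self.hits) if self.hits else None

monitor = RollingAccuracy(window=4)
for prediction, truth in [(1, 1), (0, 1), (1, 1), (1, 1)]:
    monitor.update(prediction, truth)
```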

ML Dashboard

Centralized visualization interface aggregating key monitoring metrics, drift alerts, and the health status of models in production. It facilitates decision-making for MLOps teams.

Automated Alerting

Notification system triggered by predefined thresholds on performance metrics or drift indicators. It ensures rapid responsiveness to model behavioral anomalies.
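A minimal sketch of such a threshold check follows; the threshold values are illustrative examples, not recommendations. Note the asymmetry: a quality metric alerts when it falls below its floor, a drift indicator when it rises above its ceiling.

```python
# illustrative thresholds -- real values are tuned per use case
THRESHOLDS = {"accuracy": 0.90, "psi": 0.2}

def check_alerts(metrics, thresholds=THRESHOLDS):
    """Return alert messages for metrics breaching their thresholds."""
    alerts = []
    if metrics.get("accuracy", 1.0) < thresholds["accuracy"]:
        alerts.append(f"accuracy below floor: {metrics['accuracy']:.3f}")
    if metrics.get("psi", 0.0) > thresholds["psi"]:
        alerts.append(f"PSI above ceiling: {metrics['psi']:.3f}")
    return alerts
```

In production the returned messages would be routed to a notification channel (pager, chat, ticketing) rather than just returned.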

Stability Metric

Indicator quantifying the similarity between the distribution of current data and the reference (training) data. Metrics like Kullback-Leibler Divergence or Population Stability Index are commonly used.
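As a sketch, the Population Stability Index can be computed in a few lines; a commonly cited rule of thumb treats PSI below 0.1 as stable and above 0.25 as a significant shift. The binning scheme and sample data here are illustrative.

```python
import math

def psi(reference, current, bins=10):
    """Population Stability Index between two numeric samples.
    Buckets are derived from the reference (training) distribution."""
    lo, hi = min(reference), max(reference)
    width = (hi - lo) / bins or 1.0  # guard against a constant reference

    def bucket_fractions(sample):
        counts = [0] * bins
        for x in sample:
            i = min(max(int((x - lo) / width), 0), bins - 1)
            counts[i] += 1
        # tiny floor avoids log(0) when a bucket is empty
        return [max(c / len(sample), 1e-6) for c in counts]

    ref_f, cur_f = bucket_fractions(reference), bucket_fractions(current)
    return sum((c - r) * math.log(c / r) for r, c in zip(ref_f, cur_f))

reference = [float(i) for i in range(100)]      # training snapshot
shifted = [float(i + 50) for i in range(100)]   # drifted production sample
```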

Feature Importance Analysis

Monitoring the evolution of the impact of each input variable on the model's predictions. A sudden change can indicate data drift or a change in the model's behavior.
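One simple way to operationalize this is to compare importance snapshots over time and alert on the largest per-feature change. The snapshot values below are hypothetical (e.g. normalized gain or permutation scores).

```python
def importance_shift(baseline, current):
    """Largest absolute change in per-feature importance between
    a baseline snapshot and the current one."""
    return max(abs(current[feature] - baseline[feature])
               for feature in baseline)

# hypothetical importance snapshots for the same model over time
baseline = {"age": 0.40, "income": 0.35, "tenure": 0.25}
current = {"age": 0.10, "income": 0.60, "tenure": 0.30}
```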

Explainability in Production

Monitoring the explanations of predictions (e.g., SHAP, LIME) to ensure the model still uses the same logic and features. This is essential for trust and auditability of critical systems.

Prediction Anomaly Detection

Identification of outlier predictions or abnormally low confidence, which can signal model degradation or the presence of data outside its known distribution. It's a safety layer for automation.
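A minimal version of this safety layer flags predictions below a confidence floor and routes them to review instead of automated action. The floor and the scored batch below are illustrative.

```python
def flag_low_confidence(predictions, confidence_floor=0.6):
    """Return indices of predictions whose confidence is below the floor,
    so they can be routed to human review instead of automated action."""
    return [i for i, (_, confidence) in enumerate(predictions)
            if confidence < confidence_floor]

# (label, confidence) pairs as a model might score a batch
scored = [("approve", 0.95), ("reject", 0.40),
          ("approve", 0.88), ("reject", 0.55)]
```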

Prediction Latency

Metric measuring the time elapsed between receiving a request and the model returning the prediction. Its monitoring is vital for real-time applications where high latency impacts the user experience.
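A sketch of how such timing might be collected, with a stand-in model function; for real-time SLAs, tail percentiles (p95/p99) matter more than the mean, since a few slow requests dominate user experience.

```python
import statistics
import time

def timed_predict(model_fn, request):
    """Call the model and return (result, latency in milliseconds)."""
    start = time.perf_counter()
    result = model_fn(request)
    return result, (time.perf_counter() - start) * 1000.0

# stand-in model: real code would call the deployed predictor
latencies = []
for request in range(100):
    _, ms = timed_predict(lambda x: x * 2, request)
    latencies.append(ms)

# 95th percentile of observed latencies
p95 = statistics.quantiles(latencies, n=20)[-1]
```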

Production Bias

Continuous monitoring of the model's fairness and bias indicators on real-world data to ensure it does not discriminate against certain populations. Monitoring is necessary because bias can emerge or be amplified with drift.
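One simple fairness indicator that can be monitored continuously is the demographic parity gap: the difference in positive-prediction rate between groups. The group labels and batch below are hypothetical.

```python
from collections import defaultdict

def demographic_parity_gap(records):
    """Gap in positive-prediction rate between groups (0.0 = parity).
    `records` is a list of (group, binary_prediction) pairs."""
    totals = defaultdict(int)
    positives = defaultdict(int)
    for group, prediction in records:
        totals[group] += 1
        positives[group] += int(prediction)
    rates = [positives[g] / totals[g] for g in totals]
    return max(rates) - min(rates)

# hypothetical batch: group A is approved 3/4 of the time, group B 1/4
audit_batch = [("A", 1), ("A", 1), ("A", 0), ("A", 1),
               ("B", 1), ("B", 0), ("B", 0), ("B", 0)]
```

Demographic parity is only one of several fairness definitions; which one applies depends on the use case and regulation.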

Structured Logging

Recording of inputs, predictions, metadata, and performance metrics in a structured format (e.g., JSON). This facilitates post-mortem analysis, debugging, and feeding monitoring pipelines.
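The sketch below emits one JSON line per prediction via a hypothetical `log_prediction` helper; the field names are an illustrative schema, not a standard.

```python
import json
import logging
import sys
import time
import uuid

logger = logging.getLogger("predictions")
handler = logging.StreamHandler(sys.stdout)
handler.setFormatter(logging.Formatter("%(message)s"))
logger.addHandler(handler)
logger.setLevel(logging.INFO)

def log_prediction(features, prediction, model_version):
    """Emit one JSON line per prediction for monitoring pipelines."""
    record = {
        "request_id": str(uuid.uuid4()),
        "timestamp": time.time(),
        "model_version": model_version,
        "features": features,
        "prediction": prediction,
    }
    logger.info(json.dumps(record))
    return record

entry = log_prediction({"age": 42, "income": 30000}, 0.87, "v1.3.0")
```

One-record-per-line JSON ("JSON Lines") is convenient because log shippers and monitoring pipelines can parse each line independently.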

Model Versioning

Tracking and managing different versions of a trained model, often via a Model Registry. Monitoring must be able to distinguish the performance of each deployed version.
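A minimal in-memory sketch of the registry idea; real registries (e.g. MLflow's Model Registry) add stages, lineage metadata, and artifact storage on top of this.

```python
class ModelRegistry:
    """Toy registry: models keyed by name, one entry per version."""

    def __init__(self):
        self._store = {}

    def register(self, name, version, model):
        self._store.setdefault(name, {})[version] = model

    def get(self, name, version):
        return self._store[name][version]

    def versions(self, name):
        return sorted(self._store.get(name, {}))

registry = ModelRegistry()
registry.register("churn-model", "v1", "artifact-a")
registry.register("churn-model", "v2", "artifact-b")
```

Tagging every logged prediction with its model version (as in the structured-logging entry above) is what lets monitoring distinguish the performance of each deployed version.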

Feedback Loop

Process of collecting feedback on the model's predictions (corrections, annotations) to feed future training cycles. Monitoring the quality and volume of this feedback is a system health indicator.
