AI Glossary

The complete dictionary of artificial intelligence

162 categories · 2,032 subcategories · 23,060 terms

Data Drift

Change in the statistical distribution of input data in production compared to training data, which can degrade the model's performance. Detecting it early is crucial to keeping predictions relevant.
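As an illustration, a minimal drift check might compare summary statistics of production data against the training reference. The feature values and the 3-sigma alert threshold below are hypothetical, chosen only to show the idea:

```python
import statistics

def drift_score(reference, current):
    """How many reference standard deviations the production mean has moved."""
    mu = statistics.mean(reference)
    sigma = statistics.stdev(reference)
    return abs(statistics.mean(current) - mu) / sigma

# Hypothetical feature values: a training-time sample and a shifted production sample.
train = [5.0 + 0.1 * (i % 20) for i in range(200)]
prod = [v + 2.0 for v in train]

if drift_score(train, prod) > 3.0:  # alert threshold chosen for illustration
    print("data drift suspected")
```

Real monitoring tools compare full distributions (histograms, statistical tests) rather than means alone, but the principle is the same.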


Concept Drift

Evolution of the relationship between input variables and the target variable, where the meaning or context of the problem changes. This type of drift is more insidious because the input distributions can remain stable.


Performance Monitoring

Continuous tracking of model evaluation metrics (accuracy, recall, F1-score, etc.) on real data to identify any degradation. It allows for triggering alerts and retraining actions.
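A sketch of such tracking, assuming ground-truth labels eventually arrive for each prediction. The window size and the toy label stream are illustrative:

```python
from collections import deque

class RollingAccuracy:
    """Accuracy over the most recent `window` labeled predictions."""

    def __init__(self, window=100):
        self.hits = deque(maxlen=window)

    def update(self, prediction, label):
        self.hits.append(prediction == label)
        return self.value()

    def value(self):
        return sum(self.hits) / len(self.hits) if self.hits else None

# Toy stream: the model is perfect until step 150, then the labels flip.
monitor = RollingAccuracy(window=50)
for i in range(200):
    pred = i % 2
    label = pred if i < 150 else 1 - pred
    acc = monitor.update(pred, label)
print(acc)  # 0.0 once the window contains only post-break steps
```

The same pattern applies to recall, F1-score, or any other metric computable per window.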


ML Dashboard

Centralized visualization interface aggregating key monitoring metrics, drift alerts, and the health status of models in production. It facilitates decision-making for MLOps teams.


Automated Alerting

Notification system triggered by predefined thresholds on performance metrics or drift indicators. It ensures rapid responsiveness to model behavioral anomalies.
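A minimal sketch of threshold-based alerting; the metric names and limits below are hypothetical examples, not a standard configuration:

```python
def check_alerts(metrics, thresholds):
    """Compare current metrics against per-metric alert thresholds.

    `thresholds` maps a metric name to (direction, limit): "min" fires when
    the value drops below the limit, "max" when it rises above it.
    """
    alerts = []
    for name, (direction, limit) in thresholds.items():
        value = metrics.get(name)
        if value is None:
            continue
        if (direction == "min" and value < limit) or (direction == "max" and value > limit):
            alerts.append(f"{name}={value} breached {direction} threshold {limit}")
    return alerts

# Hypothetical thresholds and a degraded snapshot of current metrics.
thresholds = {"accuracy": ("min", 0.85), "psi": ("max", 0.25), "p95_latency_ms": ("max", 200)}
current = {"accuracy": 0.78, "psi": 0.31, "p95_latency_ms": 120}
for alert in check_alerts(current, thresholds):
    print("ALERT:", alert)
```

In production the alerts would be routed to a pager or chat channel rather than printed.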


Stability Metric

Indicator quantifying the similarity between the distribution of current data and the reference (training) data. Metrics like Kullback-Leibler Divergence or Population Stability Index are commonly used.
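The Population Stability Index mentioned above can be sketched in a few lines. The bucket count and the 0.1 / 0.25 thresholds follow a commonly quoted rule of thumb, but should be tuned per use case:

```python
import math

def psi(expected, actual, bins=10):
    """Population Stability Index between reference and current samples."""
    lo, hi = min(expected), max(expected)
    edges = [lo + (hi - lo) * i / bins for i in range(bins + 1)]
    edges[0], edges[-1] = float("-inf"), float("inf")  # catch out-of-range values

    def fractions(sample):
        counts = [0] * bins
        for x in sample:
            for i in range(bins):
                if edges[i] <= x < edges[i + 1]:
                    counts[i] += 1
                    break
        # Floor at a tiny epsilon so empty buckets do not produce log(0).
        return [max(c / len(sample), 1e-6) for c in counts]

    e, a = fractions(expected), fractions(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

# Rule of thumb: < 0.1 stable, 0.1-0.25 moderate shift, > 0.25 major shift.
reference = [0.1 * i for i in range(100)]  # stand-in for training-time values
shifted = [x + 3.0 for x in reference]     # production values with a shifted mean
print(psi(reference, reference))           # near 0: stable
print(psi(reference, shifted))             # well above 0.25: drift
```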


Feature Importance Analysis

Monitoring the evolution of the impact of each input variable on the model's predictions. A sudden change can indicate data drift or a change in the model's behavior.
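One common way to quantify this impact is permutation importance: shuffle one feature column and measure the score drop. A minimal sketch with a toy model (the model and data are synthetic):

```python
import random

def permutation_importance(model, X, y, metric, seed=0):
    """Importance of feature j = score drop when column j is shuffled."""
    rng = random.Random(seed)
    baseline = metric([model(row) for row in X], y)
    importances = []
    for j in range(len(X[0])):
        column = [row[j] for row in X]
        rng.shuffle(column)
        X_perm = [row[:j] + [v] + row[j + 1:] for row, v in zip(X, column)]
        importances.append(baseline - metric([model(row) for row in X_perm], y))
    return importances

def accuracy(preds, labels):
    return sum(p == l for p, l in zip(preds, labels)) / len(labels)

# Toy model reads only feature 0, so feature 1's importance is exactly 0.
model = lambda row: int(row[0] > 0.5)
X = [[i / 10, (9 - i) / 10] for i in range(10)]
y = [model(row) for row in X]
print(permutation_importance(model, X, y, accuracy))
```

Tracking these importances over time, rather than computing them once, is what turns this into a monitoring signal.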


Explainability in Production

Monitoring the explanations of predictions (e.g., SHAP, LIME) to ensure the model still uses the same logic and features. This is essential for trust and auditability of critical systems.


Prediction Anomaly Detection

Identification of outlier predictions or abnormally low confidence, which can signal model degradation or the presence of data outside its known distribution. It's a safety layer for automation.
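The low-confidence case can be sketched simply: flag any prediction whose top class probability falls below a cutoff and route it to human review. The 0.6 threshold is an arbitrary example:

```python
def flag_low_confidence(probabilities, threshold=0.6):
    """Return indices of predictions whose top class probability is below threshold."""
    flagged = []
    for i, probs in enumerate(probabilities):
        if max(probs) < threshold:
            flagged.append(i)
    return flagged

# Synthetic class-probability vectors for a batch of three predictions.
batch = [
    [0.95, 0.05],  # confident
    [0.55, 0.45],  # ambiguous -> flagged for review
    [0.10, 0.90],  # confident
]
print(flag_low_confidence(batch))  # [1]
```

Outlier inputs are usually caught separately, e.g. with the drift and stability checks described above.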


Prediction Latency

Metric measuring the time elapsed between receiving a request and the model returning the prediction. Its monitoring is vital for real-time applications where high latency impacts the user experience.
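A minimal way to collect this metric, assuming the model is callable in-process (a real check would time the serving endpoint instead):

```python
import math
import time

def measure_latency(model, requests):
    """Per-request wall-clock latency in milliseconds."""
    latencies = []
    for request in requests:
        start = time.perf_counter()
        model(request)
        latencies.append((time.perf_counter() - start) * 1000.0)
    return latencies

def percentile(values, p):
    """Nearest-rank percentile, e.g. p=95 for the p95 latency."""
    ordered = sorted(values)
    rank = max(1, math.ceil(p / 100 * len(ordered)))
    return ordered[rank - 1]

# Stand-in model; replace with the real inference call being monitored.
model = lambda x: x * 2
lat = measure_latency(model, range(1000))
print(f"p50={percentile(lat, 50):.4f} ms  p95={percentile(lat, 95):.4f} ms")
```

Percentiles (p95, p99) matter more than the average here, since tail latency is what users actually notice.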


Production Bias

Bias that emerges or is amplified when a model runs on real-world data, potentially discriminating against certain populations. Fairness indicators must be monitored continuously, because bias can appear or worsen as the data drifts.
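One simple fairness indicator is the demographic parity gap: the difference in positive-prediction rates across groups. The predictions and group labels below are synthetic:

```python
def demographic_parity_gap(predictions, groups):
    """Max difference in positive-prediction rate across groups."""
    totals = {}
    for pred, group in zip(predictions, groups):
        count, positives = totals.get(group, (0, 0))
        totals[group] = (count + 1, positives + pred)
    rates = {g: pos / count for g, (count, pos) in totals.items()}
    return max(rates.values()) - min(rates.values()), rates

# Synthetic 0/1 predictions with a group label per row.
preds = [1, 0, 1, 1, 0, 0, 1, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
gap, rates = demographic_parity_gap(preds, groups)
print(gap, rates)  # 0.5 {'a': 0.75, 'b': 0.25}
```

A gap near zero suggests parity on this metric; large gaps warrant investigation, keeping in mind that demographic parity is only one of several competing fairness definitions.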


Structured Logging

Recording of inputs, predictions, metadata, and performance metrics in a structured format (e.g., JSON). This facilitates post-mortem analysis, debugging, and feeding monitoring pipelines.
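A sketch of one-JSON-line-per-prediction logging; the field names are illustrative, not a standard schema:

```python
import json
import time
import uuid

def log_prediction(features, prediction, model_version, confidence):
    """Emit one JSON line per prediction, ready for a log pipeline."""
    record = {
        "timestamp": time.time(),
        "request_id": str(uuid.uuid4()),
        "model_version": model_version,
        "features": features,
        "prediction": prediction,
        "confidence": confidence,
    }
    print(json.dumps(record, sort_keys=True))  # in production: stdout or a log shipper
    return record

log_prediction({"age": 42, "income": 51000}, "approved", "v1.3.0", 0.91)
```

Including the model version and a request ID in every record is what makes post-mortem analysis and per-version monitoring possible later.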


Model Versioning

Tracking and managing different versions of a trained model, often via a Model Registry. Monitoring must be able to distinguish the performance of each deployed version.
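A minimal in-memory stand-in for a Model Registry. Real registries (MLflow's, for example) persist artifacts and metadata; this sketch only shows version tracking and stage promotion, with hypothetical model names and metrics:

```python
class ModelRegistry:
    """Toy registry: versions move through staging -> production -> archived."""

    def __init__(self):
        self._versions = {}  # model name -> {version: {"stage": ..., "metrics": ...}}

    def register(self, name, version, metrics):
        self._versions.setdefault(name, {})[version] = {
            "stage": "staging",
            "metrics": metrics,
        }

    def promote(self, name, version):
        for entry in self._versions[name].values():
            if entry["stage"] == "production":
                entry["stage"] = "archived"  # keep a single live version
        self._versions[name][version]["stage"] = "production"

    def production_version(self, name):
        for version, entry in self._versions[name].items():
            if entry["stage"] == "production":
                return version
        return None

registry = ModelRegistry()
registry.register("churn", "1.0.0", {"auc": 0.81})
registry.register("churn", "1.1.0", {"auc": 0.86})
registry.promote("churn", "1.1.0")
print(registry.production_version("churn"))  # 1.1.0
```

Monitoring pipelines can then tag every metric with `production_version(...)`, so a regression is attributed to the right deployment.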


Feedback Loop

Process of collecting feedback on the model's predictions (corrections, annotations) to feed future training cycles. Monitoring the quality and volume of this feedback is a system health indicator.
