
AI Glossary

The complete dictionary of Artificial Intelligence

162 Categories · 2,032 Subcategories · 23,060 Terms
📖 Terms

Precision

Metric evaluating the proportion of correct positive predictions among all positive predictions made by the model. It measures the quality of positive predictions: every false positive lowers it.
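A minimal pure-Python sketch of the formula TP / (TP + FP); the function name and toy label vectors are illustrative:

```python
def precision(y_true, y_pred):
    """Precision = TP / (TP + FP): of all predicted positives, how many are correct."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    return tp / (tp + fp) if (tp + fp) else 0.0

# 3 predicted positives, 2 of them correct -> 2/3
print(precision([1, 0, 1, 1, 0], [1, 1, 1, 0, 0]))
```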

Recall

Indicator measuring the model's ability to correctly identify all actual positive instances in the dataset. Recall quantifies the completeness of positive predictions: every false negative lowers it.
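The corresponding sketch for TP / (TP + FN), on the same illustrative toy data:

```python
def recall(y_true, y_pred):
    """Recall = TP / (TP + FN): of all actual positives, how many were found."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    return tp / (tp + fn) if (tp + fn) else 0.0

# 3 actual positives, 2 of them found -> 2/3
print(recall([1, 0, 1, 1, 0], [1, 1, 1, 0, 0]))
```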

F1-Score

Harmonic mean of precision and recall, providing a balance between these two often opposing metrics. The F1-Score is particularly useful for imbalanced classes where precision alone or recall alone would be misleading.
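A small sketch of the harmonic mean, showing how it penalizes an imbalance between the two metrics (values are illustrative):

```python
def f1_score(precision, recall):
    """Harmonic mean of precision and recall; 0 when both are 0."""
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# High precision but low recall is penalized:
print(f1_score(0.9, 0.5))  # ~0.643, well below the arithmetic mean of 0.7
```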

F-beta Score

Generalization of the F1-Score allowing different weighting of precision and recall based on the beta parameter. This metric adapts to specific domain needs by favoring either precision (beta<1) or recall (beta>1).
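A sketch of the generalized formula (1 + beta²) · P · R / (beta² · P + R); the example values are illustrative:

```python
def fbeta_score(precision, recall, beta):
    """F-beta: beta > 1 weights recall higher, beta < 1 weights precision higher."""
    b2 = beta ** 2
    denom = b2 * precision + recall
    if denom == 0:
        return 0.0
    return (1 + b2) * precision * recall / denom

p, r = 0.9, 0.5
print(fbeta_score(p, r, 2.0))  # recall-leaning, pulled toward the weaker recall
print(fbeta_score(p, r, 0.5))  # precision-leaning, pulled toward the stronger precision
print(fbeta_score(p, r, 1.0))  # equals the F1-Score
```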

Confusion Matrix

Contingency table summarizing the performance of a classification model by comparing predictions to actual values. It breaks down results into true positives, false positives, true negatives, and false negatives for detailed analysis.
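A minimal tally of the four cells for binary labels (function name and data are illustrative):

```python
def confusion_counts(y_true, y_pred):
    """Count the four cells of a binary confusion matrix."""
    tp = fp = tn = fn = 0
    for t, p in zip(y_true, y_pred):
        if t == 1 and p == 1:
            tp += 1        # true positive
        elif t == 0 and p == 1:
            fp += 1        # false positive (type I error)
        elif t == 0 and p == 0:
            tn += 1        # true negative
        else:
            fn += 1        # false negative (type II error)
    return {"TP": tp, "FP": fp, "TN": tn, "FN": fn}

print(confusion_counts([1, 0, 1, 1, 0, 0], [1, 1, 1, 0, 0, 0]))
# {'TP': 2, 'FP': 1, 'TN': 2, 'FN': 1}
```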

True Positive

Instance correctly classified as positive by the model, representing a successful prediction for the target class. True positives form the numerator of both precision and recall.

False Positive

Negative instance incorrectly predicted as positive by the model, corresponding to a type I error. False positives negatively affect precision but do not directly influence the recall of the positive class.

True Negative

Instance correctly identified as negative by the model, demonstrating its ability to reject irrelevant cases. True negatives are essential for evaluating the model's specificity on majority classes.

False Negative

Positive instance missed by the model and incorrectly classified as negative, representing a type II error. False negatives directly reduce recall and can have critical consequences depending on the application domain.

Average Precision

Metric summarizing the precision-recall curve by calculating the weighted average of precisions at each recall threshold. It is particularly suitable for evaluating object detection and information retrieval systems.
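One common (non-interpolated) formulation averages the precision at each rank where a relevant item appears; this sketch assumes a binary relevance vector and confidence scores, both illustrative:

```python
def average_precision(y_true, scores):
    """Mean of precision@k over the ranks k at which a relevant item appears."""
    ranked = [t for _, t in sorted(zip(scores, y_true), key=lambda pair: -pair[0])]
    hits, acc = 0, 0.0
    for k, rel in enumerate(ranked, start=1):
        if rel == 1:
            hits += 1
            acc += hits / k  # precision at this recall step
    n_pos = sum(y_true)
    return acc / n_pos if n_pos else 0.0

# Relevant items ranked 1st and 3rd: AP = (1/1 + 2/3) / 2
print(average_precision([1, 0, 1, 0], [0.9, 0.8, 0.7, 0.1]))
```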

AUC-PR Score

Area under the precision-recall curve, measuring the overall model performance regardless of the classification threshold. Unlike AUC-ROC, AUC-PR is more sensitive to performance on minority classes.
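A simple trapezoidal sketch of the area, given precomputed (recall, precision) points; the sample curve is illustrative, and real libraries often use step-wise rather than trapezoidal integration here:

```python
def auc_pr(recalls, precisions):
    """Trapezoidal area under a PR curve given as matching recall/precision lists."""
    pts = sorted(zip(recalls, precisions))  # order points by increasing recall
    area = 0.0
    for (r0, p0), (r1, p1) in zip(pts, pts[1:]):
        area += (r1 - r0) * (p0 + p1) / 2
    return area

print(auc_pr([0.0, 0.5, 1.0], [1.0, 0.8, 0.6]))  # ~0.8
```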

Sensitivity

Synonym for recall, measuring the proportion of actual positive cases correctly identified by the model. Sensitivity is crucial in medical applications where detecting positive cases is a priority.

Specificity

Model's ability to correctly identify negative instances, calculated as the ratio of true negatives to the total of actual negatives. Specificity complements recall to evaluate performance across all classes.
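The mirror image of recall on the negative class, TN / (TN + FP), as a short sketch with illustrative labels:

```python
def specificity(y_true, y_pred):
    """Specificity = TN / (TN + FP): share of actual negatives correctly rejected."""
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    return tn / (tn + fp) if (tn + fp) else 0.0

# 3 actual negatives, 2 correctly rejected -> 2/3
print(specificity([0, 0, 0, 1], [0, 1, 0, 1]))
```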

Balanced Accuracy

Arithmetic mean of sensitivity and specificity, correcting biases in imbalanced datasets. This metric treats performance on majority and minority classes equally.
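A sketch that makes the correction visible on an illustrative 9:1 imbalanced set, where a degenerate classifier scores high plain accuracy but only 0.5 balanced accuracy:

```python
def balanced_accuracy(y_true, y_pred):
    """Arithmetic mean of per-class recall (i.e. sensitivity and specificity)."""
    def class_recall(cls):
        total = sum(1 for t in y_true if t == cls)
        correct = sum(1 for t, p in zip(y_true, y_pred) if t == cls and p == cls)
        return correct / total if total else 0.0
    return (class_recall(1) + class_recall(0)) / 2

# A classifier that always predicts the majority class 0:
y_true = [1] + [0] * 9
y_pred = [0] * 10
print(balanced_accuracy(y_true, y_pred))  # 0.5, although plain accuracy is 0.9
```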

Precision-Recall Curve

Two-dimensional graph plotting precision as a function of recall for different classification thresholds. This visualization allows analysis of the trade-off between precision and recall and selection of the optimal threshold.
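A sketch that computes the curve's points by sweeping the distinct scores as thresholds (the convention of setting precision to 1.0 when nothing is predicted positive is one common choice, not the only one):

```python
def pr_curve(y_true, scores):
    """(precision, recall) at each distinct score threshold, highest threshold first."""
    points = []
    for thr in sorted(set(scores), reverse=True):
        pred = [1 if s >= thr else 0 for s in scores]
        tp = sum(1 for t, p in zip(y_true, pred) if t == 1 and p == 1)
        fp = sum(1 for t, p in zip(y_true, pred) if t == 0 and p == 1)
        fn = sum(1 for t, p in zip(y_true, pred) if t == 1 and p == 0)
        prec = tp / (tp + fp) if (tp + fp) else 1.0
        rec = tp / (tp + fn) if (tp + fn) else 0.0
        points.append((prec, rec))
    return points

for prec, rec in pr_curve([1, 0, 1, 0], [0.9, 0.8, 0.7, 0.1]):
    print(f"precision={prec:.3f}  recall={rec:.3f}")
```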

Jaccard Score

Metric measuring the similarity between two sets as the ratio of their intersection to their union. In classification, the Jaccard score evaluates the agreement between the sets of predicted positive instances and actual positive instances.
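A sketch of intersection over union on the sets of positive indices (labels are illustrative):

```python
def jaccard_score(y_true, y_pred):
    """|A ∩ B| / |A ∪ B| over the sets of predicted and actual positive indices."""
    pos_true = {i for i, t in enumerate(y_true) if t == 1}
    pos_pred = {i for i, p in enumerate(y_pred) if p == 1}
    union = pos_true | pos_pred
    return len(pos_true & pos_pred) / len(union) if union else 1.0

# Intersection {0, 2}, union {0, 1, 2, 3} -> 0.5
print(jaccard_score([1, 0, 1, 1], [1, 1, 1, 0]))
```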

Top-K Precision

Metric evaluating the proportion of correct predictions among the K most confident predictions of the model. It is particularly relevant for recommendation systems where only the top suggestions are presented to the user.
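A sketch over illustrative relevance labels and confidence scores, keeping only the k highest-scored items:

```python
def top_k_precision(y_true, scores, k):
    """Share of relevant items among the k highest-scored predictions."""
    top = sorted(range(len(scores)), key=lambda i: -scores[i])[:k]
    return sum(y_true[i] for i in top) / k

# Only 1 of the 3 best-scored items is relevant -> 1/3
print(top_k_precision([1, 0, 1, 0, 1], [0.9, 0.8, 0.3, 0.7, 0.6], k=3))
```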

Top-K Recall

Extension of traditional recall limited to the K most probable predictions of the model. This metric measures the system's ability to cover relevant instances by considering only a subset of the total predictions.
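The companion sketch, dividing by the total number of relevant items instead of by k (same illustrative data):

```python
def top_k_recall(y_true, scores, k):
    """Share of all relevant items that appear among the k highest-scored predictions."""
    top = sorted(range(len(scores)), key=lambda i: -scores[i])[:k]
    n_pos = sum(y_true)
    return sum(y_true[i] for i in top) / n_pos if n_pos else 0.0

# 1 of the 3 relevant items ranks in the top 3 -> 1/3
print(top_k_recall([1, 0, 1, 0, 1], [0.9, 0.8, 0.3, 0.7, 0.6], k=3))
```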
