AI Glossary
The complete AI glossary
Harmonic Mean
Mathematical average that penalizes extreme values, used in the F1-Score to balance precision and recall.
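A minimal sketch of why the harmonic mean penalizes extremes, using Python's standard-library `statistics.harmonic_mean`; the precision/recall values are toy numbers for illustration.

```python
from statistics import harmonic_mean

# If either input is low, the harmonic mean is pulled down far more
# sharply than the arithmetic mean would be.
precision, recall = 0.9, 0.1

f1 = harmonic_mean([precision, recall])   # 2*P*R / (P + R)
arithmetic = (precision + recall) / 2

print(round(f1, 3))    # 0.18
print(arithmetic)      # 0.5
```

This is exactly why the F1-Score uses the harmonic mean: a model cannot compensate for poor recall with excellent precision, or vice versa.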
True Positives (TP)
Instances correctly classified as positive by the model; a fundamental element for calculating classification metrics.
False Positives (FP)
Negative instances incorrectly predicted as positive, directly impacting precision and potentially costly depending on the context.
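The two counts above can be sketched directly from paired true/predicted binary labels; the toy labels below are illustrative.

```python
# Count TP and FP from paired true/predicted binary labels (toy data).
y_true = [1, 0, 1, 1, 0, 0, 1, 0]
y_pred = [1, 1, 1, 0, 0, 1, 1, 0]

tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)

# Every false positive directly lowers precision = TP / (TP + FP).
precision = tp / (tp + fp)
print(tp, fp, precision)   # 3 2 0.6
```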
F-Beta Score
Generalization of the F1-Score with a beta parameter that adjusts the relative importance of precision versus recall according to business needs.
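The F-Beta formula can be sketched in a few lines; the helper name `fbeta` and the precision/recall values are illustrative assumptions.

```python
def fbeta(precision, recall, beta):
    """F-beta = (1 + b^2) * P * R / (b^2 * P + R)."""
    b2 = beta ** 2
    return (1 + b2) * precision * recall / (b2 * precision + recall)

p, r = 0.8, 0.4
print(round(fbeta(p, r, 1.0), 3))   # 0.533  (F1: balanced)
print(round(fbeta(p, r, 2.0), 3))   # 0.444  (F2: pulled toward recall)
print(round(fbeta(p, r, 0.5), 3))   # 0.667  (F0.5: pulled toward precision)
```

With beta=1 this reduces to the standard F1-Score; larger beta weights recall more heavily, smaller beta weights precision.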
F1-Score Macro
Arithmetic mean of F1-Scores calculated independently for each class, treating all classes with equal weight.
F1-Score Micro
F1-Score calculated globally by aggregating contributions from all classes, equivalent to accuracy for multi-class classification.
F1-Score Weighted
Weighted average of F1-Scores per class according to their support, suitable for datasets with significant class imbalance.
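The three averaging schemes above can be sketched in pure Python; the per-class helper and the toy labels are illustrative, not a library API.

```python
from collections import Counter

def f1_per_class(y_true, y_pred, cls):
    # One-vs-rest F1 for a single class: 2*TP / (2*TP + FP + FN).
    tp = sum(t == cls and p == cls for t, p in zip(y_true, y_pred))
    fp = sum(t != cls and p == cls for t, p in zip(y_true, y_pred))
    fn = sum(t == cls and p != cls for t, p in zip(y_true, y_pred))
    return 2 * tp / (2 * tp + fp + fn) if tp else 0.0

y_true = ["a", "a", "a", "a", "b", "b", "c"]
y_pred = ["a", "a", "a", "b", "b", "c", "c"]
classes = sorted(set(y_true))
support = Counter(y_true)
f1s = {c: f1_per_class(y_true, y_pred, c) for c in classes}

# Macro: every class counts equally, regardless of its size.
macro = sum(f1s.values()) / len(classes)
# Weighted: each class's F1 is weighted by its support.
weighted = sum(f1s[c] * support[c] for c in classes) / len(y_true)
# Micro aggregates TP/FP/FN globally; with exactly one label per
# instance it reduces to plain accuracy.
micro = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)
```

On this toy data the rare class "c" drags macro-F1 down more than weighted-F1, which is the usual symptom of class imbalance.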
AUC-PR
Area under the Precision-Recall curve, a more informative metric than AUC-ROC for highly imbalanced classes.
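One common step-wise approximation of this area is average precision, sketched below in pure Python under the assumption of no tied scores; the function name and toy data are illustrative.

```python
def average_precision(y_true, scores):
    """AP = sum over ranks of (R_k - R_{k-1}) * P_k, a step-wise
    approximation of the area under the precision-recall curve.
    Assumes binary labels and no tied scores."""
    pairs = sorted(zip(scores, y_true), reverse=True)
    total_pos = sum(y_true)
    tp = fp = 0
    ap, prev_recall = 0.0, 0.0
    for _, label in pairs:
        if label == 1:
            tp += 1
        else:
            fp += 1
        precision = tp / (tp + fp)
        recall = tp / total_pos
        ap += (recall - prev_recall) * precision
        prev_recall = recall
    return ap

y_true = [1, 0, 1, 0, 0, 1]
scores = [0.9, 0.8, 0.7, 0.4, 0.3, 0.2]
print(round(average_precision(y_true, scores), 3))   # 0.722
```

Unlike AUC-ROC, this quantity ignores true negatives entirely, which is why it stays informative when negatives vastly outnumber positives.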
MCC (Matthews Correlation Coefficient)
Correlation coefficient between observed and predicted binary labels; a single balanced metric that remains informative even under class imbalance.
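The MCC formula uses all four confusion-matrix cells; a minimal sketch with toy labels (the function name is illustrative):

```python
import math

def mcc(y_true, y_pred):
    """MCC = (TP*TN - FP*FN) / sqrt((TP+FP)(TP+FN)(TN+FP)(TN+FN))."""
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return (tp * tn - fp * fn) / denom if denom else 0.0

# +1 = perfect prediction, 0 = random, -1 = total disagreement.
print(round(mcc([1, 1, 1, 0, 0, 0], [1, 1, 0, 0, 0, 1]), 3))   # 0.333
```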
F2-Score
Variant of the F-Score with beta=2, giving twice as much weight to recall as to precision, useful when false negatives are critical.
F0.5-Score
Variant of the F-Score with beta=0.5, favoring precision over recall, suitable when false positives are particularly costly.
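The asymmetry between the two variants above can be seen on a single toy model that has low precision but high recall; the helper and numbers are illustrative.

```python
def fbeta(p, r, beta):
    # F-beta = (1 + b^2) * P * R / (b^2 * P + R)
    b2 = beta * beta
    return (1 + b2) * p * r / (b2 * p + r)

# A recall-heavy model: many positives caught, many false alarms.
p, r = 0.5, 0.9
print(round(fbeta(p, r, 2.0), 3))   # 0.776  (F2 rewards the high recall)
print(round(fbeta(p, r, 0.5), 3))   # 0.549  (F0.5 punishes the low precision)
```

The same predictions score very differently depending on beta, which is why the choice should follow the relative cost of false negatives versus false positives.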