AI Glossary

The complete artificial intelligence glossary

162 categories · 2 032 subcategories · 23 060 terms

Precision

Metric evaluating the proportion of correct positive predictions among all positive predictions made by the model. It measures the quality of positive predictions and is penalized by false positives.
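As an illustration, a minimal sketch in plain Python; the helper name `precision` and the example labels are our own, not part of the glossary:

```python
def precision(y_true, y_pred):
    """Proportion of predicted positives that are actually positive."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    return tp / (tp + fp) if (tp + fp) else 0.0

# Two of the three positive predictions are correct:
# precision([1, 0, 1, 1, 0], [1, 1, 1, 0, 0]) == 2/3
```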


Recall

Indicator measuring the model's ability to correctly identify all actual positive instances in the dataset. Recall quantifies the completeness of positive predictions and is penalized by false negatives.
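A matching sketch in plain Python (helper name and data are illustrative only):

```python
def recall(y_true, y_pred):
    """Proportion of actual positives that the model found."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    return tp / (tp + fn) if (tp + fn) else 0.0

# Two of the three actual positives are found:
# recall([1, 0, 1, 1, 0], [1, 1, 1, 0, 0]) == 2/3
```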


F1-Score

Harmonic mean of precision and recall, providing a balance between these two often opposing metrics. The F1-Score is particularly useful for imbalanced classes where precision alone or recall alone would be misleading.
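The harmonic mean penalizes imbalance between the two metrics more than an arithmetic mean would; a hedged sketch (helper name ours):

```python
def f1_score(precision, recall):
    """Harmonic mean of precision and recall."""
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# With precision 0.5 and recall 1.0 the arithmetic mean is 0.75,
# but the F1-Score is only 2/3 because the weaker metric dominates.
```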


F-beta Score

Generalization of the F1-Score allowing different weighting of precision and recall based on the beta parameter. This metric adapts to specific domain needs by favoring either precision (beta<1) or recall (beta>1).
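A sketch of the standard formula, F_beta = (1 + beta²)·P·R / (beta²·P + R); the helper name is our own:

```python
def fbeta_score(precision, recall, beta):
    """Weighted harmonic mean; beta > 1 favors recall, beta < 1 favors precision."""
    b2 = beta ** 2
    denom = b2 * precision + recall
    if denom == 0:
        return 0.0
    return (1 + b2) * precision * recall / denom

# For a high-precision, low-recall model, F0.5 rewards it and F2 punishes it:
# fbeta_score(0.9, 0.3, 0.5) > fbeta_score(0.9, 0.3, 2.0)
```

With beta = 1 the formula reduces to the F1-Score.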


Confusion Matrix

Contingency table summarizing the performance of a classification model by comparing predictions to actual values. It breaks down results into true positives, false positives, true negatives, and false negatives for detailed analysis.
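For binary labels the four cells can be tallied in one pass; a minimal sketch (dict layout and names are our own choice):

```python
def confusion_matrix(y_true, y_pred):
    """Return the four cell counts as a dict, for binary labels 0/1."""
    cm = {"TP": 0, "FP": 0, "TN": 0, "FN": 0}
    for t, p in zip(y_true, y_pred):
        if t == 1 and p == 1:
            cm["TP"] += 1      # predicted positive, actually positive
        elif t == 0 and p == 1:
            cm["FP"] += 1      # predicted positive, actually negative
        elif t == 0 and p == 0:
            cm["TN"] += 1      # predicted negative, actually negative
        else:
            cm["FN"] += 1      # predicted negative, actually positive
    return cm

print(confusion_matrix([1, 0, 1, 1, 0], [1, 1, 1, 0, 0]))
# {'TP': 2, 'FP': 1, 'TN': 1, 'FN': 1}
```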


True Positive

Instance correctly classified as positive by the model, representing a successful prediction for the target class. True positives form the numerator of both precision and recall and appear in both denominators.


False Positive

Negative instance incorrectly predicted as positive by the model, corresponding to a type I error. False positives negatively affect precision but do not directly influence the recall of the positive class.


True Negative

Instance correctly identified as negative by the model, demonstrating its ability to reject irrelevant cases. True negatives are essential for evaluating the model's specificity on majority classes.


False Negative

Positive instance missed by the model and incorrectly classified as negative, representing a type II error. False negatives directly reduce recall and can have critical consequences depending on the application domain.
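The four outcomes above feed directly into the core metrics; a small worked sketch with hypothetical counts of our own choosing:

```python
# Hypothetical confusion-matrix counts for illustration only.
tp, fp, tn, fn = 40, 10, 45, 5

precision = tp / (tp + fp)   # hurt by false positives (type I errors)
recall = tp / (tp + fn)      # hurt by false negatives (type II errors)

# precision == 0.8, recall == 40/45 (about 0.889)
```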


Average Precision

Metric summarizing the precision-recall curve by calculating the weighted average of precisions at each recall threshold. It is particularly suitable for evaluating object detection and information retrieval systems.
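A minimal ranking-based sketch, averaging the precision at each rank where a relevant item appears (helper name and example ours):

```python
def average_precision(y_true, scores):
    """AP: precision averaged over the ranks at which positives are retrieved."""
    order = sorted(range(len(scores)), key=lambda i: -scores[i])
    total_pos = sum(y_true)
    hits, ap = 0, 0.0
    for rank, i in enumerate(order, start=1):
        if y_true[i] == 1:
            hits += 1
            ap += hits / rank   # precision at this recall step
    return ap / total_pos if total_pos else 0.0

# Positives ranked 1st and 3rd: AP = (1/1 + 2/3) / 2 = 5/6
# average_precision([1, 0, 1], [0.9, 0.8, 0.7])
```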


AUC-PR Score

Area under the precision-recall curve, measuring the overall model performance regardless of the classification threshold. Unlike AUC-ROC, AUC-PR is more sensitive to performance on minority classes.
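Given a list of (recall, precision) points from a threshold sweep, the area can be approximated with the trapezoidal rule; a hedged sketch (one common estimator, not the only one):

```python
def auc_pr(points):
    """Trapezoidal area under a precision-recall curve.

    `points` is a list of (recall, precision) pairs.
    """
    pts = sorted(points)  # order by increasing recall
    area = 0.0
    for (r0, p0), (r1, p1) in zip(pts, pts[1:]):
        area += (r1 - r0) * (p0 + p1) / 2
    return area

# auc_pr([(0.0, 1.0), (0.5, 1.0), (1.0, 2/3)]) == 11/12
```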


Sensitivity

Synonym for recall, measuring the proportion of actual positive cases correctly identified by the model. Sensitivity is crucial in medical applications where detecting positive cases is a priority.


Specificity

Model's ability to correctly identify negative instances, calculated as the ratio of true negatives to the total of actual negatives. Specificity complements recall to evaluate performance across all classes.
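A minimal sketch of that ratio in plain Python (helper name and labels are illustrative):

```python
def specificity(y_true, y_pred):
    """TN / (TN + FP): how well the model rejects negative instances."""
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    return tn / (tn + fp) if (tn + fp) else 0.0

# Two of the three actual negatives are correctly rejected:
# specificity([0, 0, 1, 0], [0, 1, 1, 0]) == 2/3
```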


Balanced Accuracy

Arithmetic mean of sensitivity and specificity, correcting biases in imbalanced datasets. This metric treats performance on majority and minority classes equally.
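A sketch showing why this matters on skewed data (helper name and example ours): a model that always predicts the majority class scores 90% accuracy here, yet only 0.5 balanced accuracy.

```python
def balanced_accuracy(y_true, y_pred):
    """(sensitivity + specificity) / 2 for binary labels 0/1."""
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    sens = tp / (tp + fn) if tp + fn else 0.0
    spec = tn / (tn + fp) if tn + fp else 0.0
    return (sens + spec) / 2

# One positive among ten samples; predicting all negatives:
# balanced_accuracy([1] + [0] * 9, [0] * 10) == 0.5
```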


Precision-Recall Curve

Two-dimensional graph plotting precision as a function of recall for different classification thresholds. This visualization allows analysis of the trade-off between precision and recall and selection of the optimal threshold.
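The curve can be traced by sweeping the decision threshold over the model's scores; a hedged sketch that returns the (recall, precision) points (helper name ours):

```python
def precision_recall_points(y_true, scores):
    """(recall, precision) pairs obtained by sweeping the decision threshold."""
    points = []
    for thr in sorted(set(scores), reverse=True):
        pred = [1 if s >= thr else 0 for s in scores]
        tp = sum(t and p for t, p in zip(y_true, pred))
        fp = sum((not t) and p for t, p in zip(y_true, pred))
        fn = sum(t and (not p) for t, p in zip(y_true, pred))
        prec = tp / (tp + fp) if tp + fp else 1.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        points.append((rec, prec))
    return points

# Lowering the threshold raises recall but can lower precision:
# precision_recall_points([1, 0, 1], [0.9, 0.8, 0.7])
# → [(0.5, 1.0), (0.5, 0.5), (1.0, 2/3)]
```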


Jaccard Score

Metric measuring the similarity between two sets as the ratio of their intersection to their union. In classification, the Jaccard score evaluates the agreement between the sets of predicted positive instances and actual positive instances.
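A minimal set-based sketch for binary labels (helper name and example ours):

```python
def jaccard_score(y_true, y_pred):
    """Intersection over union of predicted-positive and actual-positive sets."""
    pred_pos = {i for i, p in enumerate(y_pred) if p == 1}
    true_pos = {i for i, t in enumerate(y_true) if t == 1}
    union = pred_pos | true_pos
    return len(pred_pos & true_pos) / len(union) if union else 1.0

# Sets {0, 2, 3} and {0, 1, 2}: intersection 2, union 4:
# jaccard_score([1, 0, 1, 1], [1, 1, 1, 0]) == 0.5
```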


Top-K Precision

Metric evaluating the proportion of correct predictions among the K most confident predictions of the model. It is particularly relevant for recommendation systems where only the top suggestions are presented to the user.
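A short ranking sketch (helper name and scores are illustrative):

```python
def top_k_precision(y_true, scores, k):
    """Fraction of the k highest-scored items that are relevant."""
    top = sorted(range(len(scores)), key=lambda i: -scores[i])[:k]
    return sum(y_true[i] for i in top) / k

# Only one of the two top-scored items is relevant:
# top_k_precision([1, 0, 1, 0], [0.9, 0.8, 0.3, 0.1], k=2) == 0.5
```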


Top-K Recall

Extension of traditional recall limited to the K most probable predictions of the model. This metric measures the system's ability to cover relevant instances by considering only a subset of the total predictions.
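And its recall counterpart, normalized by all relevant items rather than by k (helper name ours):

```python
def top_k_recall(y_true, scores, k):
    """Fraction of all relevant items captured among the k highest-scored."""
    top = sorted(range(len(scores)), key=lambda i: -scores[i])[:k]
    total = sum(y_true)
    return sum(y_true[i] for i in top) / total if total else 0.0

# One of the two relevant items appears in the top 2; both appear in the top 3:
# top_k_recall([1, 0, 1, 0], [0.9, 0.8, 0.3, 0.1], k=2) == 0.5
# top_k_recall([1, 0, 1, 0], [0.9, 0.8, 0.3, 0.1], k=3) == 1.0
```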
