
AI Glossary

The complete AI glossary

162 categories · 2,032 subcategories · 23,060 terms

Precision@K

Metric measuring the proportion of relevant items among the top K recommendations, essential for evaluating the quality of top-ranked results.
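
A minimal pure-Python sketch (the item IDs in the example are illustrative):

```python
def precision_at_k(recommended, relevant, k):
    """Fraction of the top-k recommended items that are relevant."""
    hits = sum(1 for item in recommended[:k] if item in relevant)
    return hits / k

# 2 of the top 3 recommendations are relevant -> 0.667
print(precision_at_k(["a", "b", "c", "d"], {"a", "c"}, k=3))
```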


Recall@K

Metric measuring the fraction of all relevant items that appear among the top K recommendations, capturing how much of the relevant catalog the list retrieves.
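
A minimal sketch mirroring the Precision@K example above:

```python
def recall_at_k(recommended, relevant, k):
    """Fraction of all relevant items that appear in the top-k."""
    if not relevant:
        return 0.0
    hits = sum(1 for item in recommended[:k] if item in relevant)
    return hits / len(relevant)

# 2 of the 4 relevant items are retrieved in the top 3 -> 0.5
print(recall_at_k(["a", "b", "c"], {"a", "c", "x", "y"}, k=3))
```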


Mean Average Precision (MAP)

Aggregate metric averaging the precision computed at each rank where a relevant item appears, then averaging those per-user scores across all users or queries.
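
A minimal sketch of the two-level average (example inputs are made up):

```python
def average_precision(recommended, relevant):
    """Mean of Precision@i over every rank i holding a relevant item."""
    hits, total = 0, 0.0
    for i, item in enumerate(recommended, start=1):
        if item in relevant:
            hits += 1
            total += hits / i
    return total / len(relevant) if relevant else 0.0

def mean_average_precision(rec_lists, relevant_sets):
    """MAP: per-user average precisions, averaged across users."""
    aps = [average_precision(r, s) for r, s in zip(rec_lists, relevant_sets)]
    return sum(aps) / len(aps) if aps else 0.0

print(mean_average_precision([["a", "b", "c"]], [{"a", "c"}]))  # (1/1 + 2/3) / 2
```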


NDCG (Normalized Discounted Cumulative Gain)

Normalized score evaluating ranking quality by penalizing relevant items placed far from the top of the list, ideal for recommendations with graded relevance.
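
A minimal sketch using the common exponential-gain variant of DCG (graded relevances in the example are illustrative):

```python
from math import log2

def dcg(gains):
    """Discounted cumulative gain for graded relevance, top of list first."""
    return sum((2**g - 1) / log2(i + 1) for i, g in enumerate(gains, start=1))

def ndcg_at_k(gains, k):
    """DCG of the actual ranking divided by DCG of the ideal reordering."""
    ideal = dcg(sorted(gains, reverse=True)[:k])
    return dcg(gains[:k]) / ideal if ideal > 0 else 0.0

print(ndcg_at_k([0, 3, 1], k=3))  # a grade-3 item buried at rank 2
```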


RMSE (Root Mean Square Error)

Root mean square error used to evaluate rating prediction accuracy by measuring the difference between predicted and actual values.
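
A minimal sketch over paired predicted and actual ratings:

```python
from math import sqrt

def rmse(predicted, actual):
    """Square root of the mean squared difference between paired ratings."""
    errors = [(p - a) ** 2 for p, a in zip(predicted, actual)]
    return sqrt(sum(errors) / len(errors))

print(rmse([3.5, 4.0, 2.0], [4.0, 4.0, 1.0]))
```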


Hit Rate (HR)

Percentage of sessions where at least one relevant item appears in the top N recommendations, measuring the overall effectiveness of the system.
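
A minimal sketch, assuming one recommendation list and one set of relevant items per session:

```python
def hit_rate(rec_lists, relevant_sets, n):
    """Share of sessions with at least one relevant item in the top-n."""
    hits = sum(
        1 for recs, rel in zip(rec_lists, relevant_sets)
        if any(item in rel for item in recs[:n])
    )
    return hits / len(rec_lists) if rec_lists else 0.0

# One of two sessions gets a hit in its top 2 -> 0.5
print(hit_rate([["a", "b"], ["c", "d"]], [{"b"}, {"x"}], n=2))
```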


Catalog Coverage

Percentage of unique catalog items that can be recommended by the system, crucial to avoid concentration on a limited subset of items.
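
A minimal sketch over the union of all generated recommendation lists:

```python
def catalog_coverage(rec_lists, catalog):
    """Fraction of the catalog appearing in at least one recommendation list."""
    recommended = {item for recs in rec_lists for item in recs}
    return len(recommended & set(catalog)) / len(catalog)

print(catalog_coverage([["a", "b"], ["b", "c"]], ["a", "b", "c", "d"]))  # 0.75
```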


Intra-List Diversity

Measure of average dissimilarity between items in the same recommendation list, essential to avoid redundancy and enhance user experience.
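
A minimal sketch; the dissimilarity function is an assumption supplied by the caller (here a toy genre comparison):

```python
from itertools import combinations

def intra_list_diversity(items, dissimilarity):
    """Mean pairwise dissimilarity over all item pairs in one list."""
    pairs = list(combinations(items, 2))
    if not pairs:
        return 0.0
    return sum(dissimilarity(a, b) for a, b in pairs) / len(pairs)

# Toy dissimilarity: 1 if two items belong to different genres, else 0.
genre = {"a": "rock", "b": "rock", "c": "jazz"}
print(intra_list_diversity(["a", "b", "c"],
                           lambda x, y: float(genre[x] != genre[y])))  # 2/3
```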


Novelty

Degree to which recommended items are unfamiliar to the user, commonly computed from the inverse (or negative log) of their global popularity in the catalog.
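
One common formalization uses self-information; a minimal sketch, assuming `popularity` maps each item to its share of all interactions:

```python
from math import log2

def novelty(rec_lists, popularity):
    """Mean self-information -log2(p) of recommended items, where p is an
    item's share of all interactions; rarer items score as more novel."""
    scores = [-log2(popularity[item]) for recs in rec_lists for item in recs]
    return sum(scores) / len(scores) if scores else 0.0

# "a" appears in 50% of interactions, "b" in 1% -- "b" is far more novel.
print(novelty([["a", "b"]], {"a": 0.5, "b": 0.01}))
```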


Serendipity

Ability of the system to recommend relevant but unexpected items that pleasantly surprise the user, going beyond obvious, easily predicted choices.
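
One common operationalization counts relevant items that a baseline would not have surfaced; a minimal sketch where `expected` is a hypothetical baseline list (e.g., a popularity recommender):

```python
def serendipity(recommended, relevant, expected):
    """Share of recommendations that are relevant AND absent from a
    baseline's 'expected' list."""
    if not recommended:
        return 0.0
    surprising_hits = sum(
        1 for item in recommended if item in relevant and item not in expected
    )
    return surprising_hits / len(recommended)

print(serendipity(["a", "b", "c"], relevant={"a", "c"}, expected={"a"}))  # 1/3
```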


A/B Testing

Experimental methodology comparing the performance of two versions of the system on real user segments to measure business impact.
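
As an illustration of the statistics behind such a comparison, a minimal sketch of a two-proportion z-test on conversion counts (the traffic numbers are made up):

```python
from math import erf, sqrt

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Z-test for the difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # two-sided
    return z, p_value

# Variant B converted 530/10000 sessions vs A's 480/10000.
print(two_proportion_z_test(480, 10_000, 530, 10_000))
```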


Leave-One-Out Cross-Validation

Robust evaluation technique where each user interaction is alternately used as test data while others serve for training.
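
A minimal sketch of the splitting scheme itself, over a toy list of (user, item) interactions:

```python
def leave_one_out_splits(interactions):
    """Yield (train, test) pairs where each interaction is held out once."""
    for i, held_out in enumerate(interactions):
        yield interactions[:i] + interactions[i + 1:], held_out

data = [("u1", "item_a"), ("u1", "item_b"), ("u2", "item_c")]
for train, test in leave_one_out_splits(data):
    print("test:", test, "| train:", train)
```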


Offline vs Online Evaluation

Dual approach evaluating performance on historical data (offline) and with real interactions (online) to validate the complete effectiveness of the system.


Temporal Generalization

Ability of the system to maintain its performance on future data, evaluated sequentially on temporal splits rather than random ones.
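
A minimal sketch of a single temporal split, assuming each interaction carries a `ts` timestamp field:

```python
def temporal_split(interactions, train_ratio=0.8):
    """Sort by timestamp and cut once: earlier events train, later ones test."""
    ordered = sorted(interactions, key=lambda e: e["ts"])
    cut = int(len(ordered) * train_ratio)
    return ordered[:cut], ordered[cut:]

events = [{"ts": t, "user": "u1", "item": i} for t, i in enumerate("abcde")]
train, test = temporal_split(events)
print(len(train), "train /", len(test), "test")  # 4 train / 1 test
```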


Business Metrics Correlation

Analysis of the relationship between algorithmic metrics (NDCG, Precision) and business indicators (conversion, retention) to validate business relevance.
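
A minimal sketch computing a Pearson correlation between an offline score and an online KPI across experiment variants (all numbers below are hypothetical):

```python
def pearson(xs, ys):
    """Pearson correlation between paired offline scores and business KPIs."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

ndcg_per_variant = [0.31, 0.35, 0.40, 0.42]     # hypothetical offline scores
conversion_rate = [0.021, 0.024, 0.026, 0.029]  # hypothetical online KPI
print(pearson(ndcg_per_variant, conversion_rate))
```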


Cataract Metric

Composite score balancing precision, diversity, novelty, and coverage to holistically evaluate the overall quality of recommendations.
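
This term does not appear in the standard literature; purely as an illustration, a minimal sketch reading it as a weighted sum of normalized sub-metrics (the weights and inputs are made up):

```python
def composite_score(precision, diversity, novelty, coverage,
                    weights=(0.4, 0.2, 0.2, 0.2)):
    """Hypothetical weighted blend of four sub-metrics, each in [0, 1]."""
    parts = (precision, diversity, novelty, coverage)
    return sum(w * m for w, m in zip(weights, parts))

print(composite_score(precision=0.35, diversity=0.6, novelty=0.5, coverage=0.8))
```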


Expected Reciprocal Rank (ERR)

Probabilistic metric based on a cascade user model in which the user scans results from the top and stops at the first satisfying item, so the earliest positions carry most of the weight.
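
A minimal sketch of the standard ERR formula for graded relevance on a 0..g_max scale:

```python
def expected_reciprocal_rank(grades, g_max):
    """ERR under a cascade model: at rank i the user is satisfied (and stops)
    with probability R_i = (2**g_i - 1) / 2**g_max."""
    err, p_continue = 0.0, 1.0
    for rank, g in enumerate(grades, start=1):
        r = (2**g - 1) / 2**g_max
        err += p_continue * r / rank
        p_continue *= 1 - r
    return err

print(expected_reciprocal_rank([3, 0, 1], g_max=3))  # grades on a 0..3 scale
```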


User Coverage

Percentage of users for whom the system can generate recommendations, critical for measuring the universal applicability of the system.


Fairness Metrics

Indicators evaluating the equity of recommendation distribution among different demographic groups to avoid algorithmic biases.
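
A minimal sketch of one simple check in this family, assuming per-user recall scores and a group label per user (a demographic-parity-style gap; many other fairness criteria exist):

```python
def recall_gap_by_group(recall_per_user, group_of):
    """Largest difference in mean per-user recall between any two groups."""
    sums, counts = {}, {}
    for user, r in recall_per_user.items():
        g = group_of[user]
        sums[g] = sums.get(g, 0.0) + r
        counts[g] = counts.get(g, 0) + 1
    means = {g: sums[g] / counts[g] for g in sums}
    return max(means.values()) - min(means.values()), means

recalls = {"u1": 0.6, "u2": 0.4, "u3": 0.2}
groups = {"u1": "A", "u2": "A", "u3": "B"}
print(recall_gap_by_group(recalls, groups))  # (0.3, {'A': 0.5, 'B': 0.2})
```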


Exposure Bias Measurement

Quantification of the exposure disparity between popular and long-tail items, essential for evaluating recommendation balance.
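
One common way to quantify this disparity is a Gini coefficient over per-item exposure counts; a minimal sketch with made-up counts:

```python
def exposure_gini(exposures):
    """Gini coefficient of per-item exposure counts: 0 means perfectly even,
    values near 1 mean exposure concentrated on a few popular items."""
    xs = sorted(exposures)
    n, total = len(xs), sum(xs)
    if total == 0:
        return 0.0
    weighted = sum(rank * x for rank, x in enumerate(xs, start=1))
    return (2 * weighted) / (n * total) - (n + 1) / n

# Head items dominate exposure while the long tail is barely shown.
print(exposure_gini([900, 80, 10, 5, 3, 2]))
```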
