
AI Glossary

The complete dictionary of Artificial Intelligence

162 categories · 2,032 subcategories · 23,060 terms

Precision@K

Metric measuring the proportion of relevant items among the top K recommendations, essential for evaluating the quality of top-ranked results.
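A minimal Python sketch of the computation (item ids and inputs are illustrative, not from any particular library):

```python
def precision_at_k(recommended, relevant, k):
    """Fraction of the top-k recommended items that are relevant.

    `recommended` is a ranked list of item ids; `relevant` is a set of
    ground-truth relevant ids (both hypothetical inputs).
    """
    hits = sum(1 for item in recommended[:k] if item in relevant)
    return hits / k
```

Dividing by K rather than by the actual list length follows the strict definition, so a system that returns fewer than K items is penalized.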

Recall@K

Metric measuring the fraction of all relevant items that appear in the top K recommendations, i.e. relevant items retrieved divided by total relevant items available.
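A corresponding Python sketch (illustrative inputs; the denominator is the total number of relevant items, unlike Precision@K):

```python
def recall_at_k(recommended, relevant, k):
    """Fraction of all relevant items that appear in the top-k recommendations."""
    relevant = set(relevant)
    if not relevant:
        return 0.0
    hits = sum(1 for item in recommended[:k] if item in relevant)
    return hits / len(relevant)
```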

Mean Average Precision (MAP)

Aggregated metric averaging, over all users or queries, the average precision: the mean of the precision values computed at each rank where a relevant item appears in the recommendation list.
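A minimal sketch, assuming per-user (ranked list, relevant set) pairs as input:

```python
def average_precision(recommended, relevant):
    """Mean of precision@rank taken at each rank holding a relevant item."""
    hits = 0
    score = 0.0
    for rank, item in enumerate(recommended, start=1):
        if item in relevant:
            hits += 1
            score += hits / rank
    return score / len(relevant) if relevant else 0.0

def mean_average_precision(per_user):
    """MAP: average_precision averaged over (recommended, relevant) pairs."""
    return sum(average_precision(recs, rel) for recs, rel in per_user) / len(per_user)
```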

NDCG (Normalized Discounted Cumulative Gain)

Normalized score evaluating ranking quality by penalizing relevant items placed far from the top of the list, ideal for recommendations with graded relevance.
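A sketch using the common log2 position discount; `gains` is the list of graded relevance values in ranked order (illustrative input):

```python
import math

def dcg(gains):
    """Discounted cumulative gain: gain at rank i discounted by log2(i + 1)."""
    return sum(g / math.log2(i + 2) for i, g in enumerate(gains))

def ndcg_at_k(gains, k):
    """DCG of the actual ranking divided by the DCG of the ideal ranking."""
    ideal_dcg = dcg(sorted(gains, reverse=True)[:k])
    return dcg(gains[:k]) / ideal_dcg if ideal_dcg > 0 else 0.0
```

A perfect ranking scores 1.0; misplacing highly relevant items pushes the score toward 0. Some formulations use the gain mapping 2^rel − 1 instead of raw grades.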

RMSE (Root Mean Square Error)

Measures rating prediction accuracy as the square root of the mean squared difference between predicted and actual ratings, penalizing large errors more heavily than small ones.
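The formula in a few lines of Python (illustrative rating vectors):

```python
import math

def rmse(predicted, actual):
    """Square root of the mean squared prediction error."""
    assert len(predicted) == len(actual) and actual
    return math.sqrt(
        sum((p - a) ** 2 for p, a in zip(predicted, actual)) / len(actual)
    )
```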

Hit Rate (HR)

Percentage of sessions where at least one relevant item appears in the top N recommendations, measuring the overall effectiveness of the system.
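A sketch where each session is a hypothetical (recommended list, relevant set) pair:

```python
def hit_rate(sessions, n):
    """Fraction of sessions with at least one relevant item in the top n."""
    hits = sum(
        1 for recs, relevant in sessions
        if any(item in relevant for item in recs[:n])
    )
    return hits / len(sessions)
```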

Catalog Coverage

Percentage of unique catalog items that can be recommended by the system, crucial to avoid concentration on a limited subset of items.
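A minimal sketch over a batch of recommendation lists (illustrative inputs):

```python
def catalog_coverage(recommendation_lists, catalog_size):
    """Share of the catalog that appears in at least one recommendation list."""
    recommended = set()
    for recs in recommendation_lists:
        recommended.update(recs)
    return len(recommended) / catalog_size
```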

Intra-List Diversity

Measure of average dissimilarity between items in the same recommendation list, essential to avoid redundancy and enhance user experience.
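A sketch that averages a user-supplied dissimilarity over all item pairs; the genre sets and Jaccard distance below are illustrative assumptions, since the choice of dissimilarity is application-specific:

```python
from itertools import combinations

def intra_list_diversity(items, dissimilarity):
    """Mean pairwise dissimilarity over all pairs in one recommendation list."""
    pairs = list(combinations(items, 2))
    if not pairs:
        return 0.0
    return sum(dissimilarity(a, b) for a, b in pairs) / len(pairs)

# Illustrative dissimilarity: Jaccard distance between made-up genre sets.
GENRES = {"m1": {"action"}, "m2": {"action", "comedy"}, "m3": {"drama"}}

def jaccard_distance(a, b):
    sa, sb = GENRES[a], GENRES[b]
    return 1 - len(sa & sb) / len(sa | sb)
```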

Novelty

Degree to which recommended items are unfamiliar to the user, typically derived from the inverse of their global popularity in the catalog.
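One common formulation scores each item by its self-information, −log2 of its popularity share; the interaction counts below are illustrative:

```python
import math

def novelty(recommended, popularity, total_interactions):
    """Mean self-information -log2(p(i)) over recommended items,
    where p(i) is item i's share of all interactions."""
    return sum(
        -math.log2(popularity[item] / total_interactions)
        for item in recommended
    ) / len(recommended)
```

An item seen in 25 of 100 interactions contributes −log2(0.25) = 2 bits; rarer items contribute more, so long-tail recommendations raise the score.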

Serendipity

Ability of the system to recommend relevant but unexpected items that positively surprise the user beyond simple predictions.

A/B Testing

Experimental methodology comparing the performance of two versions of the system on real user segments to measure business impact.

Leave-One-Out Cross-Validation

Robust evaluation technique where each user interaction is alternately used as test data while others serve for training.
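The split-generation step can be sketched as a generator over a flat list of interactions (the interaction records themselves are illustrative):

```python
def leave_one_out_splits(interactions):
    """Yield (train, test) pairs: each interaction is held out once as the
    test case while all remaining interactions form the training set."""
    for i in range(len(interactions)):
        held_out = interactions[i]
        train = interactions[:i] + interactions[i + 1:]
        yield train, held_out
```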

Offline vs Online Evaluation

Dual approach evaluating performance on historical data (offline) and with real interactions (online) to validate the complete effectiveness of the system.

Temporal Generalization

Ability of the system to maintain its performance on future data, evaluated sequentially on temporal splits rather than random ones.
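The temporal split itself is simple to sketch; the timestamped interaction records below are an illustrative assumption:

```python
def temporal_split(interactions, train_fraction=0.8):
    """Split interactions chronologically: the oldest train_fraction goes to
    training, the most recent remainder to test (never shuffled randomly)."""
    ordered = sorted(interactions, key=lambda rec: rec["timestamp"])
    cut = int(len(ordered) * train_fraction)
    return ordered[:cut], ordered[cut:]
```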

Business Metrics Correlation

Analysis of the relationship between algorithmic metrics (NDCG, Precision) and business indicators (conversion, retention) to validate business relevance.

Cataract Metric

Composite score balancing precision, diversity, novelty, and coverage to holistically evaluate the overall quality of recommendations.

Expected Reciprocal Rank (ERR)

Probabilistic metric based on a cascade model of user behavior: the user scans the list top-down and stops at the first satisfying item, so the first positions carry most of the weight.
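A sketch using one common mapping from a relevance grade g to a stop probability, R = (2^g − 1) / 2^g_max; the graded gains are illustrative inputs:

```python
def expected_reciprocal_rank(gains, max_grade):
    """ERR: sum over ranks of P(user reaches rank) * P(satisfied here) / rank."""
    err = 0.0
    p_reach = 1.0  # probability the user is still examining the list
    for rank, g in enumerate(gains, start=1):
        r = (2 ** g - 1) / (2 ** max_grade)  # satisfaction probability at rank
        err += p_reach * r / rank
        p_reach *= 1 - r  # user continues only if not satisfied
    return err
```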

User Coverage

Percentage of users for whom the system can generate recommendations, critical for measuring the universal applicability of the system.

Fairness Metrics

Indicators evaluating the equity of recommendation distribution among different demographic groups to avoid algorithmic biases.

Exposure Bias Measurement

Quantification of the exposure disparity between popular and long-tail items, essential for evaluating recommendation balance.
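One way to quantify this disparity is a Gini coefficient over per-item exposure counts (0 = perfectly equal exposure, values near 1 = exposure concentrated on a few items); this is a sketch of one possible operationalization, not a standard API:

```python
from collections import Counter

def exposure_gini(recommendation_lists, catalog):
    """Gini coefficient of how often each catalog item is recommended."""
    counts = Counter()
    for recs in recommendation_lists:
        counts.update(recs)
    exposures = sorted(counts.get(item, 0) for item in catalog)
    n = len(exposures)
    total = sum(exposures)
    if total == 0:
        return 0.0
    # Standard Gini formula over exposures sorted in ascending order.
    cum = sum((i + 1) * x for i, x in enumerate(exposures))
    return (2 * cum) / (n * total) - (n + 1) / n
```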
