AI Glossary

The complete Artificial Intelligence dictionary

162 categories · 2,032 subcategories · 23,060 terms

Gradient Boosting

Ensemble learning technique that builds predictive models sequentially, where each new model corrects the errors of the previous ones by taking gradient-descent steps on a loss function in function space.
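
A minimal from-scratch sketch of this loop for squared loss, using shallow scikit-learn trees as weak learners (the toy dataset and hyperparameter values are illustrative assumptions, not part of the definition):

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X).ravel() + rng.normal(scale=0.1, size=200)

learning_rate = 0.1
prediction = np.full(len(y), y.mean())  # F_0: initial constant model
for _ in range(100):
    residuals = y - prediction          # negative gradient of squared loss
    tree = DecisionTreeRegressor(max_depth=2).fit(X, residuals)
    prediction += learning_rate * tree.predict(X)  # additive update
```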

Additive Learning

Fundamental principle of Gradient Boosting where the final model is the weighted sum of predictions from multiple weak learners, each added to improve the overall performance.
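
In symbols, with F_0 the initial model, h_m the m-th weak learner, and ν the learning rate (notation assumed here for illustration):

```latex
F_M(x) = F_0(x) + \sum_{m=1}^{M} \nu \, h_m(x)
```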

Learning Rate

Hyperparameter that controls the contribution of each weak learner to the final model, acting as a shrinkage factor that slows learning and helps prevent overfitting.
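
A scikit-learn sketch of the usual trade-off; the specific values are illustrative, not prescriptive:

```python
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor

X, y = make_regression(n_samples=500, n_features=10, noise=5.0, random_state=0)
# A smaller learning_rate shrinks each tree's contribution, so more
# estimators are usually needed, but generalization often improves.
model = GradientBoostingRegressor(learning_rate=0.05, n_estimators=500).fit(X, y)
```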

Residuals

Prediction errors of the current model, calculated as the difference between observed values and predictions, on which the next weak learner is trained in Gradient Boosting.
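
A minimal sketch of one boosting step on a toy dataset, assuming scikit-learn trees as weak learners:

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

X = np.linspace(-3, 3, 100).reshape(-1, 1)
y = np.sin(X).ravel()

current_prediction = np.full(len(y), y.mean())  # e.g. the initial model F_0
residuals = y - current_prediction              # observed minus predicted
next_tree = DecisionTreeRegressor(max_depth=2).fit(X, residuals)
```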

Number of Estimators

Hyperparameter defining the number of weak learners (iterations) to build in the Gradient Boosting model, directly influencing complexity and performance.
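
One common way to choose this value is to watch validation error as estimators are added; a sketch using scikit-learn's staged_predict (dataset and sizes illustrative):

```python
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

X, y = make_regression(n_samples=1000, n_features=10, noise=10.0, random_state=0)
X_tr, X_val, y_tr, y_val = train_test_split(X, y, random_state=0)
model = GradientBoostingRegressor(n_estimators=300).fit(X_tr, y_tr)
# staged_predict yields predictions after each added estimator.
errors = [mean_squared_error(y_val, p) for p in model.staged_predict(X_val)]
best_n = errors.index(min(errors)) + 1
```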

XGBoost (Extreme Gradient Boosting)

Optimized and parallelized implementation of Gradient Boosting that incorporates regularization, handling of missing values, and tree pruning techniques for superior efficiency.
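
A minimal usage sketch, assuming the xgboost Python package is installed; the synthetic data is illustrative:

```python
import numpy as np
from xgboost import XGBRegressor

rng = np.random.default_rng(0)
X = rng.random((200, 5))
y = X @ rng.random(5)
X[X < 0.05] = np.nan  # XGBoost routes missing values natively during splits
model = XGBRegressor(n_estimators=200, learning_rate=0.1, reg_lambda=1.0)
model.fit(X, y)
```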

LightGBM

Gradient Boosting framework that uses a leaf-wise tree growth technique instead of level-wise, offering increased training speed and reduced memory consumption.
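
A minimal usage sketch, assuming the lightgbm Python package is installed:

```python
from lightgbm import LGBMRegressor
from sklearn.datasets import make_regression

X, y = make_regression(n_samples=1000, n_features=20, random_state=0)
# num_leaves bounds the leaf-wise growth instead of a fixed depth limit.
model = LGBMRegressor(num_leaves=31, learning_rate=0.1, n_estimators=200)
model.fit(X, y)
```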

CatBoost

Gradient Boosting algorithm specialized in the efficient handling of categorical features, using ordered target-statistics encoding and an ordered boosting scheme that avoids target leakage.
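
A minimal usage sketch, assuming the catboost Python package is installed; the toy categorical dataset is illustrative:

```python
from catboost import CatBoostClassifier

X = [["red", 1.0], ["blue", 2.0], ["red", 0.5], ["green", 3.0]] * 25
y = [0, 1, 0, 1] * 25
# Column 0 holds raw string categories; CatBoost encodes them internally.
model = CatBoostClassifier(iterations=100, verbose=False)
model.fit(X, y, cat_features=[0])
```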

Stochastic Gradient Boosting

Variant of Gradient Boosting where each weak learner is trained on a random subset of training data, reducing correlation between trees and improving generalization.
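
In scikit-learn this corresponds to the subsample parameter; a minimal sketch with illustrative values:

```python
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor

X, y = make_regression(n_samples=1000, n_features=10, random_state=0)
# subsample < 1.0 trains each tree on a random fraction of the rows
# (drawn without replacement), decorrelating the weak learners.
model = GradientBoostingRegressor(subsample=0.5, n_estimators=200).fit(X, y)
```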

Feature Subsampling

Regularization technique in Gradient Boosting that involves considering only a random subset of predictive variables for each tree node split, limiting overfitting.
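
In scikit-learn this corresponds to the max_features parameter; a minimal sketch:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier

X, y = make_classification(n_samples=500, n_features=20, random_state=0)
# Each split considers only sqrt(n_features) randomly chosen features.
model = GradientBoostingClassifier(max_features="sqrt").fit(X, y)
```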

Maximum Tree Depth

Hyperparameter controlling the complexity of each weak learner by limiting the number of decision splits, balancing bias and variance in Gradient Boosting models.
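
A minimal scikit-learn sketch contrasting shallow and deeper weak learners (values illustrative):

```python
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor

X, y = make_regression(n_samples=500, n_features=10, random_state=0)
# Shallow trees (depth 2-4) are typical weak learners: more bias, less variance.
shallow = GradientBoostingRegressor(max_depth=2).fit(X, y)
deeper = GradientBoostingRegressor(max_depth=6).fit(X, y)  # more flexible, riskier
```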

Pseudo-Residuals

Generalization of residuals in Gradient Boosting, representing the negative gradient of the loss function with respect to current predictions, enabling optimization for various loss functions.
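
A small numerical sketch of the negative gradient for two common losses (values illustrative):

```python
import numpy as np

y = np.array([1.0, 0.0, 1.0])

# Squared loss L = 0.5 * (y - F)**2: the negative gradient is y - F,
# so pseudo-residuals coincide with ordinary residuals.
F = np.array([0.6, 0.3, 0.8])
pseudo_res_squared = y - F

# Log-loss on raw scores F: the negative gradient is y - sigmoid(F).
F_raw = np.array([0.4, -1.2, 1.5])
pseudo_res_logloss = y - 1.0 / (1.0 + np.exp(-F_raw))
```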

Regression Boosting

Application of Gradient Boosting to regression problems where the goal is to predict continuous values, typically using a squared or absolute loss function.
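
A minimal scikit-learn sketch; the loss names shown assume scikit-learn 1.0 or later:

```python
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor

X, y = make_regression(n_samples=500, n_features=10, noise=10.0, random_state=0)
# loss="squared_error" is the default; "absolute_error" is more robust to outliers.
model = GradientBoostingRegressor(loss="absolute_error").fit(X, y)
```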

Classification Boosting

Application of Gradient Boosting to classification problems, using specific loss functions like log-loss (cross-entropy) to guide optimization of class probabilities.
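
A minimal scikit-learn sketch; the loss name "log_loss" assumes scikit-learn 1.1 or later (older releases call it "deviance"):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier

X, y = make_classification(n_samples=500, n_features=20, random_state=0)
model = GradientBoostingClassifier(loss="log_loss").fit(X, y)
probabilities = model.predict_proba(X)  # per-class probability estimates
```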

L1/L2 Regularization

Penalization techniques added to the loss function in Gradient Boosting to control the complexity of tree leaf weights, reducing overfitting and improving robustness.
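
In XGBoost these penalties map to reg_alpha (L1) and reg_lambda (L2); a minimal sketch assuming the xgboost package is installed:

```python
from sklearn.datasets import make_regression
from xgboost import XGBRegressor

X, y = make_regression(n_samples=500, n_features=10, random_state=0)
model = XGBRegressor(reg_alpha=0.5,   # L1 penalty on leaf weights
                     reg_lambda=2.0)  # L2 penalty on leaf weights
model.fit(X, y)
```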
