
AI Glossary

The complete dictionary of Artificial Intelligence

162 Categories · 2,032 Subcategories · 23,060 Terms
📖 Terms

Gradient Boosting

Ensemble learning technique that builds predictive models sequentially, where each new model corrects the errors of the previous ones by optimizing a loss function via gradient descent.
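
As a hedged illustration of this sequential residual-fitting loop, here is a minimal from-scratch sketch for squared-loss regression on 1-D data, using one-split "stumps" as weak learners. All names (`fit_stump`, `boost`) are illustrative, not from any library.

```python
# Minimal gradient boosting sketch: squared loss, 1-D inputs, stump learners.
def fit_stump(xs, ys):
    """Find the threshold split minimizing squared error; return a predictor."""
    best = None
    for t in sorted(set(xs)):
        left = [y for x, y in zip(xs, ys) if x <= t]
        right = [y for x, y in zip(xs, ys) if x > t]
        if not left or not right:
            continue
        lm, rm = sum(left) / len(left), sum(right) / len(right)
        sse = sum((y - lm) ** 2 for y in left) + sum((y - rm) ** 2 for y in right)
        if best is None or sse < best[0]:
            best = (sse, t, lm, rm)
    _, t, lm, rm = best
    return lambda x: lm if x <= t else rm

def boost(xs, ys, n_estimators=50, learning_rate=0.1):
    f0 = sum(ys) / len(ys)                 # initial constant model
    stumps, preds = [], [f0] * len(xs)
    for _ in range(n_estimators):
        # residuals = negative gradient of squared loss at current predictions
        residuals = [y - p for y, p in zip(ys, preds)]
        h = fit_stump(xs, residuals)       # each new model corrects prior errors
        stumps.append(h)
        preds = [p + learning_rate * h(x) for p, x in zip(preds, xs)]
    return lambda x: f0 + learning_rate * sum(h(x) for h in stumps)

xs = [1, 2, 3, 4, 5, 6]
ys = [1.0, 1.2, 0.9, 7.1, 6.8, 7.3]
model = boost(xs, ys)
```

After 50 rounds the ensemble closely tracks the step-shaped data, with each stump contributing a small, learning-rate-scaled correction.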

Additive Learning

Fundamental principle of Gradient Boosting where the final model is the weighted sum of predictions from multiple weak learners, each added to improve the overall performance.
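
The principle can be written as F_M(x) = F_0 + ν · Σ h_m(x). A tiny illustrative sketch, with toy weak learners and hypothetical names:

```python
# Illustrative sketch: the final prediction is the initial constant f0 plus a
# learning-rate-weighted sum of weak-learner outputs. Names are hypothetical.
def additive_predict(x, f0, weak_learners, learning_rate):
    prediction = f0
    for h in weak_learners:                  # each learner adds a small correction
        prediction += learning_rate * h(x)
    return prediction

# Two toy "weak learners" with fixed outputs, for demonstration only:
h1 = lambda x: 2.0
h2 = lambda x: -0.5
result = additive_predict(0.0, 1.0, [h1, h2], 0.1)   # 1.0 + 0.2 - 0.05 ≈ 1.15
```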

Learning Rate

Hyperparameter that controls the influence of each weak learner on the final model, acting as a weighting factor to prevent overfitting.
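
In the idealized case where each weak learner fits the current residual exactly, the remaining error shrinks by a factor of (1 − learning_rate) per round — which is why a smaller learning rate typically needs more estimators. A hedged sketch of that relationship:

```python
# Idealized illustration (assumes each learner fits the residual perfectly):
# after n rounds the remaining error is scaled by (1 - learning_rate)**n.
def remaining_error(initial_error, learning_rate, n_rounds):
    return initial_error * (1 - learning_rate) ** n_rounds

small_lr = remaining_error(1.0, 0.1, 50)   # slow, steady shrinkage
large_lr = remaining_error(1.0, 0.5, 50)   # much faster, riskier in practice
```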

Residuals

Prediction errors of the current model, calculated as the difference between observed values and predictions, on which the next weak learner is trained in Gradient Boosting.
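
In code, residuals are simply observed minus predicted values; the next weak learner is fit to exactly this list:

```python
# Residuals = observed - predicted; the training targets for the next learner.
observed  = [3.0, 1.0, 4.0]
predicted = [2.5, 1.5, 4.0]
residuals = [y - p for y, p in zip(observed, predicted)]   # [0.5, -0.5, 0.0]
```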

Number of Estimators

Hyperparameter defining the number of weak learners (iterations) to build in the Gradient Boosting model, directly influencing complexity and performance.
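
In practice this hyperparameter is often chosen by early stopping on a validation set. A minimal sketch, assuming per-round validation errors are available (`best_round` and `patience` are hypothetical names):

```python
# Sketch of early stopping: keep the round with the lowest validation error,
# stopping once no improvement has been seen for `patience` rounds.
def best_round(val_errors, patience=5):
    best_i, best_e = 0, float("inf")
    for i, e in enumerate(val_errors):
        if e < best_e:
            best_i, best_e = i, e
        elif i - best_i >= patience:
            break                      # no improvement for `patience` rounds
    return best_i

errors = [1.0, 0.8, 0.6, 0.55, 0.56, 0.58, 0.59, 0.60, 0.61, 0.62]
```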

XGBoost (Extreme Gradient Boosting)

Optimized and parallelized implementation of Gradient Boosting that incorporates regularization, handling of missing values, and tree pruning techniques for superior efficiency.
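
The core of XGBoost's second-order formulation can be sketched in a few lines: with gradient sum G, Hessian sum H, L2 penalty λ, and per-leaf cost γ, the optimal leaf weight is −G/(H+λ) and a split's gain compares child scores against the parent. A simplified sketch of that math, not the library's code:

```python
# Simplified sketch of XGBoost-style second-order leaf math (illustrative,
# not the library implementation). G/H are sums of per-row gradients/hessians.
def leaf_weight(G, H, lam):
    return -G / (H + lam)                  # optimal leaf value under L2 penalty

def split_gain(Gl, Hl, Gr, Hr, lam, gamma):
    def score(G, H):
        return G * G / (H + lam)
    # Improvement from splitting the parent into left/right children,
    # minus the complexity cost gamma for the extra leaf.
    return 0.5 * (score(Gl, Hl) + score(Gr, Hr) - score(Gl + Gr, Hl + Hr)) - gamma
```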

LightGBM

Gradient Boosting framework that uses a leaf-wise tree growth technique instead of level-wise, offering increased training speed and reduced memory consumption.
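
The contrast can be shown in one step: level-wise growth splits every leaf at the current depth, while leaf-wise growth splits only the single leaf with the largest estimated gain. A toy illustration with made-up gains (not LightGBM internals):

```python
# Toy illustration of the two tree-growth strategies:
leaf_gains = {"leaf_a": 0.8, "leaf_b": 3.1, "leaf_c": 1.4}

level_wise_choice = list(leaf_gains)                    # split all current leaves
leaf_wise_choice = max(leaf_gains, key=leaf_gains.get)  # split only the best leaf
```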

CatBoost

Gradient Boosting algorithm specialized in the efficient handling of categorical features, using ordered target encoding, an ordered boosting scheme, and symmetric (oblivious) decision trees.

Stochastic Gradient Boosting

Variant of Gradient Boosting where each weak learner is trained on a random subset of training data, reducing correlation between trees and improving generalization.
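
A minimal sketch of the row-subsampling step, assuming a `subsample` fraction as in common implementations (the function name is illustrative):

```python
import random

# Each boosting round trains its weak learner on a random subset of rows.
def sample_rows(n_rows, subsample, rng):
    k = max(1, int(n_rows * subsample))
    return rng.sample(range(n_rows), k)    # k distinct row indices

rng = random.Random(0)                     # fixed seed for reproducibility
idx = sample_rows(10, 0.5, rng)            # 5 distinct indices in [0, 10)
```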

Feature Subsampling

Regularization technique in Gradient Boosting that considers only a random subset of predictive variables at each tree-node split, limiting overfitting.
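
The column counterpart of row subsampling can be sketched the same way (akin to `colsample`-style parameters; names are illustrative):

```python
import random

# At each split, only a random subset of feature indices is considered.
def candidate_features(n_features, colsample, rng):
    k = max(1, int(n_features * colsample))
    return sorted(rng.sample(range(n_features), k))

feats = candidate_features(8, 0.5, random.Random(1))   # 4 distinct column indices
```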

Maximum Tree Depth

Hyperparameter controlling the complexity of each weak learner by limiting the number of decision splits, balancing bias and variance in Gradient Boosting models.

Pseudo-Residuals

Generalization of residuals in Gradient Boosting, representing the negative gradient of the loss function with respect to current predictions, enabling optimization for various loss functions.
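
For squared loss the negative gradient reduces to the ordinary residual; for log-loss (labels in {0, 1}, F a raw score) it is y − sigmoid(F). A hedged sketch of both cases:

```python
import math

# Pseudo-residuals: negative gradient of the loss w.r.t. the current score f.
def pseudo_residual_squared(y, f):
    # L = 0.5 * (y - f)**2  =>  -dL/df = y - f  (the ordinary residual)
    return y - f

def pseudo_residual_logloss(y, f):
    # binary cross-entropy on a raw score f  =>  -dL/df = y - sigmoid(f)
    return y - 1.0 / (1.0 + math.exp(-f))
```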

Regression Boosting

Application of Gradient Boosting to regression problems where the goal is to predict continuous values, typically using a squared or absolute loss function.

Classification Boosting

Application of Gradient Boosting to classification problems, using specific loss functions like log-loss (cross-entropy) to guide optimization of class probabilities.

L1/L2 Regularization

Penalization techniques added to the loss function in Gradient Boosting to control the complexity of tree leaf weights, reducing overfitting and improving robustness.
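
In XGBoost-style objectives, the L2 penalty λ enters the leaf-weight denominator while the L1 penalty α soft-thresholds the gradient sum. A simplified sketch of that effect (not library code):

```python
# Simplified illustration of how L1 (alpha) and L2 (lam) shape a leaf weight:
# w* = -sign(G) * max(|G| - alpha, 0) / (H + lam)
def regularized_leaf_weight(G, H, alpha, lam):
    magnitude = max(abs(G) - alpha, 0.0)   # L1 soft-thresholds the gradient sum
    sign = 1.0 if G >= 0 else -1.0
    return -sign * magnitude / (H + lam)   # L2 shrinks via the denominator
```

A large enough α zeroes the leaf weight entirely, while λ shrinks every weight toward zero without eliminating it.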
