
AI Glossary

The complete dictionary of Artificial Intelligence

162 categories · 2,032 subcategories · 23,060 terms

Out-of-Bag Score

Performance metric derived from the out-of-bag error, often expressed as 1 minus the OOB error, providing an internal evaluation of model quality without cross-validation.
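As a minimal sketch of how the OOB score is computed, the snippet below bags a 1-NN base learner (an illustrative stand-in for a decision tree; all function names are hypothetical). Each training point is scored only by the models whose bootstrap samples excluded it, and the score is 1 minus the resulting error rate:

```python
import random
from collections import Counter

def knn1(bx, by, x):
    """1-NN base learner: label of the nearest in-bag point (stand-in for a tree)."""
    return by[min(range(len(bx)), key=lambda i: abs(bx[i] - x))]

def oob_score(X, y, n_models=50, seed=0):
    """Out-of-bag score = 1 - OOB error, via majority vote of OOB predictions."""
    rng = random.Random(seed)
    n = len(X)
    votes = [Counter() for _ in range(n)]            # OOB votes per training point
    for _ in range(n_models):
        bag = [rng.randrange(n) for _ in range(n)]   # bootstrap sample (with replacement)
        bx, by = [X[i] for i in bag], [y[i] for i in bag]
        for i in set(range(n)) - set(bag):           # out-of-bag points only
            votes[i][knn1(bx, by, X[i])] += 1
    scored = [i for i in range(n) if votes[i]]       # points seen OOB at least once
    wrong = sum(votes[i].most_common(1)[0][0] != y[i] for i in scored)
    return 1 - wrong / len(scored)

# Toy 1-D data: class 0 clusters near 0, class 1 near 10
X = [0.1, 0.4, 0.2, 0.3, 9.8, 9.9, 10.1, 10.3]
y = [0, 0, 0, 0, 1, 1, 1, 1]
print(oob_score(X, y))  # near 1.0 on this separable toy data
```

Note that no data is held out: every point is both a training point (for the models that drew it) and a validation point (for the models that did not).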


OOB Estimate

Approximately unbiased estimate of the test error, obtained by aggregating, for each observation in the training set, the predictions of the models whose bootstrap samples did not contain it.


Bagging Error

Generalization error of a bagging model, which can be efficiently estimated by the out-of-bag method without requiring an external validation set.


Random Forest OOB

Specific application of out-of-bag error to random forests, where each tree is evaluated on the samples not included in its bootstrap sample, yielding an estimate of the forest's overall performance.


OOB Variable Importance

Measure of variable importance calculated by evaluating the increase in OOB error when the values of a variable are randomly permuted in out-of-bag samples.
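The permutation scheme can be sketched as follows (again with a 1-NN base learner as a hypothetical stand-in for a tree; function names are illustrative). For each model, the chosen feature's values are shuffled among that model's out-of-bag points, and the importance is the resulting rise in OOB error:

```python
import random

def knn1(bx, by, x):
    """1-NN base learner on feature vectors (illustrative stand-in for a tree)."""
    best = min(range(len(bx)),
               key=lambda i: sum((a - b) ** 2 for a, b in zip(bx[i], x)))
    return by[best]

def oob_permutation_importance(X, y, feature, n_models=30, seed=0):
    """Increase in OOB error when `feature` is randomly permuted in OOB samples."""
    rng = random.Random(seed)
    n = len(X)
    base_err = perm_err = count = 0
    for _ in range(n_models):
        bag = [rng.randrange(n) for _ in range(n)]
        oob = [i for i in range(n) if i not in bag]
        bx, by = [X[i] for i in bag], [y[i] for i in bag]
        shuffled = [X[i][feature] for i in oob]
        rng.shuffle(shuffled)                      # permute the feature's OOB values
        for pos, i in enumerate(oob):
            base_err += knn1(bx, by, X[i]) != y[i]
            xp = list(X[i])
            xp[feature] = shuffled[pos]            # same point, permuted feature
            perm_err += knn1(bx, by, xp) != y[i]
            count += 1
    return (perm_err - base_err) / count

# Feature 0 separates the classes; feature 1 is pure noise
data_rng = random.Random(1)
X = [(float(c) * 10, data_rng.random()) for c in (0, 1) for _ in range(6)]
y = [c for c in (0, 1) for _ in range(6)]
print(oob_permutation_importance(X, y, 0))  # clearly positive: informative feature
print(oob_permutation_importance(X, y, 1))  # essentially zero: noise feature
```

Permuting an informative feature breaks its relationship with the target and the OOB error rises; permuting noise leaves the error unchanged.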


OOB Cross-Validation

Alternative to traditional cross-validation that uses the out-of-bag samples of each bootstrap iteration as internal validation sets.


Bagging Variance Reduction

Fundamental property of bagging that reduces prediction variance by averaging the outputs of models trained on different bootstrap samples.
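This effect can be observed directly with a toy predictor, here the sample mean (an illustrative sketch, not a claim about any particular library). Across repeated trials, the averaged (bagged) predictor fluctuates far less than a single bootstrap model:

```python
import random
from statistics import mean, pvariance

def bootstrap_mean(data, rng):
    """Base predictor: the mean of one bootstrap resample."""
    return mean(rng.choice(data) for _ in data)

def bagged_mean(data, rng, n_models=25):
    """Bagged predictor: average the base predictor over many resamples."""
    return mean(bootstrap_mean(data, rng) for _ in range(n_models))

rng = random.Random(0)
data = [rng.gauss(0.0, 1.0) for _ in range(40)]

single = [bootstrap_mean(data, rng) for _ in range(200)]
bagged = [bagged_mean(data, rng) for _ in range(200)]
print(pvariance(single))  # variance of one bootstrap model
print(pvariance(bagged))  # markedly smaller after averaging
```

Averaging B roughly independent predictors divides the variance by up to B, while leaving the bias essentially unchanged.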


OOB Confidence Interval

Confidence interval for the generalization error estimated from the distribution of out-of-bag errors across different bootstrap samples.


Subagging OOB

Bagging variant using subsamples without replacement, where the out-of-bag estimation must be adapted to account for the different sampling strategy.


OOB Proximity Matrix

Matrix measuring the proximity between observations, based on how often two observations fall into the same terminal node of a tree, counted over the trees for which both observations are out-of-bag.
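A minimal sketch of the idea, with a depth-1 stump (split at the bootstrap-sample mean) standing in for a full tree; the function name and data are illustrative:

```python
import random

def oob_proximity(X, n_trees=200, seed=0):
    """Proximity of two points: fraction of trees in which both are out-of-bag
    and fall in the same terminal leaf. Illustrative 'tree' = a depth-1 stump
    that splits at the mean of its bootstrap sample."""
    rng = random.Random(seed)
    n = len(X)
    together = [[0] * n for _ in range(n)]   # times both OOB and in same leaf
    both_oob = [[0] * n for _ in range(n)]   # times both OOB at all
    for _ in range(n_trees):
        bag = [rng.randrange(n) for _ in range(n)]
        split = sum(X[i] for i in bag) / len(bag)    # stump threshold
        oob = [i for i in range(n) if i not in bag]
        for a in range(len(oob)):
            for b in range(a + 1, len(oob)):
                i, j = oob[a], oob[b]
                both_oob[i][j] += 1
                if (X[i] < split) == (X[j] < split):
                    together[i][j] += 1
    return [[together[i][j] / both_oob[i][j] if both_oob[i][j] else 0.0
             for j in range(n)] for i in range(n)]

X = [0.1, 0.2, 0.3, 9.7, 9.8, 9.9]   # two tight clusters
P = oob_proximity(X)
print(P[0][1])  # same cluster: high proximity
print(P[0][3])  # different clusters: low proximity
```

Points from the same cluster almost always land on the same side of the split, so their proximity approaches 1; the resulting matrix is often used for clustering or outlier detection.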


Bagging Instability

Measure of the sensitivity of a base algorithm to variations in training data, a necessary condition for bagging and OOB estimation to be effective.


OOB Learning Curve

Curve showing the evolution of the out-of-bag error as a function of the number of models in the ensemble, allowing optimization of the ensemble size without overfitting.
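The curve can be traced by accumulating OOB votes one model at a time, as in this sketch (1-NN base learner as an illustrative stand-in for a tree; names are hypothetical):

```python
import random
from collections import Counter

def knn1(bx, by, x):
    """1-NN base learner (illustrative stand-in for a decision tree)."""
    return by[min(range(len(bx)), key=lambda i: abs(bx[i] - x))]

def oob_learning_curve(X, y, n_models=40, seed=0):
    """OOB error recomputed after each model joins the ensemble."""
    rng = random.Random(seed)
    n = len(X)
    votes = [Counter() for _ in range(n)]   # running OOB votes per point
    curve = []
    for _ in range(n_models):
        bag = [rng.randrange(n) for _ in range(n)]
        bx, by = [X[i] for i in bag], [y[i] for i in bag]
        for i in set(range(n)) - set(bag):
            votes[i][knn1(bx, by, X[i])] += 1
        seen = [i for i in range(n) if votes[i]]     # points seen OOB so far
        wrong = sum(votes[i].most_common(1)[0][0] != y[i] for i in seen)
        curve.append(wrong / len(seen) if seen else 0.0)
    return curve

X = [0.1, 0.3, 0.2, 0.5, 9.7, 9.9, 10.2, 10.4]
y = [0, 0, 0, 0, 1, 1, 1, 1]
curve = oob_learning_curve(X, y)
print(curve[-1])  # stabilized OOB error
```

Once the curve flattens, adding more models no longer helps, so the smallest ensemble size at the plateau can be chosen without touching a held-out set.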
