AI Glossary

The complete dictionary of Artificial Intelligence

162 categories · 2,032 subcategories · 23,060 terms

Majority Vote Classifier (Hard Voting)

Aggregation method where the final prediction is the class receiving the most votes among a set of independent classifiers, with each model having equal weight.
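A minimal sketch of hard voting (the function name `hard_vote` and the tie-breaking rule are illustrative choices, not a standard API):

```python
from collections import Counter

def hard_vote(predictions):
    """Return the class with the most votes among classifier outputs.

    `predictions` is a list of class labels, one per classifier.
    Ties are broken by first occurrence, one common convention.
    """
    return Counter(predictions).most_common(1)[0][0]

# Three classifiers vote on one sample: two say "cat", one says "dog".
print(hard_vote(["cat", "dog", "cat"]))  # cat
```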

Averaged Vote Classifier (Soft Voting)

Aggregation technique that averages the predicted probabilities from each classifier for each class; the class with the highest average probability is chosen as the final prediction.
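Soft voting can be sketched in a few lines of NumPy (the function name is illustrative):

```python
import numpy as np

def soft_vote(probas):
    """Average per-class probabilities across classifiers, pick the argmax.

    `probas` has shape (n_classifiers, n_classes).
    """
    return int(np.argmax(np.mean(probas, axis=0)))

# The two classifiers disagree on the argmax, but the averaged
# probabilities [0.35, 0.65] favor class 1.
probas = np.array([[0.6, 0.4],
                   [0.1, 0.9]])
print(soft_vote(probas))  # 1
```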

Classifier Weighting

Strategy involving assigning different weights to each classifier in a voting system, based on their individual performance or expertise on specific data subsets.
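A weighted variant of soft voting, sketched under the assumption that weights come from, e.g., each classifier's validation accuracy (names are illustrative):

```python
import numpy as np

def weighted_vote(probas, weights):
    """Weighted soft vote: each classifier's probability row is scaled
    by its weight before averaging; the argmax of the weighted
    average is the final prediction."""
    avg = np.average(probas, axis=0, weights=np.asarray(weights, dtype=float))
    return int(np.argmax(avg))

probas = np.array([[0.6, 0.4],   # weaker model
                   [0.3, 0.7]])  # stronger model
# The stronger model's weight of 0.8 tips the decision to class 1.
print(weighted_vote(probas, weights=[0.2, 0.8]))  # 1
```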

Model Heterogeneity

Fundamental principle of voting classifiers stating that the combined models should be of different types (e.g., decision tree, SVM, logistic regression) to reduce error correlation.

Prediction Aggregation

Process of combining outputs from multiple models into a single final prediction, at the core of voting classifiers' operation.

Ensemble Generalization Error

Error rate of the combined model on unseen data, often lower than the error of each individual classifier due to the smoothing effect of voting.
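The gain is easy to quantify under the idealized assumption that classifier errors are independent: the majority is wrong only if more than half the classifiers are wrong, a binomial tail probability.

```python
from math import comb

def majority_error(p, n):
    """Probability that a majority of n independent classifiers,
    each wrong with probability p, is wrong (n odd): the upper
    tail of a Binomial(n, p) distribution."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(n // 2 + 1, n + 1))

# 5 classifiers, each with 30% error: the ensemble error drops to ~16%.
print(round(majority_error(0.3, 5), 4))  # 0.1631
```

In practice errors are correlated, so the real gain is smaller, which is why model heterogeneity (below) matters.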

Confidence-Weighted Voting

Variant of soft voting where each classifier's weight is proportional to its confidence level (maximum probability) for its prediction, favoring the most certain predictions.
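A sketch of this variant, with each classifier's weight taken as its own maximum predicted probability (function name illustrative):

```python
import numpy as np

def confidence_weighted_vote(probas):
    """Each classifier's probability row is weighted by its own
    max probability, so confident classifiers count more."""
    probas = np.asarray(probas, dtype=float)
    conf = probas.max(axis=1)                       # per-classifier confidence
    weighted = (probas * conf[:, None]).sum(axis=0)
    return int(np.argmax(weighted))

# The very confident second classifier outweighs the uncertain first one.
probas = [[0.55, 0.45],
          [0.05, 0.95]]
print(confidence_weighted_vote(probas))  # 1
```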

Condorcet Vote Classifier

Voting method where the predicted class is the one that would beat every other class in pairwise majority comparisons of the classifiers' preference rankings (a Condorcet winner), when such a class exists.
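A sketch of finding a Condorcet winner from per-classifier class rankings (most to least likely); note that a winner need not exist when pairwise preferences form a cycle:

```python
def condorcet_winner(rankings):
    """Return the class that beats every other class in pairwise
    majority comparisons of the rankings, or None if no such
    class exists (a Condorcet cycle)."""
    classes = rankings[0]

    def beats(a, b):
        wins = sum(r.index(a) < r.index(b) for r in rankings)
        return wins > len(rankings) / 2

    for c in classes:
        if all(beats(c, other) for other in classes if other != c):
            return c
    return None

# Three classifiers rank three classes; "cat" beats both rivals head-to-head.
rankings = [["cat", "dog", "bird"],
            ["cat", "bird", "dog"],
            ["dog", "cat", "bird"]]
print(condorcet_winner(rankings))  # cat
```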

Aggregated Confidence Matrix

Matrix combining the confusion matrices or output probabilities of each classifier to evaluate overall performance and identify common weak points of the ensemble.

Parallel Training of Classifiers

Approach where each model in the ensemble is trained independently and simultaneously on the entire dataset, optimizing computation time for voting systems.

Aggregated Decision Boundary

Complex decision surface resulting from the combination of decision boundaries of each individual classifier, often more robust and less prone to overfitting.

Plurality Voting

Voting system where the predicted class is the one that receives the most votes, even if it does not achieve an absolute majority (more than 50%), unlike strict majority voting.
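The distinction shows up when votes split three or more ways; this sketch returns both decisions side by side (names and the `fallback` convention are illustrative):

```python
from collections import Counter

def plurality_vote(predictions, fallback=None):
    """Return (plurality winner, strict-majority winner).

    Plurality: the most-voted class wins even without more than 50%
    of votes. Strict majority: the same class wins only if its vote
    count exceeds half; otherwise `fallback` is returned."""
    label, count = Counter(predictions).most_common(1)[0]
    strict_majority = label if count > len(predictions) / 2 else fallback
    return label, strict_majority

# 2 of 5 votes for "cat": plurality elects "cat",
# strict majority (>50%) finds no winner.
print(plurality_vote(["cat", "cat", "dog", "bird", "fish"]))  # ('cat', None)
```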

Bias-Variance Analysis in Ensemble

Study of how voting combines models with high bias and low variance with models with low bias and high variance to achieve an optimal trade-off.

Voting Meta-Classifier

Higher-level model that learns to combine the predictions of base classifiers, potentially going beyond simple voting by learning optimal weights or complex combination rules.
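A deliberately minimal stand-in for such a meta-classifier: least squares learns per-classifier combination weights from held-out base-model probabilities and labels. Real stacking typically uses a proper learner (e.g. logistic regression) on out-of-fold predictions; all names and the synthetic data here are illustrative.

```python
import numpy as np

def fit_meta_weights(base_probas, y):
    """Learn linear combination weights for base classifiers by least
    squares. `base_probas` has shape (n_samples, n_classifiers), each
    column one classifier's P(class=1); `y` holds 0/1 labels."""
    w, *_ = np.linalg.lstsq(base_probas, y.astype(float), rcond=None)
    return w

def meta_predict(base_probas, w):
    """Combine base probabilities with learned weights, threshold at 0.5."""
    return (base_probas @ w > 0.5).astype(int)

# Synthetic setup: classifier 0 tracks the label, classifier 1 is noise.
# The meta-learner should assign classifier 0 the larger weight.
rng = np.random.default_rng(0)
y = rng.integers(0, 2, 200)
informative = 0.8 * y + 0.1 + 0.05 * rng.standard_normal(200)
noise = rng.random(200)
X = np.column_stack([informative, noise])
w = fit_meta_weights(X, y)
print(w[0] > w[1])  # True
```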

Voting Stability

Measure of the consistency of the ensemble's final prediction in response to small variations in input data or parameters of individual classifiers.

Dynamic Threshold Voting

Technique where the decision threshold for majority or weighted voting adjusts based on the distribution of probabilities or the difficulty of the sample to classify.

Ensemble Error Decomposition

Mathematical analysis separating the total ensemble error into bias error, variance error, and covariance between the errors of individual classifiers.
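One common statement of this decomposition, given here for an ensemble that averages $M$ regressors $f_i$ against a target $t$ (the classification case is analogous but less clean), is the bias–variance–covariance decomposition:

```latex
\mathbb{E}\big[(\bar{f} - t)^2\big]
  = \overline{\mathrm{bias}}^{\,2}
  + \frac{1}{M}\,\overline{\mathrm{var}}
  + \Big(1 - \frac{1}{M}\Big)\,\overline{\mathrm{covar}},
\qquad \bar{f} = \frac{1}{M}\sum_{i=1}^{M} f_i,
```

where $\overline{\mathrm{bias}}$, $\overline{\mathrm{var}}$, and $\overline{\mathrm{covar}}$ are the average bias, average variance, and average pairwise error covariance of the individual models. As $M$ grows, the individual-variance term shrinks, leaving the covariance term; this is why decorrelated (heterogeneous) models help.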
