AI Glossary
The complete dictionary of Artificial Intelligence
Majority Vote Classifier (Hard Voting)
Aggregation method where the final prediction is the class receiving the most votes among a set of independent classifiers, with each model having equal weight.
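A minimal hard-voting sketch using scikit-learn's VotingClassifier; the estimator choices and toy dataset are illustrative:

```python
# Hard voting: each fitted model casts one equally weighted class vote.
from sklearn.datasets import make_classification
from sklearn.ensemble import VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=200, random_state=0)  # toy data

hard_vote = VotingClassifier(
    estimators=[
        ("lr", LogisticRegression(max_iter=1000)),
        ("tree", DecisionTreeClassifier(max_depth=3)),
        ("svm", SVC()),  # hard voting needs only class labels, not probabilities
    ],
    voting="hard",
)
hard_vote.fit(X, y)
print(hard_vote.predict(X[:5]))  # final labels decided by majority vote
```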
Averaged Vote Classifier (Soft Voting)
Aggregation technique that averages the predicted class probabilities of each classifier and selects the class with the highest average probability as the final prediction.
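A sketch of the averaging step itself, assuming each model's predict_proba output has already been stacked into one array (the numbers are illustrative):

```python
import numpy as np

def soft_vote(probas):
    """Average per-class probabilities over classifiers.

    probas: shape (n_classifiers, n_samples, n_classes), i.e. the
    stacked predict_proba outputs of each model.
    """
    mean_proba = probas.mean(axis=0)   # average over the classifiers
    return mean_proba.argmax(axis=1)   # class with the highest mean probability

# Two classifiers, three samples, two classes:
p = np.array([
    [[0.9, 0.1], [0.4, 0.6], [0.2, 0.8]],
    [[0.6, 0.4], [0.7, 0.3], [0.1, 0.9]],
])
print(soft_vote(p))  # -> [0 0 1]; the middle sample, a hard-voting tie, resolves to 0
```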
Classifier Weighting
Strategy involving assigning different weights to each classifier in a voting system, based on their individual performance or expertise on specific data subsets.
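scikit-learn exposes this directly through the weights parameter; the weight values below are illustrative, e.g. proportional to each model's held-out accuracy:

```python
# Weighted soft voting: weights scale each model's probabilities before averaging.
from sklearn.datasets import make_classification
from sklearn.ensemble import VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=200, random_state=0)

weighted_vote = VotingClassifier(
    estimators=[
        ("lr", LogisticRegression(max_iter=1000)),
        ("tree", DecisionTreeClassifier(max_depth=3)),
        ("nb", GaussianNB()),
    ],
    voting="soft",
    weights=[3, 1, 2],  # illustrative, e.g. from validation performance
)
weighted_vote.fit(X, y)
```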
Model Heterogeneity
Fundamental principle of voting classifiers stating that the combined models should be of different types (e.g., decision tree, SVM, logistic regression) to reduce error correlation.
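A quick way to check heterogeneity in practice is to measure how correlated two models' mistakes are. The helper below is an illustrative sketch (function name assumed); it presumes each model makes at least one error, otherwise the correlation is undefined:

```python
import numpy as np

def error_correlation(pred_a, pred_b, y_true):
    # 1.0 = the two models fail on exactly the same samples;
    # values near 0 or below mean complementary mistakes, which is
    # what mixing model types is meant to achieve
    err_a = (pred_a != y_true).astype(float)
    err_b = (pred_b != y_true).astype(float)
    return np.corrcoef(err_a, err_b)[0, 1]

y = np.array([0, 1, 1, 0, 1])
a = np.array([0, 1, 0, 0, 1])  # errs on sample 2
b = np.array([1, 1, 1, 0, 1])  # errs on sample 0
print(error_correlation(a, b, y))  # -0.25: the models fail on different samples
```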
Prediction Aggregation
Process of combining the outputs of multiple models into a single final prediction; this aggregation step is the core operation of a voting classifier.
Ensemble Generalization Error
Error rate of the combined model on unseen data, often lower than the error of each individual classifier due to the smoothing effect of voting.
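The smoothing effect can be made concrete under an idealized independence assumption: if each of n classifiers errs with probability p, independently, the majority vote errs only when more than half of them do. A small worked sketch (binary case, odd n; real classifiers have correlated errors, so the gain is smaller in practice):

```python
from math import comb

def majority_error(n_models, p):
    # Probability that more than half of n independent classifiers,
    # each with individual error rate p, are wrong at once.
    k_min = n_models // 2 + 1  # wrong votes needed for the ensemble to fail
    return sum(comb(n_models, k) * p**k * (1 - p) ** (n_models - k)
               for k in range(k_min, n_models + 1))

print(majority_error(1, 0.3))   # 0.30  -> a single model
print(majority_error(11, 0.3))  # ~0.078 -> the smoothing effect of voting
```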
Confidence-Weighted Voting
Variant of soft voting where each classifier's weight is proportional to its confidence level (maximum probability) for its prediction, favoring the most certain predictions.
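A numpy sketch of one common form of this idea, where each probability vector is scaled by its own maximum before averaging (the function name and this particular weighting are illustrative):

```python
import numpy as np

def confidence_weighted_vote(probas):
    # probas: shape (n_classifiers, n_samples, n_classes)
    conf = probas.max(axis=2, keepdims=True)  # each model's own confidence
    weighted = (probas * conf).sum(axis=0)    # confident models count more
    return weighted.argmax(axis=1)
```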
Condorcet Vote Classifier
Voting method where the predicted class is the one that would beat every other class in pairwise duels, each classifier's ranking of the classes (e.g., by predicted probability) counting as a ballot; such a winner need not exist (the Condorcet paradox).
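A per-sample sketch, treating each classifier's predicted probabilities as a ranked ballot over the classes; the function name and that ranking convention are assumptions for illustration:

```python
import numpy as np
from itertools import combinations

def condorcet_winner(probas):
    # probas: shape (n_classifiers, n_classes) for one sample.
    # Returns the class that wins every pairwise duel, or None if
    # no such class exists (the Condorcet paradox).
    n_clf, n_classes = probas.shape
    wins = np.zeros(n_classes, dtype=int)
    for a, b in combinations(range(n_classes), 2):
        prefers_a = (probas[:, a] > probas[:, b]).sum()
        if prefers_a > n_clf - prefers_a:
            wins[a] += 1
        else:
            wins[b] += 1
    winner = int(wins.argmax())
    return winner if wins[winner] == n_classes - 1 else None
```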
Aggregated Confidence Matrix
Matrix combining the confusion matrices or output probabilities of each classifier to evaluate overall performance and identify common weak points of the ensemble.
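One simple realization, assuming the term refers to summing per-classifier confusion matrices (helper name illustrative): off-diagonal cells that stay large across the sum point at mistakes the whole ensemble shares.

```python
import numpy as np
from sklearn.metrics import confusion_matrix

def aggregated_confusion(y_true, predictions):
    # predictions: one label array per classifier; the summed matrix
    # highlights confusions common to the whole ensemble
    labels = np.unique(y_true)
    return sum(confusion_matrix(y_true, p, labels=labels) for p in predictions)
```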
Parallel Training of Classifiers
Approach where each model in the ensemble is trained independently and simultaneously on the entire dataset; since the base models do not depend on one another, the voting system's overall training time shrinks accordingly.
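Because the base models share no state, their fits are embarrassingly parallel. A sketch with joblib (the model list and toy data are illustrative):

```python
from joblib import Parallel, delayed
from sklearn.base import clone
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=500, random_state=0)
base_models = [LogisticRegression(max_iter=1000),
               DecisionTreeClassifier(max_depth=4),
               GaussianNB()]

def fit_one(model, X, y):
    # clone() gives each worker its own unfitted copy of the model
    return clone(model).fit(X, y)

# All fits run at once on the full dataset (n_jobs=-1 uses every core)
fitted = Parallel(n_jobs=-1)(delayed(fit_one)(m, X, y) for m in base_models)
```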
Aggregated Decision Boundary
Complex decision surface resulting from the combination of decision boundaries of each individual classifier, often more robust and less prone to overfitting.
Plurality Voting
Voting system where the predicted class is the one that receives the most votes, even if it does not achieve an absolute majority (more than 50%), unlike strict majority voting.
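A sketch that returns both the plurality winner and whether it also clears the strict-majority bar (function name illustrative):

```python
import numpy as np

def plurality_vote(votes, n_classes):
    # votes: shape (n_classifiers,), one predicted label per model
    counts = np.bincount(votes, minlength=n_classes)
    top = int(counts.argmax())
    strict_majority = counts[top] > len(votes) / 2  # >50% of all votes
    return top, strict_majority

print(plurality_vote(np.array([0, 1, 2, 2, 1, 2, 0]), 3))
# -> (2, False): class 2 wins with 3/7 votes, short of an absolute majority
```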
Bias-Variance Analysis in Ensemble
Study of how voting blends high-bias, low-variance models with low-bias, high-variance models so that the ensemble reaches a better bias-variance trade-off than any individual member.
Voting Meta-Classifier
Higher-level model that learns to combine the predictions of base classifiers, potentially going beyond simple voting by learning optimal weights or complex combination rules.
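This is essentially stacking; scikit-learn's StackingClassifier implements it, with simple voting as the fixed-rule special case. A minimal sketch with illustrative base models:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=300, random_state=0)

stack = StackingClassifier(
    estimators=[("tree", DecisionTreeClassifier(max_depth=3)),
                ("nb", GaussianNB())],
    final_estimator=LogisticRegression(),  # learns the combination rule
    cv=5,  # meta-features come from cross-validated base predictions
)
stack.fit(X, y)
```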
Voting Stability
Measure of the consistency of the ensemble's final prediction in response to small variations in input data or parameters of individual classifiers.
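One possible operationalization (the noise model, constants, and function name are all assumptions): perturb the inputs slightly and measure how often the ensemble's predictions survive.

```python
import numpy as np

def voting_stability(ensemble, X, noise_scale=0.01, n_trials=20, seed=0):
    # `ensemble` is any fitted model with a .predict method;
    # returns the average fraction of predictions unchanged under
    # small Gaussian perturbations of the inputs
    rng = np.random.default_rng(seed)
    base = ensemble.predict(X)
    agree = [
        (ensemble.predict(X + rng.normal(0, noise_scale, X.shape)) == base).mean()
        for _ in range(n_trials)
    ]
    return float(np.mean(agree))
```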
Dynamic Threshold Voting
Technique where the decision threshold for majority or weighted voting adjusts based on the distribution of predicted probabilities or on how difficult the sample is to classify.
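A binary-classification sketch in which high disagreement between members widens the required margin around 0.5, with abstention otherwise; the heuristic and every constant here are illustrative:

```python
import numpy as np

def dynamic_threshold_vote(probas, base_thr=0.5, hard_margin=0.15,
                           disagreement_cut=0.2):
    # probas: shape (n_classifiers, n_samples) of P(class 1).
    # Hard samples (high std across classifiers) must clear a wider
    # margin around the base threshold; -1 marks an abstention.
    mean_p = probas.mean(axis=0)
    hard = probas.std(axis=0) > disagreement_cut
    margin = np.where(hard, hard_margin, 0.0)
    out = np.full(mean_p.shape, -1, dtype=int)
    out[mean_p >= base_thr + margin] = 1
    out[mean_p <= base_thr - margin] = 0
    return out
```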
Ensemble Error Decomposition
Mathematical analysis separating the total ensemble error into bias error, variance error, and covariance between the errors of individual classifiers.
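For an ensemble that averages M regressors, this decomposition reads E[(f_avg - y)^2] = bias^2 + (1/M)·var + (1 - 1/M)·covar, with bias, var, and covar averaged over the members; lowering the covariance between members' errors (more diverse models) drives the ensemble error toward bias^2 + var/M. A closely related identity, exact per sample and easy to verify numerically, is the Krogh-Vedelsby ambiguity decomposition (toy numbers below):

```python
import numpy as np

rng = np.random.default_rng(0)
M, n = 5, 100
preds = rng.normal(size=(M, n))   # predictions of M regressors (toy values)
y = rng.normal(size=n)            # targets
f_avg = preds.mean(axis=0)

ensemble_err = ((f_avg - y) ** 2).mean()
avg_member_err = ((preds - y) ** 2).mean()   # mean individual squared error
ambiguity = ((preds - f_avg) ** 2).mean()    # spread of members around the mean

# Exact identity: ensemble error = average member error - ambiguity,
# so more disagreement (at equal individual error) helps the ensemble.
assert np.isclose(ensemble_err, avg_member_err - ambiguity)
```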