AI Glossary
The complete glossary of AI
C4.5
Supervised decision-tree learning algorithm developed by Ross Quinlan in 1993 as an extension of ID3; it handles continuous attributes and missing values and uses gain ratio as its splitting criterion.
C5.0
Improved version of C4.5, also developed by Quinlan, offering faster tree construction, more efficient handling of large datasets, and the ability to generate ensembles of trees through boosting.
Gain ratio
Splitting criterion used in C4.5 to correct the bias of information gain towards attributes with many values; it is computed as the information gain divided by the intrinsic entropy (split information) of the attribute.
Intrinsic entropy
Measure used in the calculation of gain ratio to penalize attributes with a large number of values; also known as split information, it is the entropy of the distribution of the attribute's values, i.e. the amount of potential information generated by splitting on that attribute.
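As an illustration of how the two quantities above fit together, here is a minimal Python sketch; the function names and the toy data are illustrative, not taken from Quinlan's implementation. Gain ratio is the information gain of a split divided by the intrinsic entropy (split information) of the attribute.

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy (base 2) of a list of class labels."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def gain_ratio(values, labels):
    """Gain ratio of a categorical attribute, given one attribute value
    and one class label per instance."""
    n = len(labels)
    # Partition the class labels by attribute value.
    partitions = {}
    for v, y in zip(values, labels):
        partitions.setdefault(v, []).append(y)

    # Information gain: entropy before the split minus the weighted
    # entropy of the partitions.
    remainder = sum(len(p) / n * entropy(p) for p in partitions.values())
    info_gain = entropy(labels) - remainder

    # Intrinsic entropy (split information): entropy of the attribute's
    # value distribution, penalizing many-valued attributes.
    split_info = -sum(len(p) / n * math.log2(len(p) / n) for p in partitions.values())
    return info_gain / split_info if split_info > 0 else 0.0

# Toy example: a three-valued attribute and binary class labels.
values = ["a", "a", "b", "b", "c", "c"]
labels = ["yes", "yes", "yes", "no", "no", "no"]
print(gain_ratio(values, labels))
```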
Binary discretization
Technique used by C4.5 to handle continuous attributes by forming a binary test of the form A ≤ t versus A > t, where the threshold t (the cut point) is chosen to maximize the information gain of the split.
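A sketch of the threshold search under common textbook assumptions: candidate thresholds are taken as midpoints between consecutive distinct values, whereas C4.5 itself restricts the threshold to values that actually occur in the data. The function name and the toy data are illustrative.

```python
import math
from collections import Counter

def entropy(labels):
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def best_cut_point(values, labels):
    """Return (threshold, gain) for the binary split A <= t / A > t that
    maximizes information gain over a continuous attribute."""
    pairs = sorted(zip(values, labels))
    n = len(pairs)
    base = entropy(labels)
    best_t, best_gain = None, -1.0
    # Candidate thresholds: midpoints between consecutive distinct values.
    for i in range(1, n):
        if pairs[i - 1][0] == pairs[i][0]:
            continue
        t = (pairs[i - 1][0] + pairs[i][0]) / 2
        left = [y for x, y in pairs if x <= t]
        right = [y for x, y in pairs if x > t]
        gain = base - (len(left) / n) * entropy(left) - (len(right) / n) * entropy(right)
        if gain > best_gain:
            best_t, best_gain = t, gain
    return best_t, best_gain

print(best_cut_point([64, 65, 68, 69, 70, 71], ["yes", "no", "yes", "yes", "yes", "no"]))
```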
Missing value handling
C4.5's ability to handle instances with missing attribute values by weighting them probabilistically: when the tested attribute is missing, the instance is sent down every branch with a fractional weight proportional to how the instances with known values were distributed.
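A rough sketch of the fractional-distribution idea; the function and the branch counts are hypothetical, but the proportional weighting matches the description above.

```python
def distribute_fractionally(instance_weight, branch_counts):
    """Split the weight of an instance with a missing test value across the
    branches of a node, in proportion to how the known-value instances
    were distributed (the fractional-instance idea used by C4.5)."""
    total = sum(branch_counts.values())
    return {branch: instance_weight * count / total
            for branch, count in branch_counts.items()}

# At a node that split 8 known-value instances into {"<=75": 5, ">75": 3},
# an instance missing that attribute goes down both branches with weights:
print(distribute_fractionally(1.0, {"<=75": 5, ">75": 3}))
# {'<=75': 0.625, '>75': 0.375}
```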
Pessimistic pruning
Complexity-reduction (error-based) pruning method in C4.5 that removes branches which do not improve estimated accuracy, using a pessimistic estimate of the error rate derived from confidence limits of the binomial distribution.
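The estimate is commonly presented as the upper confidence limit of a leaf's error rate under a normal approximation to the binomial; the sketch below follows that textbook formulation (z ≈ 0.69 for the default 25% confidence factor), not Quinlan's exact code.

```python
import math

def pessimistic_error(errors, n, z=0.69):
    """Upper confidence limit on the error rate at a leaf covering n
    instances with the given number of errors; z ~= 0.69 corresponds to
    the default 25% confidence factor in common descriptions of C4.5."""
    f = errors / n
    num = f + z * z / (2 * n) + z * math.sqrt(f / n - f * f / n + z * z / (4 * n * n))
    return num / (1 + z * z / n)

# A subtree is replaced by a leaf when the leaf's pessimistic error is no
# worse than the combined pessimistic error of its children, e.g.:
print(pessimistic_error(2, 6))   # leaf covering 6 instances with 2 errors
print(pessimistic_error(0, 2))   # child leaf with no observed errors
```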
C5.0 Boosting
Ensemble learning technique implemented in C5.0 that combines multiple weak decision trees to create a strong classifier, significantly improving prediction accuracy.
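C5.0's boosting procedure is proprietary, but it is closely related to AdaBoost-style reweighting of training instances. As a sketch of the general idea, boosted decision stumps with scikit-learn; the dataset and parameters are arbitrary, not C5.0 defaults.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import train_test_split

# Toy dataset standing in for a real training set.
X, y = make_classification(n_samples=500, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Boosted ensemble of shallow decision trees (decision stumps by default).
model = AdaBoostClassifier(n_estimators=10, random_state=0)
model.fit(X_train, y_train)
print("boosted accuracy:", model.score(X_test, y_test))
```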
Optimal cut point
Threshold value determined by C4.5 to split a continuous attribute into two intervals, selected to maximize the information gain of the resulting split.
Normalized information gain
Variant of information gain normalized to remove the bias towards attributes with many values; in most descriptions of C4.5 the term is used interchangeably with gain ratio.
C4.5 decision tree
Hierarchical structure produced by the C4.5 algorithm where each internal node represents a test on an attribute, each branch represents a test outcome, and each leaf represents a class label.
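A minimal, illustrative node structure (not C4.5's internal representation) showing the internal-node, branch, and leaf roles described above; all names are ours.

```python
from dataclasses import dataclass, field
from typing import Dict, Optional

@dataclass
class Node:
    """Toy C4.5-style tree node: internal nodes carry an attribute test,
    leaves carry a class label."""
    attribute: Optional[str] = None                              # attribute tested at an internal node
    children: Dict[str, "Node"] = field(default_factory=dict)    # branch outcome -> subtree
    label: Optional[str] = None                                  # class label if this node is a leaf

    def is_leaf(self) -> bool:
        return self.label is not None

# A tiny tree: test "outlook", with one leaf per branch.
tree = Node(attribute="outlook", children={
    "sunny": Node(label="no"),
    "overcast": Node(label="yes"),
})
print(tree.children["overcast"].label)
```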
C5.0 sliding window
Optimization in C5.0 for processing large datasets: the tree is first built on a sample (window) of the training data, instances from the remainder that it misclassifies are added to the window, and the tree is rebuilt, repeating until it performs acceptably on the full dataset.
Confidence factor
Parameter in C4.5 (typically 25%) used in error estimation for pruning, controlling the level of pessimism in evaluating tree branch performance.
C4.5 IF-THEN rules
Alternative representation of decision trees generated by C4.5 where each path from root to leaf is converted into a conditional classification rule.
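A toy sketch of the path-to-rule conversion, using a hand-written nested-dict tree loosely based on the classic play-golf example; the helper and the data are illustrative only.

```python
def extract_rules(tree, conditions=None):
    """Turn every root-to-leaf path of a toy tree into an IF-THEN rule.
    Internal nodes are dicts mapping 'attribute op value' outcomes to
    subtrees; leaves are class labels."""
    conditions = conditions or []
    if not isinstance(tree, dict):  # reached a leaf
        return [f"IF {' AND '.join(conditions)} THEN class = {tree}"]
    rules = []
    for condition, subtree in tree.items():
        rules.extend(extract_rules(subtree, conditions + [condition]))
    return rules

toy_tree = {
    "outlook = sunny": {"humidity > 75": "no", "humidity <= 75": "yes"},
    "outlook = overcast": "yes",
    "outlook = rainy": {"windy = true": "no", "windy = false": "yes"},
}
for rule in extract_rules(toy_tree):
    print(rule)
```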
C4.5 computational complexity
Algorithmic cost of C4.5 on the order of O(n * m * log n) where n is the number of instances and m is the number of attributes, optimized by sorting and incremental computation techniques.
Multi-way split
Ability of C4.5 to create nodes with more than two branches for categorical attributes (one branch per value), unlike algorithms such as CART that are restricted to binary splits.