AI Glossary
The Complete Dictionary of Artificial Intelligence
Sigmoid Function
An S-shaped mathematical function that maps any real value into the open interval (0, 1), used as the activation function in logistic regression to turn a linear score into a class probability.
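A minimal sketch of the sigmoid in Python (the function name and sample inputs are illustrative):

```python
import numpy as np

def sigmoid(z):
    """S-shaped map from any real z into the open interval (0, 1)."""
    return 1.0 / (1.0 + np.exp(-z))

# Large negative inputs approach 0, large positive inputs approach 1,
# and sigmoid(0) is exactly 0.5.
print(sigmoid(np.array([-6.0, 0.0, 6.0])))
```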
Logit Function
A link function that maps a probability in (0, 1) onto the whole real line, defined as the natural logarithm of the odds of success; it is the inverse of the sigmoid function.
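A short sketch of the logit as the log of the odds (the helper name is hypothetical):

```python
import math

def logit(p):
    """Natural log of the odds p / (1 - p); maps (0, 1) onto the whole real line."""
    return math.log(p / (1.0 - p))

# The logit inverts the sigmoid: a probability of 0.5 maps back to a score of 0.
print(logit(0.5))  # 0.0
```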
Maximum Likelihood
A method for estimating model parameters that maximizes the probability of observing the training data given the model's parameters.
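A toy illustration of the idea, assuming a made-up Bernoulli sample and a simple grid search (for logistic regression the same principle is applied to the full model):

```python
import numpy as np

# Hypothetical data: 7 successes out of 10 Bernoulli trials.
y = np.array([1, 1, 1, 1, 1, 1, 1, 0, 0, 0])

def log_likelihood(p, y):
    """Log-probability of observing y if each trial succeeds with probability p."""
    return np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))

# Grid search over candidate parameters: the maximizer is the sample mean, 0.7.
grid = np.linspace(0.01, 0.99, 99)
p_hat = grid[np.argmax([log_likelihood(p, y) for p in grid])]
print(round(p_hat, 2))  # 0.7
```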
Bias (Intercept)
The constant term in the logistic regression equation that represents the baseline probability when all predictor variables are zero.
Weights (Coefficients)
Multiplicative parameters associated with each predictor variable that quantify their influence on the classification probability.
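The bias and weights combine into the linear predictor that feeds the sigmoid. A minimal sketch with hypothetical parameter values:

```python
import numpy as np

def predict_proba(X, w, b):
    """Logistic regression: bias b plus weighted features, passed through the sigmoid."""
    z = X @ w + b                      # linear predictor
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical 2-feature model: with all features at zero, only the bias remains,
# so the output is sigmoid(b) -- the baseline probability.
w = np.array([1.2, -0.8])
b = -0.5
print(predict_proba(np.zeros((1, 2)), w, b))
```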
Decision Boundary
A hyperplane or surface that separates the different classes in the feature space, defined by the equation where the predicted probability equals 0.5.
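For logistic regression the boundary is the set of points where the linear predictor is zero, i.e. where the sigmoid returns exactly 0.5. A sketch with hypothetical 2-D weights:

```python
import numpy as np

w = np.array([2.0, -1.0])  # hypothetical weights
b = 1.0                    # hypothetical bias

def side_of_boundary(x):
    """Positive -> class 1 region (p > 0.5); negative -> class 0 region; zero -> on the boundary."""
    return np.dot(w, x) + b

print(side_of_boundary(np.array([0.0, 1.0])))  # 0.0: this point lies on the boundary
```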
Odds Ratio
A measure of association that quantifies how the odds of an outcome change when the predictor variable increases by one unit, with all other variables held constant.
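In logistic regression the odds ratio for a one-unit increase in a feature is the exponential of its coefficient. A sketch with a hypothetical coefficient:

```python
import math

# Hypothetical coefficient, roughly ln 2: a one-unit increase in the feature
# multiplies the odds by exp(w_j), all other variables held constant.
w_j = 0.6931

odds_ratio = math.exp(w_j)
print(round(odds_ratio, 3))  # ~2.0: the odds roughly double per unit increase
```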
L1 Regularization (Lasso)
A penalty technique that adds the sum of the absolute values of the coefficients to the cost function, favoring automatic feature selection.
L2 Regularization (Ridge)
A penalty method that adds the sum of the squared coefficients to the cost function, reducing the magnitude of the coefficients to prevent overfitting.
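Both penalties can be sketched side by side; the base loss, weights, and penalty strength below are made-up values for illustration:

```python
import numpy as np

def penalized_loss(base_loss, w, lam, kind):
    """Add an L1 (sum of |w|) or L2 (sum of w^2) penalty to an unregularized loss."""
    if kind == "l1":
        return base_loss + lam * np.sum(np.abs(w))   # Lasso: pushes weights to exact zeros
    return base_loss + lam * np.sum(w ** 2)          # Ridge: shrinks all weights smoothly

w = np.array([0.5, -2.0, 0.0])
print(penalized_loss(1.0, w, 0.1, "l1"))  # 1.0 + 0.1 * 2.5
print(penalized_loss(1.0, w, 0.1, "l2"))  # 1.0 + 0.1 * 4.25
```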
Area Under the Curve (AUC)
An evaluation metric that measures the probability that a model ranks a random positive instance higher than a random negative instance, ranging from 0 (every pair ranked wrong) through 0.5 (random ranking) to 1 (perfect ranking).
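The ranking interpretation can be computed directly by comparing all positive/negative score pairs (a sketch with hypothetical scores, counting ties as half):

```python
import itertools

def auc(scores_pos, scores_neg):
    """Fraction of (positive, negative) pairs where the positive scores higher,
    with ties counting half -- the ranking interpretation of ROC AUC."""
    pairs = list(itertools.product(scores_pos, scores_neg))
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0 for p, n in pairs)
    return wins / len(pairs)

# Hypothetical scores: one positive is out-ranked by one negative, so AUC = 8/9.
print(auc([0.9, 0.8, 0.4], [0.7, 0.3, 0.2]))
```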
Classification Threshold
A cut-off probability value (typically 0.5) used to convert continuous probabilities into binary class predictions.
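Thresholding is a one-line comparison; the helper name and probabilities below are illustrative:

```python
import numpy as np

def to_labels(probs, threshold=0.5):
    """Convert continuous probabilities into binary class predictions."""
    return (np.asarray(probs) >= threshold).astype(int)

probs = [0.1, 0.5, 0.7, 0.49]
print(to_labels(probs))                  # default 0.5 cut-off
print(to_labels(probs, threshold=0.4))   # lowering it flags more positives
```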
Likelihood
A function that measures the probability of observing the data given the model parameters, used for parameter estimation in logistic regression.
Cost Function (Log Loss)
A logarithmic penalty function that measures the divergence between the predicted probabilities and the actual labels, used to optimize the model.
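A minimal log-loss implementation, with made-up labels and predictions; clipping avoids log(0):

```python
import numpy as np

def log_loss(y_true, p_pred, eps=1e-15):
    """Mean negative log-likelihood of the true labels under predicted probabilities."""
    p = np.clip(p_pred, eps, 1 - eps)  # guard against log(0)
    return -np.mean(y_true * np.log(p) + (1 - y_true) * np.log(1 - p))

y = np.array([1, 0, 1])
# Confident correct predictions give a small loss; confident errors are punished hard.
print(log_loss(y, np.array([0.9, 0.1, 0.8])))
print(log_loss(y, np.array([0.1, 0.9, 0.2])))
```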
Convergence
A state where successive iterations of the optimization algorithm no longer significantly change the model's parameters, indicating that an optimum has been reached.
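A convergence check can be sketched as a stopping rule in a gradient-descent loop; the objective, learning rate, and tolerance below are hypothetical:

```python
# Gradient descent on f(w) = (w - 3)^2, stopping when the update becomes tiny.
w, lr, tol = 0.0, 0.1, 1e-8
for step in range(10_000):
    w_new = w - lr * 2 * (w - 3)      # gradient step
    if abs(w_new - w) < tol:          # convergence: parameters stop changing
        break
    w = w_new
print(step, round(w, 6))  # stops well before 10_000 steps, near the optimum w = 3
```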
Imbalanced Classes
A situation where one class is significantly less represented than the other in the training data, requiring sampling or weighting techniques.
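One common remedy is inverse-frequency class weighting in the loss; a sketch with a hypothetical 10%-positive sample, using the "balanced" heuristic n / (n_classes * n_c):

```python
import numpy as np

y = np.array([1, 0, 0, 0, 0, 0, 0, 0, 0, 0])  # hypothetical: only 10% positives

counts = np.bincount(y)               # class frequencies: [9, 1]
weights = len(y) / (2.0 * counts)     # rare class gets the larger weight
sample_w = weights[y]                 # per-sample weights to use in the loss
print(weights)
```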