AI Glossary
The complete dictionary of Artificial Intelligence
Nesterov Momentum
Variant of the momentum algorithm that applies a lookahead correction by calculating the gradient at the estimated future position, accelerating convergence and reducing oscillations.
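A minimal NumPy sketch of one Nesterov update, assuming the classic formulation in which the gradient is evaluated at the lookahead point; grad_fn, lr, and mu are illustrative names and values.

```python
import numpy as np

def nesterov_step(w, v, grad_fn, lr=0.01, mu=0.9):
    # Evaluate the gradient at the estimated future position w + mu * v,
    # then update the velocity and the weights.
    g = grad_fn(w + mu * v)
    v_new = mu * v - lr * g
    return w + v_new, v_new

# Example: minimize f(w) = ||w||^2, whose gradient is 2w.
w, v = np.array([5.0, -3.0]), np.zeros(2)
for _ in range(100):
    w, v = nesterov_step(w, v, lambda x: 2 * x)
```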
Adam (Adaptive Moment Estimation)
Optimization algorithm combining the ideas of Momentum and RMSprop, using estimates of the first and second moments of gradients to adapt the learning rates of each parameter.
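A minimal sketch of one Adam step on a NumPy array, assuming the standard bias-corrected formulation; the hyperparameter values are the commonly cited defaults.

```python
import numpy as np

def adam_step(w, g, m, v, t, lr=1e-3, b1=0.9, b2=0.999, eps=1e-8):
    m = b1 * m + (1 - b1) * g          # first moment (mean of gradients)
    v = b2 * v + (1 - b2) * g ** 2     # second moment (uncentered variance)
    m_hat = m / (1 - b1 ** t)          # bias correction for step t (t >= 1)
    v_hat = v / (1 - b2 ** t)
    return w - lr * m_hat / (np.sqrt(v_hat) + eps), m, v
```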
AdaGrad
Adaptive optimizer that adjusts the learning rate of each parameter based on the historical sum of squared gradients, giving infrequently updated parameters a larger effective learning rate.
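A minimal sketch of the AdaGrad rule in its basic form with an accumulated sum of squared gradients; variable names are illustrative.

```python
import numpy as np

def adagrad_step(w, g, G, lr=0.1, eps=1e-8):
    # G accumulates the sum of squared gradients, so parameters that are
    # rarely updated keep a larger effective learning rate lr / sqrt(G).
    G = G + g ** 2
    return w - lr * g / (np.sqrt(G) + eps), G
```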
AdaDelta
Extension of AdaGrad that limits the accumulation window of past gradients to a fixed size via a moving average, avoiding the aggressive decay of the learning rate.
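A minimal sketch of one AdaDelta step, following the formulation with running averages of squared gradients and squared updates; rho and eps are illustrative values.

```python
import numpy as np

def adadelta_step(w, g, Eg2, Edx2, rho=0.95, eps=1e-6):
    Eg2 = rho * Eg2 + (1 - rho) * g ** 2                # running avg of squared gradients
    dx = -np.sqrt(Edx2 + eps) / np.sqrt(Eg2 + eps) * g  # update rescaled by past update size
    Edx2 = rho * Edx2 + (1 - rho) * dx ** 2             # running avg of squared updates
    return w + dx, Eg2, Edx2
```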
Learning Rate Decay
Strategy for progressively reducing the learning rate during training, often according to a predefined schedule (step, exponential, or cosine), to fine-tune convergence towards a minimum.
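A sketch of the three schedules mentioned above using PyTorch's built-in learning rate schedulers; the model, step sizes, and decay factors are illustrative.

```python
import torch
from torch import nn, optim

model = nn.Linear(10, 1)                                        # toy model
opt = optim.SGD(model.parameters(), lr=0.1)

sched = optim.lr_scheduler.StepLR(opt, step_size=30, gamma=0.1)    # step decay
# sched = optim.lr_scheduler.ExponentialLR(opt, gamma=0.95)        # exponential decay
# sched = optim.lr_scheduler.CosineAnnealingLR(opt, T_max=100)     # cosine decay

for epoch in range(100):
    # ... usual training loop with opt.zero_grad() / loss.backward() / opt.step() ...
    sched.step()                                        # decay the learning rate once per epoch
```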
LAMB Optimizer (Layer-wise Adaptive Moments)
Optimization algorithm designed for large-scale training, adapting the learning rate per layer using the norm of weights and gradients, effective for very large batch sizes.
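A minimal per-layer sketch of the LAMB update, assuming an Adam-style direction rescaled by the layer's weight-to-update norm ratio; weight decay handling is simplified.

```python
import numpy as np

def lamb_step(w, g, m, v, t, lr=1e-3, b1=0.9, b2=0.999, eps=1e-6, wd=0.01):
    m = b1 * m + (1 - b1) * g
    v = b2 * v + (1 - b2) * g ** 2
    update = m / (1 - b1 ** t) / (np.sqrt(v / (1 - b2 ** t)) + eps) + wd * w
    w_norm, u_norm = np.linalg.norm(w), np.linalg.norm(update)
    trust = w_norm / u_norm if w_norm > 0 and u_norm > 0 else 1.0   # layer-wise trust ratio
    return w - lr * trust * update, m, v
```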
LARS Optimizer (Layer-wise Adaptive Rate Scaling)
Optimization method that adapts the learning rate for each layer based on the ratio between the norm of weights and the norm of gradients, particularly suitable for training with large batches.
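A minimal per-layer sketch of the LARS rule, assuming SGD with momentum scaled by a layer-wise trust ratio; the trust coefficient and weight decay values are illustrative.

```python
import numpy as np

def lars_step(w, g, vel, lr=0.1, mu=0.9, wd=1e-4, trust_coef=0.001, eps=1e-9):
    g = g + wd * w                                         # weight decay folded into the gradient
    w_norm, g_norm = np.linalg.norm(w), np.linalg.norm(g)
    local_lr = trust_coef * w_norm / (g_norm + eps) if w_norm > 0 else 1.0
    vel = mu * vel + lr * local_lr * g                     # momentum on the layer-rescaled gradient
    return w - vel, vel
```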
Lookahead Optimizer
Optimization mechanism that periodically interpolates the 'slow' weights toward the 'fast' weights produced by an inner optimizer after a fixed number of inner steps, improving generalization and convergence stability.
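A minimal sketch of the Lookahead outer step, assuming an arbitrary inner optimizer produces the fast weights; k and alpha are illustrative values.

```python
import numpy as np

def lookahead_sync(slow_w, fast_w, alpha=0.5):
    # Outer step: move the slow weights a fraction alpha toward the fast
    # weights, then restart the fast weights from the new slow weights.
    slow_w = slow_w + alpha * (fast_w - slow_w)
    return slow_w, slow_w.copy()

# Usage sketch: after every k = 5 updates of the inner optimizer (SGD, Adam, ...),
# call lookahead_sync(slow_w, fast_w) and continue training from the returned fast weights.
```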
RAdam (Rectified Adam)
A variant of Adam that corrects the variance of the learning rate adaptation in the early stages of training, offering more stable convergence without requiring a warmup phase.
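A usage sketch assuming a PyTorch version that ships torch.optim.RAdam (1.10 or later); the model and data are placeholders.

```python
import torch
from torch import nn, optim

model = nn.Linear(10, 1)
opt = optim.RAdam(model.parameters(), lr=1e-3, betas=(0.9, 0.999))

x, y = torch.randn(32, 10), torch.randn(32, 1)
loss = nn.functional.mse_loss(model(x), y)
loss.backward()
opt.step()   # the variance rectification is handled internally, no warmup schedule needed
```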
SWATS (Switching from Adam to SGD)
A strategy that starts training with an adaptive optimizer like Adam for fast convergence, then switches to Stochastic Gradient Descent (SGD) for better generalization.
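An illustrative PyTorch sketch of the switching idea; here the switch happens at a fixed epoch, whereas the SWATS paper derives the switch point and the SGD learning rate automatically from Adam's own updates.

```python
import torch
from torch import nn, optim

model = nn.Linear(10, 1)
optimizer = optim.Adam(model.parameters(), lr=1e-3)        # adaptive phase: fast early progress

for epoch in range(20):
    # ... run the usual training loop with `optimizer` ...
    if epoch == 10:                                        # simplified switch criterion
        optimizer = optim.SGD(model.parameters(), lr=0.01, momentum=0.9)  # SGD phase: better generalization
```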
Yogi Optimizer
A modification of Adam aimed at providing more stable convergence by updating the second moment additively, with a sign-controlled step, rather than multiplicatively as in Adam, reducing oscillations and improving performance on complex tasks.
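A minimal sketch highlighting Yogi's distinctive second-moment rule; bias correction is omitted for brevity, and the hyperparameter values are illustrative.

```python
import numpy as np

def yogi_step(w, g, m, v, lr=1e-2, b1=0.9, b2=0.999, eps=1e-3):
    m = b1 * m + (1 - b1) * g
    # Additive, sign-controlled update: v moves toward g^2 in steps of size
    # (1 - b2) * g^2, instead of Adam's multiplicative decay of v.
    v = v - (1 - b2) * np.sign(v - g ** 2) * g ** 2
    return w - lr * m / (np.sqrt(v) + eps), m, v
```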
Shampoo
A second-order-style optimizer that preconditions gradients with per-layer Kronecker-factored statistics matrices (a structured approximation of full-matrix preconditioning), accelerating convergence for ill-conditioned problems.
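A minimal sketch of one Shampoo step for a 2-D weight matrix, omitting practical refinements such as infrequent root computation and grafting; the eigendecomposition-based inverse fourth root is one possible implementation.

```python
import numpy as np

def inv_fourth_root(M, eps=1e-6):
    # M^(-1/4) for a symmetric positive semidefinite matrix, via eigendecomposition.
    vals, vecs = np.linalg.eigh(M)
    return vecs @ np.diag((vals + eps) ** -0.25) @ vecs.T

def shampoo_step(W, G, L, R, lr=0.1):
    L = L + G @ G.T                 # left (row) gradient statistics
    R = R + G.T @ G                 # right (column) gradient statistics
    return W - lr * inv_fourth_root(L) @ G @ inv_fourth_root(R), L, R
```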
Learning Rate Restart
A cyclical technique where the learning rate is periodically reset to its initial value, allowing the model to escape local minima and explore new regions of the solution space.
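A sketch using PyTorch's cosine annealing with warm restarts (SGDR-style); T_0 and T_mult are illustrative values.

```python
import torch
from torch import nn, optim

model = nn.Linear(10, 1)
opt = optim.SGD(model.parameters(), lr=0.1, momentum=0.9)

# The learning rate follows a cosine decay and is reset to its initial value at
# the start of each cycle; with T_mult=2 each cycle is twice as long as the last.
sched = optim.lr_scheduler.CosineAnnealingWarmRestarts(opt, T_0=10, T_mult=2)

for epoch in range(70):
    # ... usual training loop with opt.zero_grad() / loss.backward() / opt.step() ...
    sched.step()
```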