AI Glossary

The complete dictionary of Artificial Intelligence

162 Categories · 2,032 Subcategories · 23,060 Terms
Nesterov Momentum

Variant of the momentum algorithm that applies a lookahead correction by calculating the gradient at the estimated future position, accelerating convergence and reducing oscillations.
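A minimal sketch of one Nesterov update in Python; names are illustrative, and grad is a hypothetical callable returning the gradient at a given point:

def nesterov_step(w, v, grad, lr=0.01, mu=0.9):
    # Evaluate the gradient at the estimated future position w + mu * v
    # (the "lookahead"), not at the current weights.
    g = grad(w + mu * v)
    v = mu * v - lr * g    # velocity update uses the lookahead gradient
    return w + v, v        # new weights and new velocity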

Adam (Adaptive Moment Estimation)

Optimization algorithm combining the ideas of Momentum and RMSprop, using estimates of the first and second moments of gradients to adapt the learning rate of each parameter.
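A compact NumPy sketch of one Adam step, assuming t counts iterations from 1 (illustrative, not a library API):

import numpy as np

def adam_step(w, g, m, v, t, lr=1e-3, b1=0.9, b2=0.999, eps=1e-8):
    m = b1 * m + (1 - b1) * g         # first moment: running mean of gradients
    v = b2 * v + (1 - b2) * g**2      # second moment: uncentered variance
    m_hat = m / (1 - b1**t)           # bias corrections for the zero init
    v_hat = v / (1 - b2**t)
    return w - lr * m_hat / (np.sqrt(v_hat) + eps), m, v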

AdaGrad

Adaptive optimizer that adjusts the learning rate of each parameter based on the historical sum of squared gradients, favoring infrequently updated parameters.
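A sketch of the AdaGrad rule; G is the running sum of squared gradients, initialized to zeros (illustrative names):

import numpy as np

def adagrad_step(w, g, G, lr=0.01, eps=1e-8):
    G = G + g**2    # accumulate squared gradients over the whole run
    # Parameters with a small accumulated G (rarely or weakly updated)
    # keep a comparatively large effective step size.
    return w - lr * g / (np.sqrt(G) + eps), G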

AdaDelta

Extension of AdaGrad that limits the accumulation window of past gradients to a fixed size via a moving average, avoiding the aggressive decay of the learning rate.
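A sketch of one AdaDelta step; eg2 and edx2 are the decaying averages of squared gradients and squared updates (illustrative names):

import numpy as np

def adadelta_step(w, g, eg2, edx2, rho=0.95, eps=1e-6):
    eg2 = rho * eg2 + (1 - rho) * g**2                   # decaying average of g^2
    dx = -np.sqrt(edx2 + eps) / np.sqrt(eg2 + eps) * g   # note: no global learning rate
    edx2 = rho * edx2 + (1 - rho) * dx**2                # decaying average of dx^2
    return w + dx, eg2, edx2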

Learning Rate Decay

Strategy for progressively reducing the learning rate during training, often according to a predefined schedule (step, exponential, or cosine), to fine-tune convergence towards a minimum.
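The three schedules named above, sketched as plain Python functions (hyperparameter values are illustrative):

import math

def step_decay(lr0, epoch, drop=0.5, every=10):
    return lr0 * drop ** (epoch // every)      # e.g. halve the rate every 10 epochs

def exponential_decay(lr0, epoch, k=0.05):
    return lr0 * math.exp(-k * epoch)          # smooth exponential decline

def cosine_decay(lr0, epoch, total_epochs):
    return 0.5 * lr0 * (1 + math.cos(math.pi * epoch / total_epochs))  # anneal to 0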

LAMB Optimizer (Layer-wise Adaptive Moments)

Optimization algorithm designed for large-scale training, adapting the learning rate per layer using the norm of weights and gradients, effective for very large batch sizes.
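A simplified sketch of a LAMB step for one layer's weight tensor; the published algorithm additionally clips the weight-norm term, and names here are illustrative:

import numpy as np

def lamb_step(w, g, m, v, t, lr=1e-3, b1=0.9, b2=0.999, eps=1e-6, wd=0.01):
    m = b1 * m + (1 - b1) * g
    v = b2 * v + (1 - b2) * g**2
    update = m / (1 - b1**t) / (np.sqrt(v / (1 - b2**t)) + eps) + wd * w
    # Layer-wise trust ratio: norm of the weights over norm of the update.
    trust = np.linalg.norm(w) / max(np.linalg.norm(update), 1e-12)
    return w - lr * trust * update, m, v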

LARS Optimizer (Layer-wise Adaptive Rate Scaling)

Optimization method that adapts the learning rate for each layer based on the ratio between the norm of weights and the norm of gradients, particularly suitable for training with large batches.
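A simplified sketch of a LARS step, applied per weight tensor (constants are illustrative):

import numpy as np

def lars_step(w, g, v, lr=0.1, mu=0.9, eta=1e-3, wd=5e-4):
    # Local rate from the ratio of weight norm to gradient norm for this layer.
    local_lr = eta * np.linalg.norm(w) / (np.linalg.norm(g)
                                          + wd * np.linalg.norm(w) + 1e-12)
    v = mu * v + lr * local_lr * (g + wd * w)   # momentum on the rescaled gradient
    return w - v, v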

Lookahead Optimizer

Optimization mechanism that periodically moves the 'slow' weights toward the 'fast' weights produced by an inner optimizer, improving generalization and convergence stability.
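A sketch of the outer Lookahead loop; inner_step stands for one update of any fast optimizer (a hypothetical callable):

def lookahead_train(w_slow, inner_step, steps=100, k=5, alpha=0.5):
    w_fast = w_slow
    for t in range(1, steps + 1):
        w_fast = inner_step(w_fast)                      # fast weights move every step
        if t % k == 0:
            w_slow = w_slow + alpha * (w_fast - w_slow)  # slow weights interpolate
            w_fast = w_slow                              # fast weights restart from slow
    return w_slow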

RAdam (Rectified Adam)

A variant of Adam that corrects the variance of the learning rate adaptation in the early stages of training, offering more stable convergence without requiring a warmup phase.
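A sketch of the rectification logic following the published update; names are illustrative:

import numpy as np

def radam_step(w, g, m, v, t, lr=1e-3, b1=0.9, b2=0.999, eps=1e-8):
    m = b1 * m + (1 - b1) * g
    v = b2 * v + (1 - b2) * g**2
    m_hat = m / (1 - b1**t)
    rho_inf = 2 / (1 - b2) - 1
    rho = rho_inf - 2 * t * b2**t / (1 - b2**t)  # tracks variance of the adaptive rate
    if rho > 4:  # variance is tractable: take the rectified adaptive step
        r = np.sqrt((rho - 4) * (rho - 2) * rho_inf /
                    ((rho_inf - 4) * (rho_inf - 2) * rho))
        return w - lr * r * m_hat / (np.sqrt(v / (1 - b2**t)) + eps), m, v
    return w - lr * m_hat, m, v                  # early steps: momentum-style fallback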

SWATS (Switching from Adam to SGD)

A strategy that starts training with an adaptive optimizer like Adam for fast convergence, then switches to Stochastic Gradient Descent (SGD) for better generalization.
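A deliberately simplified training loop showing the idea; the real SWATS derives both the switch point and the SGD learning rate from the Adam steps themselves, whereas here the switch iteration is a hypothetical fixed value (adam_step reuses the sketch from the Adam entry above):

def swats_train(w, grad, steps=1000, switch_at=300, sgd_lr=0.01):
    m = v = 0.0
    for t in range(1, steps + 1):
        g = grad(w)
        if t < switch_at:
            w, m, v = adam_step(w, g, m, v, t)   # adaptive phase: fast early progress
        else:
            w = w - sgd_lr * g                   # plain SGD for better generalization
    return w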

Yogi Optimizer

A modification of Adam aimed at providing more stable convergence by using a less aggressive second-moment update, reducing oscillations and improving performance on complex tasks.
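A sketch of the Yogi update; the only change from Adam is the additive, sign-based second-moment rule (bias correction omitted for brevity):

import numpy as np

def yogi_step(w, g, m, v, lr=1e-2, b1=0.9, b2=0.999, eps=1e-3):
    m = b1 * m + (1 - b1) * g
    # v moves by at most (1 - b2) * g^2 per step, regardless of how far
    # it currently is from g^2 -- a gentler rule than Adam's multiplicative one.
    v = v - (1 - b2) * np.sign(v - g**2) * g**2
    return w - lr * m / (np.sqrt(v) + eps), m, v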

Shampoo

A second-order-style optimizer that preconditions gradients using blockwise, Kronecker-factored approximations of a full-matrix preconditioner, accelerating convergence on ill-conditioned problems.
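A sketch of the matrix case: two Kronecker factors are accumulated per weight matrix, and their inverse fourth roots precondition the gradient (eigendecomposition used for clarity, not speed):

import numpy as np

def inv_fourth_root(M, eps=1e-4):
    # Symmetric inverse fourth root via eigendecomposition.
    vals, vecs = np.linalg.eigh(M + eps * np.eye(M.shape[0]))
    return vecs @ np.diag(vals ** -0.25) @ vecs.T

def shampoo_step(W, G, L, R, lr=0.1):
    L = L + G @ G.T                 # left (row-space) statistics
    R = R + G.T @ G                 # right (column-space) statistics
    return W - lr * inv_fourth_root(L) @ G @ inv_fourth_root(R), L, R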

Learning Rate Restart

A cyclical technique where the learning rate is periodically reset to its initial value, allowing the model to escape local minima and explore new regions of the solution space.
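A sketch of cosine annealing with warm restarts in the style of SGDR, using a fixed cycle length for simplicity (values are illustrative):

import math

def restart_lr(step, cycle_len=1000, lr_max=0.1, lr_min=1e-4):
    t = step % cycle_len   # position within the current cycle
    # Decay from lr_max to lr_min over the cycle, then jump back to lr_max.
    return lr_min + 0.5 * (lr_max - lr_min) * (1 + math.cos(math.pi * t / cycle_len))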
