AI Glossary

The complete dictionary of Artificial Intelligence

162 categories · 2,032 subcategories · 23,060 terms

Attention Scaling

Normalization of attention scores by dividing them by the square root of the key dimensionality, which keeps their variance roughly constant and stabilizes the training of Transformer models.
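
A minimal sketch of the technique in plain NumPy (function and variable names are illustrative, not tied to any particular library):

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """softmax(Q K^T / sqrt(d_k)) V — scaled dot-product attention."""
    d_k = Q.shape[-1]                                 # dimensionality of queries and keys
    scores = Q @ K.T / np.sqrt(d_k)                   # scaling keeps score variance near 1
    scores -= scores.max(axis=-1, keepdims=True)      # max-subtraction for numerical safety
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)    # softmax over the key axis
    return weights @ V

# Example: 4 query positions, 6 key/value positions, d_k = 8
rng = np.random.default_rng(0)
Q = rng.normal(size=(4, 8))
K = rng.normal(size=(6, 8))
V = rng.normal(size=(6, 8))
out = scaled_dot_product_attention(Q, K, V)           # shape (4, 8)
```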

Dimensional Scaling Factor

Coefficient √dk used to normalize attention scores, where dk represents the dimensionality of query and key vectors in the Transformer architecture.
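
The factor appears in the scaled dot-product attention formula of the original Transformer:

```latex
\mathrm{Attention}(Q, K, V) = \mathrm{softmax}\!\left(\frac{Q K^{\top}}{\sqrt{d_k}}\right) V
```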

Gradient Stabilization

Process aimed at keeping gradients within a stable numerical range during backpropagation, essential for preventing training issues in deep networks.

Attention Score Normalization

Normalization of similarity scores before applying Softmax to control the probability distribution and prevent extreme attention concentrations.

Query-Key Dimensionality

Common dimension of query and key vectors in multi-head attention, whose square root determines the normalization scaling factor.

Attention Variance Control

Maintenance of constant variance of attention scores across different layers to ensure optimal numerical stability of the model.
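
A quick numerical check of the variance argument (dimensions and sample size are illustrative): when query and key components are independent with unit variance, the raw dot product has variance close to dk, and dividing by √dk brings it back to about 1.

```python
import numpy as np

rng = np.random.default_rng(0)
d_k = 64
q = rng.normal(size=(100_000, d_k))    # unit-variance query components
k = rng.normal(size=(100_000, d_k))    # unit-variance key components

raw = (q * k).sum(axis=1)              # raw dot products q . k
scaled = raw / np.sqrt(d_k)            # scores after dividing by sqrt(d_k)

print(raw.var())                       # ~ d_k (about 64)
print(scaled.var())                    # ~ 1
```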

Numerical Stability in Attention

Set of techniques ensuring that attention calculations remain within manageable numerical ranges, preventing floating-point overflows and underflows.
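
One standard ingredient is the max-subtraction trick inside softmax; a minimal sketch (values are illustrative):

```python
import numpy as np

def stable_softmax(scores):
    # Subtracting the row-wise maximum leaves the softmax output unchanged
    # but keeps np.exp away from overflow when score magnitudes are large.
    shifted = scores - scores.max(axis=-1, keepdims=True)
    e = np.exp(shifted)
    return e / e.sum(axis=-1, keepdims=True)

scores = np.array([[1000.0, 1001.0, 999.0]])
print(stable_softmax(scores))          # finite weights that sum to 1
# A naive np.exp(scores) would overflow to inf and produce NaN weights.
```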

Score Distribution Sharpening

Phenomenon in which attention distributions become overly concentrated when scores are not properly normalized, saturating the softmax and degrading model behavior.
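
A small illustration (scores are arbitrary): multiplying the same scores by a large factor pushes the softmax toward a near one-hot distribution.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

scores = np.array([0.5, 0.2, -0.1, 0.3])
print(softmax(scores))        # reasonably spread attention weights
print(softmax(scores * 8))    # sharply concentrated on the largest score
```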

Multi-Head Attention Scaling

Application of the √dk scaling factor independently to each attention head in the multi-head architecture to maintain consistency across parallel representations.
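
A simplified sketch of per-head scaling, assuming the common setup dk = dmodel / num_heads; the learned per-head projections are omitted to keep the example short, and all names and shapes are illustrative.

```python
import numpy as np

def multi_head_attention(Q, K, V, num_heads):
    """Q, K, V: (seq_len, d_model). Split into heads and scale each head by sqrt(d_k)."""
    d_model = Q.shape[-1]
    d_k = d_model // num_heads                        # per-head dimensionality
    heads = []
    for h in range(num_heads):
        s = slice(h * d_k, (h + 1) * d_k)
        q, k, v = Q[:, s], K[:, s], V[:, s]
        scores = q @ k.T / np.sqrt(d_k)               # the sqrt(d_k) factor, applied per head
        scores -= scores.max(axis=-1, keepdims=True)
        w = np.exp(scores)
        w /= w.sum(axis=-1, keepdims=True)
        heads.append(w @ v)
    return np.concatenate(heads, axis=-1)             # back to (seq_len, d_model)

rng = np.random.default_rng(0)
X = rng.normal(size=(10, 64))
out = multi_head_attention(X, X, X, num_heads=8)      # shape (10, 64)
```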

Embedding Dimension Normalization

Normalization technique based on embedding dimensionality to ensure comparable magnitude of vector representations in the attention space.
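
One well-known instance is the original Transformer's multiplication of token embeddings by √dmodel so their magnitude is comparable to the positional encodings; the sketch below assumes that reading, with an illustrative vocabulary size and initialization scale.

```python
import numpy as np

d_model, vocab_size = 512, 1000                               # illustrative sizes
rng = np.random.default_rng(0)
embedding_table = rng.normal(scale=1.0 / np.sqrt(d_model), size=(vocab_size, d_model))

token_ids = np.array([17, 42, 7])
embeddings = embedding_table[token_ids] * np.sqrt(d_model)    # rescale to unit-order magnitude
print(embeddings.std())                                       # roughly 1, comparable to sin/cos encodings
```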

Attention Temperature Scaling

Dynamic adjustment of the scaling factor to modulate attention concentration, enabling fine-grained control over attention weight distribution.
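
A sketch of a temperature-controlled softmax over attention scores (the temperature parameter and values are illustrative): temperatures below 1 sharpen the distribution, temperatures above 1 flatten it.

```python
import numpy as np

def attention_weights(scores, temperature=1.0):
    """Softmax over scores / temperature; lower temperature gives sharper attention."""
    z = scores / temperature
    z -= z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

scores = np.array([2.0, 1.0, 0.5, 0.1])
print(attention_weights(scores, temperature=0.5))   # sharper distribution
print(attention_weights(scores, temperature=2.0))   # flatter distribution
```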

Gradient Flow Optimization

Optimization of gradient pathways through attention layers to maintain effective learning in deep networks.

Score Magnitude Regularization

Control of attention score magnitude through normalization to prevent numerical instabilities and improve model convergence.

Attention Entropy Preservation

Maintenance of appropriate entropy levels in attention distributions through normalization, preventing overly sharp or overly uniform distributions.
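
A sketch of measuring that entropy directly (shapes and the number of keys are illustrative): the scaled scores typically yield a distribution with noticeably higher entropy than the unscaled ones.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def entropy(p):
    return float(-(p * np.log(p + 1e-12)).sum())     # entropy in nats

rng = np.random.default_rng(0)
d_k = 64
q = rng.normal(size=d_k)                             # one query vector
keys = rng.normal(size=(16, d_k))                    # 16 key vectors

raw = keys @ q                                       # unscaled scores, magnitude ~ sqrt(d_k)
scaled = raw / np.sqrt(d_k)                          # scaled scores, magnitude ~ 1

print(entropy(softmax(raw)), entropy(softmax(scaled)))
# The scaled distribution is usually much closer to the maximum entropy log(16) ≈ 2.77,
# i.e. attention is neither collapsed onto one key nor forced to be uniform.
```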
