
AI Glossary

The complete dictionary of Artificial Intelligence

162 categories · 2,032 subcategories · 23,060 terms

Sparse Transformer

Transformer variant that uses predefined sparse attention patterns to reduce the number of computed connections while still capturing long-range dependencies. The architecture factorizes full attention into strided and fixed subsets, lowering complexity from O(n²) to roughly O(n√n).
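
A minimal NumPy sketch of the idea, combining a block-local pattern and a strided pattern into one causal mask; the specific patterns and the `stride` value are illustrative assumptions, not the paper's exact configuration:

```python
import numpy as np

def sparse_attention_mask(n, stride):
    """Factorized sparse mask in the style of Child et al. (2019):
    each position attends to its local stride-sized block plus every
    position at a multiple of `stride` away (the strided subset)."""
    mask = np.zeros((n, n), dtype=bool)
    for i in range(n):
        start = (i // stride) * stride
        mask[i, start:i + 1] = True          # local block
        mask[i, i % stride::stride] = True   # strided columns
    return np.tril(mask)                     # keep it causal

def masked_attention(q, k, v, mask):
    scores = q @ k.T / np.sqrt(q.shape[-1])
    scores = np.where(mask, scores, -1e9)    # block disallowed pairs
    w = np.exp(scores - scores.max(-1, keepdims=True))
    w /= w.sum(-1, keepdims=True)
    return w @ v

n, d, stride = 16, 8, 4
rng = np.random.default_rng(0)
q, k, v = (rng.normal(size=(n, d)) for _ in range(3))
print(masked_attention(q, k, v, sparse_attention_mask(n, stride)).shape)  # (16, 8)
```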

Compressive Transformer

Extension of Transformer-XL that, instead of discarding old hidden-state memories, compresses them into a smaller set of coarser vectors to preserve long-term history. This compression enables efficient storage of far longer context at a fixed memory cost.
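
A minimal sketch of the memory update, assuming mean pooling as the compression function (one of several options in the original paper); `compress_memories`, `mem_len`, and `compression_rate` are hypothetical names for illustration:

```python
import numpy as np

def compress_memories(memory, compressed, mem_len, compression_rate=4):
    """Sketch of the Compressive Transformer memory update: hidden states
    that overflow the recent-memory budget are mean-pooled into coarser
    vectors instead of being discarded. Overflow groups that do not fill
    a whole pooling window are dropped in this simplified version."""
    overflow = memory[:-mem_len] if len(memory) > mem_len else memory[:0]
    if len(overflow):
        usable = len(overflow) - len(overflow) % compression_rate
        pooled = overflow[:usable].reshape(-1, compression_rate,
                                           overflow.shape[-1]).mean(1)
        compressed = np.concatenate([compressed, pooled])
    return memory[-mem_len:], compressed

d = 8
memory = np.random.randn(12, d)        # 12 cached hidden states
compressed = np.empty((0, d))
memory, compressed = compress_memories(memory, compressed, mem_len=8)
print(memory.shape, compressed.shape)  # (8, 8) (1, 8)
```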

Universal Transformer

Architecture whose depth is determined dynamically by an adaptive halting mechanism rather than being fixed in advance. The Universal Transformer repeatedly applies the same shared-weight transformation to every position, with Adaptive Computation Time (ACT) deciding per position when to stop refining.
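
A minimal sketch of the depth recurrence with ACT-style halting; `step_fn` and `halt_fn` are hypothetical stand-ins for the shared transformer block and the halting unit, and the bookkeeping is simplified relative to the full ACT algorithm:

```python
import numpy as np

def universal_transformer(h, step_fn, halt_fn, threshold=0.99, max_steps=8):
    """Apply the same shared-weight transition repeatedly; each position
    accumulates halting probability and stops contributing once it
    crosses the threshold. The output is the ponder-weighted state."""
    n, _ = h.shape
    cumulative = np.zeros(n)          # accumulated halting probability
    weighted = np.zeros_like(h)       # ponder-weighted final state
    for _ in range(max_steps):
        active = cumulative < threshold
        if not active.any():
            break
        p = halt_fn(h) * active       # halting prob for active positions
        crossing = active & (cumulative + p >= threshold)
        p = np.where(crossing, 1.0 - cumulative, p)  # spend the remainder
        weighted += p[:, None] * h
        cumulative += p
        h = step_fn(h)                # same shared transformation each step
    return weighted

rng = np.random.default_rng(0)
W = rng.normal(size=(8, 8)) / np.sqrt(8)
h = rng.normal(size=(5, 8))
out = universal_transformer(
    h,
    step_fn=lambda x: np.tanh(x @ W),
    halt_fn=lambda x: 1 / (1 + np.exp(-x[:, 0])),
)
print(out.shape)  # (5, 8)
```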

Set Transformer

Permutation-invariant attention architecture for processing sets, whose elements have no predefined order. Set Transformer uses Induced Set Attention Blocks (ISAB), which route attention through a small number of learned inducing points, and attention-based pooling to aggregate a set into a fixed-size output.
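
A single-head sketch of an ISAB, omitting the feedforward and LayerNorm sublayers of the real block; the inducing points would be learned parameters in practice. The check at the end demonstrates permutation equivariance (invariance comes from the pooling stage of the full model):

```python
import numpy as np

def attention(q, k, v):
    w = q @ k.T / np.sqrt(q.shape[-1])
    w = np.exp(w - w.max(-1, keepdims=True))
    w /= w.sum(-1, keepdims=True)
    return w @ v

def isab(x, inducing):
    """Induced Set Attention Block: m inducing points attend to the
    whole set, then the set attends back to that m-point summary,
    avoiding the O(n^2) cost of set self-attention."""
    h = attention(inducing, x, x)   # (m, d): summary of the whole set
    return attention(x, h, h)       # (n, d): set updated from the summary

rng = np.random.default_rng(0)
x = rng.normal(size=(20, 8))           # a set of 20 elements
inducing = rng.normal(size=(4, 8))     # m = 4 learned inducing points
out = isab(x, inducing)

# Shuffling the set only permutes the output rows accordingly
perm = rng.permutation(20)
assert np.allclose(isab(x[perm], inducing), out[perm])
print(out.shape)  # (20, 8)
```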

Synthesizer

Variant in which attention weights are learned directly as free parameters (Random Synthesizer) or generated from each token independently by a small feedforward network (Dense Synthesizer), rather than from comparisons between token pairs. This approach eliminates the query-key dot-product similarity computation entirely.
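
A sketch of the Random Synthesizer variant, where the attention logit matrix is simply a learned parameter independent of the input:

```python
import numpy as np

def random_synthesizer(v, attn_logits):
    """Random Synthesizer (Tay et al., 2020): the n-by-n attention logits
    are free learned parameters, independent of token content, so no
    query-key dot product is ever computed."""
    w = np.exp(attn_logits - attn_logits.max(-1, keepdims=True))
    w /= w.sum(-1, keepdims=True)
    return w @ v

n, d = 10, 8
rng = np.random.default_rng(0)
attn_logits = rng.normal(size=(n, n))  # learned parameter, fixed wrt input
v = rng.normal(size=(n, d))
print(random_synthesizer(v, attn_logits).shape)  # (10, 8)
```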

Linear Transformer

Architecture that uses a kernelized decomposition of attention to achieve time and memory complexity linear in sequence length. Linear Transformer replaces the softmax with a positive kernel feature map, which allows the matrix product to be reassociated and computed in O(n) rather than O(n²).
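
A minimal sketch of kernelized linear attention using the elu(x) + 1 feature map from the original paper; the causal variant with running sums is omitted for brevity:

```python
import numpy as np

def elu_feature_map(x):
    # positive feature map phi(x) = elu(x) + 1 (Katharopoulos et al., 2020)
    return np.where(x > 0, x + 1.0, np.exp(x))

def linear_attention(q, k, v, eps=1e-6):
    """softmax(QK^T)V is replaced by phi(Q)(phi(K)^T V) normalized by
    phi(Q)(phi(K)^T 1); reassociating the product makes the cost
    O(n d^2) instead of O(n^2 d)."""
    q, k = elu_feature_map(q), elu_feature_map(k)
    kv = k.T @ v                       # (d, d) summary, computed once
    z = q @ k.sum(0)                   # per-query normalizer
    return (q @ kv) / (z[:, None] + eps)

n, d = 100, 16
rng = np.random.default_rng(0)
q, k, v = (rng.normal(size=(n, d)) for _ in range(3))
print(linear_attention(q, k, v).shape)  # (100, 16)
```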

Local Attention

Attention mechanism restricted to a local window around each position, drastically reducing the number of token pairs to score: cost drops from O(n²) to O(n·w) for window size w. This approach is particularly effective for data with strong local structure.
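
A sketch of windowed attention via a band mask; real implementations avoid materializing the full n×n matrix, which this toy version does for clarity:

```python
import numpy as np

def local_attention_mask(n, window):
    """Boolean mask where position i may attend only to positions j
    with |i - j| <= window, giving O(n * window) scored pairs."""
    idx = np.arange(n)
    return np.abs(idx[:, None] - idx[None, :]) <= window

def masked_attention(q, k, v, mask):
    scores = q @ k.T / np.sqrt(q.shape[-1])
    scores = np.where(mask, scores, -1e9)
    w = np.exp(scores - scores.max(-1, keepdims=True))
    w /= w.sum(-1, keepdims=True)
    return w @ v

n, d = 32, 8
rng = np.random.default_rng(0)
q, k, v = (rng.normal(size=(n, d)) for _ in range(3))
print(masked_attention(q, k, v, local_attention_mask(n, window=3)).shape)  # (32, 8)
```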

Dilated Attention

Extension of sliding-window attention that inserts gaps (dilation) into the window pattern to capture longer-range dependencies without scoring more pairs. Stacking layers with increasing dilation rates expands the receptive field exponentially.
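
A sketch of the dilated mask; it drops into the same masked-attention helper as the Local Attention example above, and `window`/`dilation` are illustrative parameters:

```python
import numpy as np

def dilated_attention_mask(n, window, dilation):
    """Dilated sliding-window mask: position i attends to j when
    |i - j| <= window * dilation and (i - j) is a multiple of the
    dilation, so the same number of pairs covers a wider span."""
    idx = np.arange(n)
    dist = np.abs(idx[:, None] - idx[None, :])
    return (dist <= window * dilation) & (dist % dilation == 0)

# dilation=1 recovers plain local attention; dilation=2 doubles the span
print(dilated_attention_mask(8, window=2, dilation=2).astype(int))
```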

Axial Attention

Decomposition of multidimensional attention into one-dimensional attentions applied sequentially along each axis. For an input of N elements arranged along d axes, axial attention replaces the O(N²) cost of full attention with roughly O(d · N^((d+1)/d)), e.g. O(N√N) for 2-D images.
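
A sketch of axial attention over a 2-D feature grid, running plain self-attention along the width axis and then the height axis; single-head and without projections for brevity:

```python
import numpy as np

def softmax_attention(q, k, v):
    w = q @ np.swapaxes(k, -1, -2) / np.sqrt(q.shape[-1])
    w = np.exp(w - w.max(-1, keepdims=True))
    w /= w.sum(-1, keepdims=True)
    return w @ v

def axial_attention(x):
    """Axial attention over a (H, W, d) grid: self-attention runs within
    each row (along W), then within each column (along H), instead of
    over all H*W positions jointly."""
    x = softmax_attention(x, x, x)        # attend along W, per row
    xt = np.swapaxes(x, 0, 1)             # (W, H, d)
    xt = softmax_attention(xt, xt, xt)    # attend along H, per column
    return np.swapaxes(xt, 0, 1)

rng = np.random.default_rng(0)
grid = rng.normal(size=(6, 5, 8))         # H=6, W=5, d=8
print(axial_attention(grid).shape)  # (6, 5, 8)
```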
