AI Glossary

The complete Artificial Intelligence dictionary

162 categories · 2,032 subcategories · 23,060 terms

Sparse Transformer

Variant that uses predefined sparse attention patterns to reduce the number of attended position pairs while still capturing long-range dependencies. The architecture factorizes full attention into several sparse steps, each restricted to a subset of positions.
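A minimal NumPy sketch of one fixed sparsity pattern (a combined local + strided mask); the sequence length and stride below are illustrative assumptions, not values from any specific model.

```python
import numpy as np

def strided_sparse_mask(n: int, stride: int) -> np.ndarray:
    """Boolean mask where mask[i, j] = True if position i may attend to j."""
    mask = np.zeros((n, n), dtype=bool)
    for i in range(n):
        for j in range(i + 1):                       # causal: only past positions
            local = (i - j) < stride                 # recent tokens
            strided = (j % stride) == (stride - 1)   # periodic "summary" columns
            mask[i, j] = local or strided
    return mask

mask = strided_sparse_mask(n=16, stride=4)
print(mask.sum(), "of", 16 * 16, "pairs kept")       # far fewer than full attention
```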


Compressive Transformer

Extension of Transformer-XL that compresses old hidden memories into a smaller set of denser vectors in order to preserve long-term history. This compression lets the model keep a far longer context at a fraction of the memory cost.
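A minimal sketch of the compression step, assuming mean pooling with rate 3 as the compression function (other choices such as convolutional or learned compressors are possible); the sizes are illustrative.

```python
import numpy as np

def compress(old_memories: np.ndarray, rate: int = 3) -> np.ndarray:
    """Collapse every `rate` consecutive memory vectors into one denser vector."""
    t, d = old_memories.shape
    t = (t // rate) * rate                            # drop any remainder for simplicity
    return old_memories[:t].reshape(t // rate, rate, d).mean(axis=1)

mems = np.random.randn(12, 8)     # 12 old hidden states of width 8
print(compress(mems).shape)       # -> (4, 8): 3x fewer memory slots, same width
```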


Universal Transformer

Adaptive architecture in which depth is determined dynamically by a halting mechanism rather than being fixed. The Universal Transformer repeatedly applies the same shared-weight transformation to every position, and Adaptive Computation Time (ACT) lets each position halt independently once it has been refined enough.
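A minimal NumPy sketch of per-position adaptive halting with a shared transition applied at every step; the toy transition, halting unit, and threshold are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, max_steps, threshold = 6, 8, 10, 0.99
W = rng.standard_normal((d, d)) * 0.1                 # shared weights reused every step
w_halt = rng.standard_normal(d) * 0.1                 # toy halting unit

x = rng.standard_normal((n, d))
halted = np.zeros(n, dtype=bool)
cum_halt = np.zeros(n)

for step in range(max_steps):
    x = np.where(halted[:, None], x, np.tanh(x @ W))  # same transformation each step
    p = 1 / (1 + np.exp(-(x @ w_halt)))               # halting probability per position
    cum_halt = np.where(halted, cum_halt, cum_halt + p)
    halted |= cum_halt >= threshold                    # positions stop independently
    if halted.all():
        break
print("steps used:", step + 1)
```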


Set Transformer

Permutation-invariant, attention-based architecture for processing sets of elements with no predefined order. The Set Transformer uses Induced Set Attention Blocks (ISAB) and attention-based pooling to encode and aggregate sets efficiently.
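A minimal sketch of attention-based pooling over a set with learned seed queries (in the spirit of the pooling block); the sizes and the single-head simplification are assumptions for illustration.

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def attention_pool(seeds: np.ndarray, items: np.ndarray) -> np.ndarray:
    """seeds: (k, d) learned queries; items: (m, d) unordered set elements."""
    scores = seeds @ items.T / np.sqrt(items.shape[1])
    return softmax(scores) @ items            # (k, d) summary, order-independent

rng = np.random.default_rng(0)
items = rng.standard_normal((5, 4))
seeds = rng.standard_normal((1, 4))
out1 = attention_pool(seeds, items)
out2 = attention_pool(seeds, items[::-1])     # shuffled set gives the same result
print(np.allclose(out1, out2))                # True: permutation-invariant
```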


Synthesizer

Variant in which attention weights are either learned directly as per-position parameters or generated by small feed-forward networks from each token independently, rather than from pairwise token interactions. This removes the need for query-key (QK) similarity computations.
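A minimal sketch in the spirit of the dense variant: each token's attention row is produced by a small two-layer network, with no query-key dot products; all names and sizes are illustrative.

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

rng = np.random.default_rng(0)
n, d = 6, 8
W1 = rng.standard_normal((d, d)) * 0.1
W2 = rng.standard_normal((d, n)) * 0.1        # maps each token to n attention logits

x = rng.standard_normal((n, d))
logits = np.maximum(x @ W1, 0) @ W2           # (n, n) synthesized scores per token
weights = softmax(logits)                     # no QK^T anywhere
out = weights @ x                             # value projection omitted for brevity
print(out.shape)                              # (6, 8)
```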


Linear Transformer

Architecture that uses a kernel-based decomposition of attention to achieve time and memory complexity linear in sequence length. The Linear Transformer replaces the softmax with positive kernel feature maps, which allows the matrix products to be reordered associatively.
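A minimal sketch of non-causal linear attention with the elu(x)+1 feature map; the sizes are illustrative, and normalization details of real implementations are simplified. Computing the key-value summary first is what makes the cost linear in sequence length.

```python
import numpy as np

def feature_map(x):
    return np.where(x > 0, x + 1, np.exp(x))   # elu(x) + 1, always positive

rng = np.random.default_rng(0)
n, d = 128, 16
Q, K, V = (rng.standard_normal((n, d)) for _ in range(3))

phi_q, phi_k = feature_map(Q), feature_map(K)
kv = phi_k.T @ V                               # (d, d) summary: independent of n
z = phi_q @ phi_k.sum(axis=0)                  # per-query normalizer
out = (phi_q @ kv) / z[:, None]                # O(n * d^2) instead of O(n^2 * d)
print(out.shape)                               # (128, 16)
```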


Local Attention

Attention mechanism restricted to local neighborhoods around each position, drastically reducing the number of token pairs to consider. This approach is particularly effective for data with strong local structure.
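A minimal sketch of a causal sliding-window mask; the window size is an illustrative assumption.

```python
import numpy as np

def local_mask(n: int, window: int) -> np.ndarray:
    """mask[i, j] = True if j is within the `window` tokens up to and including i."""
    i = np.arange(n)[:, None]
    j = np.arange(n)[None, :]
    return (j <= i) & (i - j < window)

print(local_mask(8, 3).astype(int))
# Each row has at most 3 ones, so the number of scored pairs grows as O(n * w), not O(n^2).
```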


Dilated Attention

Extension of sliding-window attention that uses dilated patterns to capture longer-range dependencies without increasing the number of attended positions per token. The gaps in the pattern allow the receptive field to expand exponentially across stacked layers.
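A minimal sketch comparing a plain sliding-window mask with a dilated one of the same per-token cost; window and dilation values are illustrative.

```python
import numpy as np

def dilated_mask(n: int, window: int, dilation: int) -> np.ndarray:
    """mask[i, j] = True if j lies at offset 0, r, 2r, ... up to window*r behind i."""
    i = np.arange(n)[:, None]
    j = np.arange(n)[None, :]
    offset = i - j
    return (offset >= 0) & (offset <= window * dilation) & (offset % dilation == 0)

plain = dilated_mask(16, window=3, dilation=1)
dilated = dilated_mask(16, window=3, dilation=4)
print(plain.sum(axis=1).max(), dilated.sum(axis=1).max())
# Same maximum attended positions per token, but the dilated pattern
# reaches 12 tokens back instead of 3.
```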


Axial Attention

Decomposition of multidimensional attention into one-dimensional attentions applied sequentially along each axis. For a d-dimensional input with n total positions, axial attention reduces the cost from O(n²) to roughly O(d · n^(1+1/d)).
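A minimal sketch of axial attention over a 2-D grid, attending along rows and then columns; the grid size and the single-head simplification are assumptions.

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def axis_attention(x: np.ndarray, axis: int) -> np.ndarray:
    """x: (H, W, d); self-attention restricted to the chosen spatial axis."""
    x = np.moveaxis(x, axis, 1)                        # (other_axis, n_axis, d)
    scores = x @ x.transpose(0, 2, 1) / np.sqrt(x.shape[-1])
    out = softmax(scores) @ x
    return np.moveaxis(out, 1, axis)

rng = np.random.default_rng(0)
grid = rng.standard_normal((8, 8, 16))                 # 64 positions in total
out = axis_attention(axis_attention(grid, axis=0), axis=1)
print(out.shape)                                       # (8, 8, 16)
# Each pass scores only 8 positions per token instead of all 64.
```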
