📖 Efficient Transformers

Axial Attention

Decomposition of multidimensional attention into one-dimensional attentions applied sequentially along each axis. For N tokens arranged in a d-dimensional grid with side length N^(1/d), axial attention reduces the per-layer cost from the O(N²) of full attention to O(d · N^((d+1)/d)), since each of the d axial passes attends over only N^(1/d) positions per token. For an n×n image (N = n²), this means O(n³) instead of O(n⁴).
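A minimal sketch of the idea in PyTorch for a 2D feature map: attention runs first along the height axis, then along the width axis, with the other axis folded into the batch. The names (AxialAttention2D, attention_along_axis) and the single-head, projection-only design are illustrative assumptions, not from a specific library.

```python
# Minimal axial attention sketch (assumed names, single-head, PyTorch).
import torch
import torch.nn as nn


def attention_along_axis(x: torch.Tensor) -> torch.Tensor:
    """Scaled dot-product attention over the second-to-last axis.

    x: (..., L, C) -- attention is computed across the L positions.
    """
    c = x.shape[-1]
    scores = x @ x.transpose(-2, -1) / c ** 0.5  # (..., L, L)
    return scores.softmax(dim=-1) @ x            # (..., L, C)


class AxialAttention2D(nn.Module):
    """1D attention along height, then along width, with residual adds.

    For an H x W grid (N = H*W tokens), each axial pass costs
    O(N * axis_length) rather than the O(N^2) of full 2D attention.
    """

    def __init__(self, channels: int):
        super().__init__()
        self.proj_h = nn.Linear(channels, channels)
        self.proj_w = nn.Linear(channels, channels)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, H, W, C)
        # Height pass: move W next to the batch dim, attend over H.
        h = attention_along_axis(self.proj_h(x).transpose(1, 2))  # (B, W, H, C)
        x = x + h.transpose(1, 2)
        # Width pass: H already acts as a batch-like dim, attend over W.
        return x + attention_along_axis(self.proj_w(x))           # (B, H, W, C)


if __name__ == "__main__":
    x = torch.randn(2, 16, 16, 32)   # (batch, H, W, channels)
    out = AxialAttention2D(32)(x)
    print(out.shape)                 # torch.Size([2, 16, 16, 32])
```

Note the key trick: each pass reshapes the grid so one axis becomes the attention length and the rest are treated as batch, which is what turns one O(N²) attention into d cheaper one-dimensional ones.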
