AI Glossary

The complete Artificial Intelligence dictionary

162 categories · 2,032 subcategories · 23,060 terms

Self-Attention

Fundamental mechanism that lets transformers dynamically compute the relative importance of each element in a sequence with respect to the others.

2 terms
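
A minimal NumPy sketch of scaled dot-product self-attention; the function name and toy dimensions are illustrative, not from any particular library:

```python
import numpy as np

def self_attention(X, Wq, Wk, Wv):
    """Scaled dot-product self-attention over a single sequence X."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv          # project the same sequence three ways
    scores = Q @ K.T / np.sqrt(K.shape[-1])    # pairwise relevance of each token to every other
    w = np.exp(scores - scores.max(-1, keepdims=True))
    w /= w.sum(-1, keepdims=True)              # softmax: each row sums to 1
    return w @ V                               # each output mixes all values by relevance

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 8))                    # 5 tokens, model dimension 8
Wq, Wk, Wv = (rng.normal(size=(8, 8)) for _ in range(3))
print(self_attention(X, Wq, Wk, Wv).shape)     # (5, 8)
```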

Multi-Head Attention

Extension of self-attention where multiple attention heads operate in parallel to capture different types of relationships in the data.

4 terms
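
A minimal sketch of the parallel-heads idea, assuming NumPy and a model dimension divisible by the head count; all names and shapes are illustrative:

```python
import numpy as np

def multi_head_attention(X, Wq, Wk, Wv, Wo, n_heads):
    """Split projections into n_heads, attend per head, then recombine."""
    T, d = X.shape
    dh = d // n_heads
    def split(A):                                       # (T, d) -> (n_heads, T, dh)
        return A.reshape(T, n_heads, dh).transpose(1, 0, 2)
    Q, K, V = split(X @ Wq), split(X @ Wk), split(X @ Wv)
    scores = Q @ K.transpose(0, 2, 1) / np.sqrt(dh)     # one score matrix per head
    w = np.exp(scores - scores.max(-1, keepdims=True))
    w /= w.sum(-1, keepdims=True)
    heads = w @ V                                       # (n_heads, T, dh)
    concat = heads.transpose(1, 0, 2).reshape(T, d)     # re-join the heads
    return concat @ Wo                                  # final output projection

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 8))
Wq, Wk, Wv, Wo = (rng.normal(size=(8, 8)) for _ in range(4))
print(multi_head_attention(X, Wq, Wk, Wv, Wo, n_heads=2).shape)  # (5, 8)
```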

Positional Encoding

Technique that incorporates sequential position information into embeddings to compensate for the absence of recurrence in transformers.

6 terms
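
The sinusoidal scheme from the original transformer paper is the classic example; below is a minimal NumPy sketch (names illustrative):

```python
import numpy as np

def sinusoidal_encoding(seq_len, d_model):
    """Sin/cos positional encoding in the style of 'Attention Is All You Need'."""
    pos = np.arange(seq_len)[:, None]                 # token positions
    i = np.arange(0, d_model, 2)[None, :]             # even dimension indices
    angles = pos / np.power(10000.0, i / d_model)
    pe = np.zeros((seq_len, d_model))
    pe[:, 0::2] = np.sin(angles)                      # even dims: sine
    pe[:, 1::2] = np.cos(angles)                      # odd dims: cosine
    return pe

emb = np.random.default_rng(0).normal(size=(5, 8))    # token embeddings
print((emb + sinusoidal_encoding(5, 8)).shape)        # position info added in: (5, 8)
```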

Encoder-Decoder Architecture

Fundamental structure of the original transformer, combining an encoder that processes the input with a decoder that generates the output.

8 terms
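
Assuming PyTorch is available, its built-in torch.nn.Transformer shows the two halves wired together in a few lines; the dimensions below are toy values:

```python
import torch

# Minimal encoder-decoder pass (batch_first=True: tensors are (batch, seq, feature)).
model = torch.nn.Transformer(d_model=32, nhead=4,
                             num_encoder_layers=2, num_decoder_layers=2,
                             batch_first=True)
src = torch.randn(1, 10, 32)   # input sequence for the encoder
tgt = torch.randn(1, 7, 32)    # partially generated sequence for the decoder
out = model(src, tgt)          # decoder reads encoder memory via cross-attention
print(out.shape)               # torch.Size([1, 7, 32])
```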

BERT (Bidirectional Encoder Representations)

Family of pre-trained models based on the encoder-only architecture with bidirectional context understanding.

10 terms
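
Assuming the Hugging Face transformers library is installed and the checkpoint can be downloaded, a fill-mask pipeline makes the bidirectionality tangible: the prediction draws on context from both sides of the mask:

```python
from transformers import pipeline

# BERT fills the blank using tokens on *both* sides of the mask.
fill = pipeline("fill-mask", model="bert-base-uncased")
for pred in fill("The encoder reads the [MASK] sentence at once.")[:3]:
    print(f"{pred['token_str']!r}  score={pred['score']:.3f}")
```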

GPT (Generative Pre-trained Transformer)

Decoder-only architecture optimized for autoregressive text generation, forming the basis of large language models.

5 terms
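
The defining ingredient is the causal mask. A minimal NumPy sketch, with projections omitted for brevity and all names illustrative:

```python
import numpy as np

def causal_self_attention(X):
    """Decoder-only attention: each position may look only at itself and the past."""
    T, d = X.shape
    scores = X @ X.T / np.sqrt(d)
    mask = np.triu(np.ones((T, T), dtype=bool), k=1)   # True above the diagonal
    scores[mask] = -np.inf                             # block attention to future tokens
    w = np.exp(scores - scores.max(-1, keepdims=True))
    w /= w.sum(-1, keepdims=True)
    return w @ X

X = np.random.default_rng(0).normal(size=(4, 8))       # 4 tokens, dim 8
print(causal_self_attention(X).shape)                  # (4, 8)
```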

Vision Transformers (ViT)

Application of transformer architectures to image processing by dividing images into patches and treating them as sequences.

11 terms
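
The patch-to-token step is easy to sketch in NumPy; the patch size and image shape below are arbitrary illustrative choices:

```python
import numpy as np

def patchify(image, patch=4):
    """Cut an image into non-overlapping patches and flatten each into a 'token'."""
    H, W, C = image.shape
    img = image.reshape(H // patch, patch, W // patch, patch, C)
    img = img.transpose(0, 2, 1, 3, 4)                 # group pixels by patch
    return img.reshape(-1, patch * patch * C)          # (num_patches, patch_dim)

image = np.random.default_rng(0).random((32, 32, 3))
tokens = patchify(image)                               # 64 patches of dim 48
print(tokens.shape)                                    # (64, 48): a sequence for a transformer
```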

Sparse Attention Mechanisms

Attention variants that reduce computational complexity by limiting the connections between sequence elements.

2 terms
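
One common sparse pattern is a sliding window, as popularized by models such as Longformer; a minimal NumPy sketch of the mask (names illustrative):

```python
import numpy as np

def sliding_window_mask(T, window=2):
    """Each token attends only to tokens within `window` positions of itself,
    shrinking the dense T*T score matrix to O(T * window) useful entries."""
    idx = np.arange(T)
    return np.abs(idx[:, None] - idx[None, :]) <= window   # True = connection kept

print(sliding_window_mask(6, window=1).astype(int))
# Prints a banded matrix: row 0 is 1 1 0 0 0 0, row 1 is 1 1 1 0 0 0, and so on.
```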

Cross-Attention

Attention mechanism where queries come from one sequence while keys and values come from a different sequence.

2 terms
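
A minimal NumPy sketch, with the query sequence standing in for, say, decoder states and the key/value sequence for encoder memory (names illustrative):

```python
import numpy as np

def cross_attention(Q_seq, KV_seq, Wq, Wk, Wv):
    """Queries from one sequence; keys and values from another."""
    Q = Q_seq @ Wq                          # e.g. decoder states asking questions
    K, V = KV_seq @ Wk, KV_seq @ Wv         # e.g. encoder memory being consulted
    scores = Q @ K.T / np.sqrt(K.shape[-1])
    w = np.exp(scores - scores.max(-1, keepdims=True))
    w /= w.sum(-1, keepdims=True)
    return w @ V                            # one output per *query* token

rng = np.random.default_rng(0)
dec, enc = rng.normal(size=(3, 8)), rng.normal(size=(10, 8))
Wq, Wk, Wv = (rng.normal(size=(8, 8)) for _ in range(3))
print(cross_attention(dec, enc, Wq, Wk, Wv).shape)   # (3, 8): query length, not key length
```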

Transformer Scaling Laws

Empirical principles describing how transformer performance evolves with model size, data, and computation.

18 terms
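
As an illustration, the parameter-count power law reported by Kaplan et al. (2020) has the form L(N) = (N_c / N)^alpha; the constants below are that paper's rough fits and should be treated as illustrative, not exact:

```python
# Power-law loss curve in the style of Kaplan et al. (2020):
# bigger models (more parameters N) yield smoothly lower predicted loss.
def loss_from_params(n_params, n_c=8.8e13, alpha=0.076):
    return (n_c / n_params) ** alpha

for n in (1e8, 1e9, 1e10, 1e11):
    print(f"N={n:.0e}  predicted loss ~ {loss_from_params(n):.3f}")
```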

Attention Head Analysis

Study of the specialized roles of different attention heads in transformers to understand their internal functioning.

19 terms
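
One simple diagnostic is per-head attention entropy: focused heads score low, diffuse averaging heads score high. A minimal NumPy sketch on random weights (names illustrative):

```python
import numpy as np

def head_entropies(attn):
    """attn: (n_heads, T, T) attention weights with rows summing to 1.
    Returns the mean row entropy per head as a rough focus/diffusion score."""
    p = np.clip(attn, 1e-12, 1.0)
    return -(p * np.log(p)).sum(-1).mean(-1)

rng = np.random.default_rng(0)
raw = rng.random((4, 6, 6))
attn = raw / raw.sum(-1, keepdims=True)    # normalize rows like softmax output
print(head_entropies(attn))                # one diagnostic number per head
```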

Hierarchical Attention

Attention architecture organized across multiple levels to process complex, structured data.

9 terms
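
A minimal NumPy sketch in the spirit of Hierarchical Attention Networks (Yang et al., 2016): attention pools words into sentence vectors, then sentences into a document vector; all names and shapes are illustrative:

```python
import numpy as np

def attend(H, w):
    """Simple pooling: score each row of H against vector w, softmax, average."""
    s = H @ w
    a = np.exp(s - s.max()); a /= a.sum()
    return a @ H                                        # weighted average of the rows

rng = np.random.default_rng(0)
doc = rng.normal(size=(3, 5, 8))                        # 3 sentences x 5 words x dim 8
w_word, w_sent = rng.normal(size=8), rng.normal(size=8)

sent_vecs = np.stack([attend(s, w_word) for s in doc])  # level 1: words -> sentence
doc_vec = attend(sent_vecs, w_sent)                     # level 2: sentences -> document
print(doc_vec.shape)                                    # (8,)
```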