
AI Glossary

The complete dictionary of artificial intelligence

162 categories · 2,032 subcategories · 23,060 terms

Bidirectional Encoder

Component that processes the entire input sequence simultaneously, allowing each token to attend to all other tokens, both past and future, for complete contextual understanding.
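A minimal sketch of the idea with numpy: in unmasked (bidirectional) self-attention, the score matrix covers every token pair, so each position's output mixes information from the whole sequence. The weight matrices and sizes here are illustrative, not from any specific model.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def bidirectional_self_attention(x, Wq, Wk, Wv):
    """Every position attends to every other position: no mask is applied."""
    q, k, v = x @ Wq, x @ Wk, x @ Wv
    scores = q @ k.T / np.sqrt(k.shape[-1])   # (seq, seq): all token pairs
    weights = softmax(scores, axis=-1)        # each row sums to 1
    return weights @ v

rng = np.random.default_rng(0)
x = rng.normal(size=(5, 8))                   # 5 tokens, model dim 8
W = [rng.normal(size=(8, 8)) for _ in range(3)]
out = bidirectional_self_attention(x, *W)     # shape (5, 8)
```

Because no mask is applied, even the first token's output depends on the last token, which is what "past and future" context means here.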


Autoregressive Decoder

Generation mechanism where the decoder produces the output sequence token by token, based solely on previously generated tokens and the encoder's contextual representation.
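The token-by-token loop can be sketched as greedy decoding; `step_logits_fn` stands in for a real decoder forward pass and the toy scoring rule is purely illustrative.

```python
import numpy as np

def greedy_decode(step_logits_fn, bos_id, eos_id, max_len=10):
    """Generate one token at a time; each step sees only prior tokens."""
    tokens = [bos_id]
    for _ in range(max_len):
        logits = step_logits_fn(tokens)     # scores for the next token
        next_id = int(np.argmax(logits))
        tokens.append(next_id)
        if next_id == eos_id:
            break
    return tokens

# Toy stand-in for the decoder: always predicts (last token + 1) mod 5.
def toy_step(tokens):
    logits = np.zeros(5)
    logits[(tokens[-1] + 1) % 5] = 1.0
    return logits

seq = greedy_decode(toy_step, bos_id=0, eos_id=4)  # → [0, 1, 2, 3, 4]
```

In a real encoder–decoder model, `step_logits_fn` would also condition on the encoder's contextual representation via cross-attention.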


Cross-Attention Mechanism

Process in the decoder that allows it to focus on specific parts of the encoder's output, weighting the importance of each input token for generating the current output token.
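A sketch of the key asymmetry, assuming toy sizes: queries come from the decoder side, while keys and values come from the encoder output, so the attention weights express how much each input token matters for the current output token.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(dec_x, enc_out, Wq, Wk, Wv):
    """Queries from the decoder; keys and values from the encoder output."""
    q = dec_x @ Wq                            # (tgt_len, d)
    k, v = enc_out @ Wk, enc_out @ Wv         # (src_len, d)
    weights = softmax(q @ k.T / np.sqrt(k.shape[-1]), axis=-1)
    return weights @ v, weights               # weights: (tgt_len, src_len)

rng = np.random.default_rng(1)
dec_x = rng.normal(size=(3, 8))               # 3 target positions
enc_out = rng.normal(size=(6, 8))             # 6 source tokens
W = [rng.normal(size=(8, 8)) for _ in range(3)]
out, w = cross_attention(dec_x, enc_out, *W)
```

Each row of `w` is a distribution over the six input tokens: the per-token importance weighting the definition describes.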


Causal Masking

Technique applied in the decoder to prevent each position from attending to future positions, thus ensuring the autoregressive nature of generation and preventing information leakage.
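A minimal numpy sketch: adding `-inf` above the diagonal before the softmax zeroes out attention to future positions.

```python
import numpy as np

def causal_mask(seq_len):
    """Position i may only attend to positions <= i (strict upper triangle masked)."""
    return np.triu(np.full((seq_len, seq_len), -np.inf), k=1)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

scores = np.zeros((4, 4)) + causal_mask(4)    # uniform scores, then mask
weights = softmax(scores, axis=-1)
# First row attends only to itself; last row attends to all four positions.
```

The `-inf` entries become exactly zero after the softmax, so no probability mass, and therefore no information, can flow from future tokens.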


Feed-Forward Network

Fully connected neural network applied to each position independently after the attention mechanism, enabling nonlinear transformation and higher-dimensional projection.
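A sketch with illustrative sizes (the 4× expansion factor follows the original Transformer, but is a hyperparameter): the same two linear maps are applied to every position, with no interaction between positions.

```python
import numpy as np

def position_wise_ffn(x, W1, b1, W2, b2):
    """Per-position: project up, apply a nonlinearity, project back down."""
    hidden = np.maximum(0.0, x @ W1 + b1)     # ReLU in the wider space
    return hidden @ W2 + b2

rng = np.random.default_rng(2)
d_model, d_ff = 8, 32                         # d_ff = 4 * d_model
x = rng.normal(size=(5, d_model))
W1, b1 = rng.normal(size=(d_model, d_ff)), np.zeros(d_ff)
W2, b2 = rng.normal(size=(d_ff, d_model)), np.zeros(d_model)
out = position_wise_ffn(x, W1, b1, W2, b2)    # shape (5, 8)
```

"Applied independently" means running the FFN on a single row gives the same result as that row of the batched output, unlike attention, where positions interact.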


Layer Normalization

Normalization technique that stabilizes activations by normalizing the features of each individual example, accelerating convergence and improving overall model performance.
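A compact numpy sketch: the mean and variance are computed over the feature axis of each example separately, then a learned scale and shift (`gamma`, `beta`) are applied.

```python
import numpy as np

def layer_norm(x, gamma, beta, eps=1e-5):
    """Normalize over the feature axis, independently per example."""
    mean = x.mean(axis=-1, keepdims=True)
    var = x.var(axis=-1, keepdims=True)
    return gamma * (x - mean) / np.sqrt(var + eps) + beta

rng = np.random.default_rng(3)
x = rng.normal(loc=5.0, scale=3.0, size=(4, 8))   # shifted, scaled inputs
y = layer_norm(x, gamma=np.ones(8), beta=np.zeros(8))
# Each row of y now has (approximately) zero mean and unit variance.
```

Because the statistics are per-example rather than per-batch, this behaves identically at training and inference time, unlike batch normalization.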


Encoder Bottleneck

Fixed-dimensional vector representation, often the final output of the encoder, that condenses all information from the input sequence and serves as the sole context for the decoder during generation.
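The essential property can be sketched with mean pooling (classic seq2seq models instead used the encoder's final hidden state): sequences of any length are condensed into a vector of one fixed size.

```python
import numpy as np

def encode_to_bottleneck(enc_states):
    """Condense a variable-length sequence of states into one fixed-size
    context vector (mean pooling here, for illustration)."""
    return enc_states.mean(axis=0)

rng = np.random.default_rng(4)
short = rng.normal(size=(3, 8))               # 3-token input
long = rng.normal(size=(50, 8))               # 50-token input
# Both map to the same fixed dimensionality, regardless of input length.
```

This fixed size is exactly why it is called a bottleneck: long inputs must be squeezed into the same capacity as short ones, a limitation that attention over all encoder states later relaxed.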


Token Embeddings

High-dimensional dense vectors that represent each discrete token from the vocabulary in a continuous space, capturing semantic and syntactic information learned during training.
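Mechanically, an embedding layer is just a learned lookup table; the vocabulary size and dimensionality below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(5)
vocab_size, d_model = 100, 16
# One learned dense vector per discrete token id.
embedding_table = rng.normal(size=(vocab_size, d_model))

token_ids = np.array([3, 17, 3])              # a short token-id sequence
vectors = embedding_table[token_ids]          # lookup → shape (3, 16)
# The same id always maps to the same vector.
```

During training, gradients flow into the rows of this table, which is how the vectors come to encode semantic and syntactic regularities.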
