
AI Glossary

The complete dictionary of artificial intelligence

162 categories · 2,032 subcategories · 23,060 terms

Self-Attention

Fundamental mechanism that lets a transformer dynamically compute the relative importance of each element in a sequence with respect to every other element.

2 terms
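A minimal NumPy sketch of scaled dot-product self-attention: scores between every pair of positions are softmax-normalized into weights that mix the value vectors. Weights and dimensions here are random and illustrative.

```python
import numpy as np

def self_attention(x, w_q, w_k, w_v):
    """Scaled dot-product self-attention over one sequence.

    x: (seq_len, d_model); w_q/w_k/w_v: (d_model, d_k) projections.
    """
    q, k, v = x @ w_q, x @ w_k, x @ w_v
    scores = q @ k.T / np.sqrt(k.shape[-1])           # pairwise relevance
    e = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights = e / e.sum(axis=-1, keepdims=True)       # softmax per row
    return weights @ v                                # weighted mix of values

rng = np.random.default_rng(0)
x = rng.normal(size=(5, 8))                           # 5 tokens, d_model = 8
w = [rng.normal(size=(8, 4)) for _ in range(3)]
out = self_attention(x, *w)
print(out.shape)  # (5, 4)
```

Each output row is a context-dependent blend of all five input positions, which is exactly the "dynamic relative importance" the definition describes.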

Multi-Head Attention

Extension of self-attention where multiple attention heads operate in parallel to capture different types of relationships in the data.

4 terms
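The parallel heads can be sketched by splitting the model dimension into per-head slices, attending in each slice independently, then concatenating and mixing. All weights below are random placeholders, not a trained model.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def multi_head_attention(x, n_heads, w_q, w_k, w_v, w_o):
    """Split d_model into n_heads sub-spaces, attend in each, recombine."""
    seq_len, d_model = x.shape
    d_head = d_model // n_heads
    q, k, v = x @ w_q, x @ w_k, x @ w_v
    # (n_heads, seq_len, d_head): each head sees its own slice
    split = lambda t: t.reshape(seq_len, n_heads, d_head).transpose(1, 0, 2)
    q, k, v = split(q), split(k), split(v)
    scores = q @ k.transpose(0, 2, 1) / np.sqrt(d_head)
    heads = softmax(scores) @ v                       # attend per head
    concat = heads.transpose(1, 0, 2).reshape(seq_len, d_model)
    return concat @ w_o                               # mix head outputs

rng = np.random.default_rng(1)
x = rng.normal(size=(5, 8))
ws = [rng.normal(size=(8, 8)) for _ in range(4)]
y = multi_head_attention(x, 2, *ws)
print(y.shape)  # (5, 8)
```

Because each head attends in its own subspace, different heads are free to specialize in different relationships, at the same total cost as one full-width head.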

Positional Encoding

Technique that incorporates sequential position information into embeddings to compensate for the absence of recurrence in transformers.

6 terms
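The fixed sinusoidal variant from the original transformer paper can be sketched as follows; each position receives a unique pattern of sines and cosines that is added to the token embeddings.

```python
import numpy as np

def sinusoidal_encoding(seq_len, d_model):
    """PE[pos, 2i] = sin(pos / 10000^(2i/d)), PE[pos, 2i+1] = cos(...)."""
    pos = np.arange(seq_len)[:, None]
    i = np.arange(d_model // 2)[None, :]
    angle = pos / 10000 ** (2 * i / d_model)
    pe = np.zeros((seq_len, d_model))
    pe[:, 0::2] = np.sin(angle)   # even dimensions: sine
    pe[:, 1::2] = np.cos(angle)   # odd dimensions: cosine
    return pe

pe = sinusoidal_encoding(50, 16)
print(pe.shape)  # (50, 16); added to embeddings before the first layer
```

Learned positional embeddings are a common alternative; the sinusoidal form needs no parameters and extrapolates to unseen positions.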

Encoder-Decoder Architecture

Fundamental structure of the original transformer, combining an encoder that processes the input with a decoder that generates the output.

8 terms

BERT (Bidirectional Encoder Representations)

Family of pre-trained models based on the encoder-only architecture with bidirectional context understanding.

10 terms

GPT (Generative Pre-trained Transformer)

Decoder-only architecture optimized for autoregressive text generation, forming the basis of large language models.

5 terms
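The autoregressive property comes from a causal mask: a sketch (random weights, illustrative sizes) where each token is forbidden from attending to later positions.

```python
import numpy as np

def causal_self_attention(x, w_q, w_k, w_v):
    """Decoder-only attention: an upper-triangular mask of -inf stops each
    token from attending to positions after it."""
    q, k, v = x @ w_q, x @ w_k, x @ w_v
    scores = q @ k.T / np.sqrt(k.shape[-1])
    future = np.triu(np.ones_like(scores, dtype=bool), k=1)
    scores[future] = -np.inf                      # zero weight on the future
    e = np.exp(scores - scores.max(axis=-1, keepdims=True))
    attn = e / e.sum(axis=-1, keepdims=True)
    return attn, attn @ v

rng = np.random.default_rng(2)
x = rng.normal(size=(4, 6))
ws = [rng.normal(size=(6, 6)) for _ in range(3)]
attn, out = causal_self_attention(x, *ws)
print(np.allclose(np.triu(attn, k=1), 0))  # True: no weight on future tokens
```

At generation time this lets the model emit one token at a time, each conditioned only on what came before it.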

Vision Transformers (ViT)

Application of transformer architectures to image processing by dividing images into patches and treating them as sequences.

11 terms
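The patch-extraction step can be sketched with a reshape: the image becomes a short sequence of flattened patch vectors, which the transformer then treats like token embeddings. Sizes here are illustrative.

```python
import numpy as np

def patchify(image, patch):
    """Cut an (H, W, C) image into non-overlapping patch x patch squares and
    flatten each into one vector — the 'tokens' a ViT attends over."""
    h, w, c = image.shape
    assert h % patch == 0 and w % patch == 0
    x = image.reshape(h // patch, patch, w // patch, patch, c)
    x = x.transpose(0, 2, 1, 3, 4)              # group by patch-grid position
    return x.reshape(-1, patch * patch * c)     # (n_patches, patch_dim)

img = np.arange(32 * 32 * 3, dtype=float).reshape(32, 32, 3)
tokens = patchify(img, 8)
print(tokens.shape)  # (16, 192): a 4x4 grid of 8x8x3 patches
```

A real ViT then projects each patch vector to the model dimension and prepends a class token; here only the sequence-of-patches idea is shown.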

Sparse Attention Mechanisms

Attention variants that reduce computational complexity by limiting the connections between sequence elements.

2 terms
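One common sparsity pattern is the sliding window, sketched here as a boolean mask: token i may only attend to tokens within a fixed distance, cutting the quadratic score matrix down to O(n·window) useful entries.

```python
import numpy as np

def local_attention_mask(seq_len, window):
    """Sliding-window sparsity: allow attention only where |i - j| <= window."""
    idx = np.arange(seq_len)
    return np.abs(idx[:, None] - idx[None, :]) <= window

mask = local_attention_mask(6, 1)
print(mask.sum())  # 16 allowed pairs instead of the full 36
```

Other sparse patterns (strided, global tokens, block-sparse) follow the same recipe with a different mask.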

Cross-Attention

Attention mechanism where queries come from one sequence while keys and values come from a different sequence.

2 terms
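A sketch of the key asymmetry (random weights, illustrative sizes): queries are projected from one sequence, keys and values from another, so the output has one row per query position.

```python
import numpy as np

def cross_attention(q_seq, kv_seq, w_q, w_k, w_v):
    """Queries from one sequence (e.g. decoder states), keys/values from
    another (e.g. encoder output) — the bridge in encoder-decoder models."""
    q = q_seq @ w_q
    k, v = kv_seq @ w_k, kv_seq @ w_v
    scores = q @ k.T / np.sqrt(k.shape[-1])
    e = np.exp(scores - scores.max(axis=-1, keepdims=True))
    return (e / e.sum(axis=-1, keepdims=True)) @ v

rng = np.random.default_rng(3)
dec = rng.normal(size=(3, 8))     # 3 decoder positions
enc = rng.normal(size=(7, 8))     # 7 encoder positions
ws = [rng.normal(size=(8, 4)) for _ in range(3)]
out = cross_attention(dec, enc, *ws)
print(out.shape)  # (3, 4): one encoder summary per decoder position
```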

Transformer Scaling Laws

Empirical principles describing how transformer performance evolves with model size, data, and computation.

18 terms
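These laws typically take a power-law form such as L(N) = (N_c / N)^α for loss as a function of parameter count N. A sketch with purely illustrative constants (not fitted values from any paper):

```python
def power_law_loss(n_params, n_c, alpha):
    """L(N) = (N_c / N)^alpha — the power-law shape scaling laws fit to
    empirical loss curves. n_c and alpha here are illustrative only."""
    return (n_c / n_params) ** alpha

alpha, n_c = 0.07, 1e13  # illustrative constants
for n in (1e8, 1e9, 1e10):
    print(f"N={n:.0e}  L={power_law_loss(n, n_c, alpha):.3f}")
```

The key consequence of this shape: every doubling of model size multiplies loss by the same fixed factor 2^-α, which is why loss curves look like straight lines on log-log axes. Analogous laws hold for dataset size and compute budget.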

Attention Head Analysis

Study of the specialized roles of different attention heads in transformers to understand their internal functioning.

19 terms

Hierarchical Attention

Attention architecture organized across multiple levels to process complex structured data.

9 terms