
AI Glossary

The complete dictionary of Artificial Intelligence

162 categories · 2,032 subcategories · 23,060 terms
📂 Subcategories

Self-Attention

Fundamental mechanism that lets a transformer dynamically compute the relative importance of each element in a sequence with respect to every other element.

2 terms
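
As a minimal sketch of the mechanism, here is scaled dot-product self-attention in NumPy; the projection matrices Wq, Wk, Wv and the toy dimensions are illustrative assumptions, not part of this entry.

```python
import numpy as np

def self_attention(X, Wq, Wk, Wv):
    """Scaled dot-product self-attention over one sequence X of shape (seq_len, d_model)."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv            # project the same sequence three ways
    scores = Q @ K.T / np.sqrt(K.shape[-1])     # pairwise relevance of every position to every other
    w = np.exp(scores - scores.max(-1, keepdims=True))
    w /= w.sum(-1, keepdims=True)               # row-wise softmax: one importance distribution per position
    return w @ V                                # each output is a relevance-weighted mix of all positions

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 16))                    # toy sequence: 5 tokens, d_model = 16
Wq, Wk, Wv = (rng.normal(size=(16, 16)) for _ in range(3))
out = self_attention(X, Wq, Wk, Wv)             # shape (5, 16)
```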

Multi-Head Attention

Extension of self-attention where multiple attention heads operate in parallel to capture different types of relationships in the data.

4 terms
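
A hedged NumPy sketch of the parallel-heads idea: the model dimension is split into per-head slices, each head attends independently, and an assumed output matrix Wo merges the results.

```python
import numpy as np

def softmax(s):
    e = np.exp(s - s.max(-1, keepdims=True))
    return e / e.sum(-1, keepdims=True)

def multi_head_attention(X, Wq, Wk, Wv, Wo, n_heads):
    """Run n_heads attention computations in parallel, then merge them with Wo."""
    seq, d = X.shape
    dh = d // n_heads                                        # per-head dimension
    # project once, then reshape so each head sees its own dh-dimensional slice
    Q = (X @ Wq).reshape(seq, n_heads, dh).transpose(1, 0, 2)
    K = (X @ Wk).reshape(seq, n_heads, dh).transpose(1, 0, 2)
    V = (X @ Wv).reshape(seq, n_heads, dh).transpose(1, 0, 2)
    w = softmax(Q @ K.transpose(0, 2, 1) / np.sqrt(dh))      # (heads, seq, seq)
    heads = (w @ V).transpose(1, 0, 2).reshape(seq, d)       # concatenate head outputs
    return heads @ Wo                                        # final mixing projection
```

Because each head works in its own subspace, different heads are free to specialize, e.g. one tracking nearby tokens and another long-range dependencies.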

Positional Encoding

Technique that incorporates sequential position information into embeddings to compensate for the absence of recurrence in transformers.

6 terms
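
One concrete instance is the fixed sinusoidal encoding from the original transformer paper, sketched here in NumPy; it is added element-wise to the token embeddings.

```python
import numpy as np

def sinusoidal_encoding(seq_len, d_model):
    """Fixed sinusoidal encodings from 'Attention Is All You Need' (d_model must be even)."""
    pos = np.arange(seq_len)[:, None]                  # position index
    i = np.arange(d_model // 2)[None, :]               # frequency index
    angles = pos / np.power(10000.0, 2 * i / d_model)  # geometric ladder of wavelengths
    pe = np.zeros((seq_len, d_model))
    pe[:, 0::2] = np.sin(angles)                       # even dimensions: sine
    pe[:, 1::2] = np.cos(angles)                       # odd dimensions: cosine
    return pe

# The encoding is simply added to the token embeddings:
# embeddings = token_embeddings + sinusoidal_encoding(seq_len, d_model)
```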

Encoder-Decoder Architecture

Fundamental structure of the original transformer, combining an encoder that processes the input with a decoder that generates the output.

8 terms
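
A minimal sketch using PyTorch's built-in nn.Transformer; all sizes here are toy values chosen for illustration.

```python
import torch
import torch.nn as nn

# PyTorch's built-in encoder-decoder transformer; all sizes are toy values.
model = nn.Transformer(d_model=64, nhead=4,
                       num_encoder_layers=2, num_decoder_layers=2,
                       batch_first=True)

src = torch.randn(1, 10, 64)   # input sequence, consumed by the encoder
tgt = torch.randn(1, 7, 64)    # partially generated sequence, consumed by the decoder
# causal mask so each target position only sees earlier target positions
tgt_mask = model.generate_square_subsequent_mask(7)

out = model(src, tgt, tgt_mask=tgt_mask)  # (1, 7, 64): decoder states after attending over the encoder output
```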

BERT (Bidirectional Encoder Representations)

Family of pre-trained models based on the encoder-only architecture with bidirectional context understanding.

10 terms
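
For illustration, a masked-token prediction with the Hugging Face transformers library (assuming that library and the bert-base-uncased checkpoint are available); the encoder uses context on both sides of the mask.

```python
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")

# Bidirectional context: the encoder sees the tokens on both sides of [MASK].
inputs = tok("The capital of France is [MASK].", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

mask_pos = (inputs.input_ids == tok.mask_token_id).nonzero()[0, 1]
print(tok.decode(logits[0, mask_pos].argmax()))  # typically "paris"
```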

GPT (Generative Pre-trained Transformer)

Decoder-only architecture optimized for autoregressive text generation, forming the basis of large language models.

5 terms
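
A NumPy sketch of the causal (decoder-only) attention that makes autoregressive generation possible; the masking, not the specific matrices, is the point.

```python
import numpy as np

def causal_self_attention(X, Wq, Wk, Wv):
    """Decoder-only attention: position i may only attend to positions <= i."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])
    scores[np.triu(np.ones(scores.shape, dtype=bool), k=1)] = -np.inf  # hide future tokens
    w = np.exp(scores - scores.max(-1, keepdims=True))
    w /= w.sum(-1, keepdims=True)
    return w @ V

# Generation then proceeds autoregressively: predict token t+1 from positions
# 0..t, append it to the sequence, and run the model again.
```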

Vision Transformers (ViT)

Application of transformer architectures to image processing by dividing images into patches and treating them as sequences.

11 terms
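
A NumPy sketch of the patching step this entry describes; the 16-pixel patch size matches ViT-Base, but the helper itself is an illustrative assumption.

```python
import numpy as np

def image_to_patches(img, patch=16):
    """Split an image of shape (H, W, C) into a sequence of flattened patches."""
    H, W, C = img.shape
    gh, gw = H // patch, W // patch
    x = img[:gh * patch, :gw * patch]                        # drop any remainder pixels
    x = x.reshape(gh, patch, gw, patch, C)
    x = x.transpose(0, 2, 1, 3, 4).reshape(gh * gw, patch * patch * C)
    return x                                                 # (num_patches, patch_dim)

seq = image_to_patches(np.zeros((224, 224, 3)))              # (196, 768), as in ViT-Base
# Each patch is then linearly projected and fed to a standard transformer encoder,
# exactly as if it were a token embedding.
```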

Sparse Attention Mechanisms

Attention variants that reduce computational complexity by restricting which sequence elements are allowed to attend to one another.

2 terms
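
One common sparsity pattern, a sliding window as used in models like Longformer, sketched as a boolean mask in NumPy.

```python
import numpy as np

def sliding_window_mask(seq_len, window=4):
    """Local sparsity: each position may attend only to positions within +/- window."""
    idx = np.arange(seq_len)
    return np.abs(idx[:, None] - idx[None, :]) <= window   # True where attention is allowed

# Each row now has O(window) admissible scores instead of O(seq_len),
# cutting attention cost from O(n^2) toward O(n * window).
mask = sliding_window_mask(1024, window=4)
```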

Cross-Attention

Attention mechanism where queries come from one sequence while keys and values come from a different sequence.

2 terms
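
A NumPy sketch; the only difference from self-attention is that queries are projected from one sequence while keys and values come from another, e.g. decoder states attending over encoder output.

```python
import numpy as np

def cross_attention(Xq, Xkv, Wq, Wk, Wv):
    """Queries from one sequence (Xq); keys and values from another (Xkv)."""
    Q = Xq @ Wq                                  # e.g. decoder states
    K, V = Xkv @ Wk, Xkv @ Wv                    # e.g. encoder output
    scores = Q @ K.T / np.sqrt(K.shape[-1])      # (len_q, len_kv)
    w = np.exp(scores - scores.max(-1, keepdims=True))
    w /= w.sum(-1, keepdims=True)
    return w @ V                                 # each query position mixes the other sequence
```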

Transformer Scaling Laws

Empirical principles describing how transformer performance evolves with model size, data, and computation.

18 terms
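
The best-known empirical form is the power-law family of Kaplan et al. (2020), sketched below; the constants N_c, D_c, C_c and the exponents are fitted from experiments, so the symbols are placeholders rather than fixed values.

```latex
% Loss as a power law in parameters N, dataset size D, and compute C
% (Kaplan et al., 2020); N_c, D_c, C_c and the exponents are fitted constants.
L(N) \approx \left(\tfrac{N_c}{N}\right)^{\alpha_N}, \qquad
L(D) \approx \left(\tfrac{D_c}{D}\right)^{\alpha_D}, \qquad
L(C) \approx \left(\tfrac{C_c}{C}\right)^{\alpha_C}
```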

Attention Head Analysis

Study of the specialized roles of different attention heads in transformers in order to understand a model's internal functioning.

19 terms
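
In practice such studies start from the raw attention maps; a sketch using the Hugging Face transformers library (an assumed dependency) shows how to extract them.

```python
import torch
from transformers import AutoTokenizer, AutoModel

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

inputs = tok("Attention heads often specialize.", return_tensors="pt")
with torch.no_grad():
    out = model(**inputs, output_attentions=True)

# out.attentions holds one (batch, heads, seq, seq) tensor per layer; these maps
# are the raw material for head analysis, e.g. spotting heads that track syntax
# or that always attend to [SEP].
print(out.attentions[0].shape)   # torch.Size([1, 12, seq_len, seq_len]) for BERT-base
```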

Hierarchical Attention

Attention architecture organized across multiple levels to process complex, structured data.

9 terms
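
A rough NumPy sketch loosely following the Hierarchical Attention Network idea (Yang et al., 2016): word-level attention builds sentence vectors, then sentence-level attention builds a document vector. The pooling function here is a simplified assumption.

```python
import numpy as np

def attend(H, w):
    """Simplified attention pooling: score each row of H, return the weighted sum."""
    scores = np.tanh(H) @ w
    a = np.exp(scores - scores.max())
    a /= a.sum()
    return a @ H

rng = np.random.default_rng(0)
d = 8
w_word, w_sent = rng.normal(size=d), rng.normal(size=d)

# Level 1: attention over the words of each sentence yields sentence vectors.
doc = [rng.normal(size=(n, d)) for n in (5, 7, 4)]   # 3 sentences of word vectors
sent_vecs = np.stack([attend(s, w_word) for s in doc])

# Level 2: attention over the sentence vectors yields the document vector.
doc_vec = attend(sent_vecs, w_sent)                  # shape (d,)
```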