
AI Glossary

The complete dictionary of artificial intelligence

162
Categories
2,032
Subcategories
23,060
Terms
📖
Terms

Attention Head Analysis

The process of examining and interpreting the attention weights produced by each head to understand the specific patterns and relationships it has learned to capture.

📖
Terms

Head Specialization

Phenomenon where different attention heads in the same layer specialize to learn distinct types of linguistic relationships, such as syntax, semantics, or long-range dependencies.

📖
Terms

Attention Weight Matrix

Square matrix generated by an attention head, where each element (i, j) represents the importance or relevance score of token j for token i in the context of the sequence.
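A minimal sketch of how such a matrix arises from scaled dot-product attention. The query/key arrays and their sizes here are illustrative, not from any particular model:

```python
import numpy as np

def attention_weights(Q, K):
    """Compute the square attention weight matrix A, where A[i, j] is the
    softmax-normalized relevance score of token j for token i."""
    d_k = K.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)               # raw dot-product scores
    scores -= scores.max(axis=-1, keepdims=True)  # numerical stability
    exp = np.exp(scores)
    return exp / exp.sum(axis=-1, keepdims=True)  # each row sums to 1

rng = np.random.default_rng(0)
Q = rng.normal(size=(5, 8))   # 5 tokens, head dimension 8
K = rng.normal(size=(5, 8))
A = attention_weights(Q, K)
print(A.shape)       # (5, 5): one row of weights per query token
```

Because each row is a softmax, every entry is non-negative and each query token's weights sum to exactly 1.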

📖
Terms

Attention Map

Visualization of the attention weight matrix, often in the form of a heatmap, which graphically illustrates the focus relationships of an attention head on an input sequence.
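In place of a plotted heatmap, a text-based rendering shows the same idea; the glyph ramp and token labels are purely illustrative:

```python
import numpy as np

def render_attention_map(A, tokens, glyphs=" .:-=+*#%@"):
    """Render an attention weight matrix as a text heatmap: each cell is a
    glyph whose darkness grows with the weight (a stand-in for a heatmap)."""
    lines = []
    for tok, row in zip(tokens, A):
        cells = "".join(glyphs[min(int(w * len(glyphs)), len(glyphs) - 1)]
                        for w in row)
        lines.append(f"{tok:>6} |{cells}|")
    return "\n".join(lines)

A = np.array([[0.9, 0.05, 0.05],
              [0.1, 0.8,  0.1 ],
              [0.0, 0.2,  0.8 ]])
print(render_attention_map(A, ["the", "cat", "sat"]))
```

Rows are query tokens, columns are attended-to tokens, so a dark diagonal means each token mostly attends to itself.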

📖
Terms

Syntactic Role

Type of relationship, such as subject-verb agreement or the dependency between a noun and its adjective, that a specialized attention head can learn to detect and model.

📖
Terms

Positional Role

Function of an attention head that primarily focuses on relative positional relationships between tokens, helping the model understand word order regardless of their semantic content.

📖
Terms

Positional Head

Attention head whose attention weights reveal patterns strongly related to the relative distance between tokens, acting as a mechanism to encode sequential structure.

📖
Terms

Subword Head

Attention head specialized in managing relationships between word fragments (subwords) generated by tokenizers like BPE, helping to reconstruct lexical coherence.

📖
Terms

Retrieval Head

Attention head identified in large models that behaves as an information retrieval mechanism, strongly connecting to specific tokens that act as 'keys' for memorized knowledge.

📖
Terms

Head Redundancy

Observation that certain attention heads in an over-parameterized model learn very similar or identical functions, suggesting potential inefficiency in resource usage.
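A common heuristic for spotting redundancy is pairwise cosine similarity between the heads' flattened attention maps; the maps below are synthetic:

```python
import numpy as np

def head_similarity(maps):
    """Cosine similarity between the flattened attention maps of every pair
    of heads; values near 1.0 flag potentially redundant heads."""
    flat = maps.reshape(len(maps), -1)
    norm = flat / np.linalg.norm(flat, axis=1, keepdims=True)
    return norm @ norm.T

maps = np.stack([np.eye(3),            # head 0: diagonal pattern
                 np.eye(3),            # head 1: identical, hence redundant
                 np.ones((3, 3)) / 3]) # head 2: uniform pattern
S = head_similarity(maps)
print(round(S[0, 1], 2))   # 1.0: heads 0 and 1 are duplicates
```

High similarity across many inputs, not just one matrix, is what would actually justify calling two heads redundant.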

📖
Terms

Attention Head Pruning

Model compression technique that involves identifying and removing attention heads deemed redundant or unimportant to reduce model size and computational cost with minimal impact on performance.
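A sketch of the selection step, using zero-masking as a stand-in for structurally removing heads from the network; the scores are hypothetical:

```python
import numpy as np

def prune_heads(head_outputs, importance, keep):
    """Keep the `keep` highest-scoring heads and mask the rest to zero.
    head_outputs: (n_heads, seq_len, d_head); importance: (n_heads,)."""
    order = np.argsort(importance)[::-1]     # heads ranked best-first
    mask = np.zeros(len(importance))
    mask[order[:keep]] = 1.0
    return head_outputs * mask[:, None, None], mask

rng = np.random.default_rng(0)
outputs = rng.normal(size=(4, 5, 8))         # 4 heads, 5 tokens, d_head 8
scores = np.array([0.9, 0.1, 0.5, 0.05])     # hypothetical importance scores
pruned, mask = prune_heads(outputs, scores, keep=2)
print(mask)   # [1. 0. 1. 0.] -- heads 0 and 2 survive
```

In practice, pruning also drops the corresponding slices of the projection weights so the saved computation is real rather than just zeroed.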

📖
Terms

Head Importance Score

Quantitative metric, often derived from the sensitivity of the loss or model performance to the removal of a head, used to rank heads by their contribution to overall functioning.
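The leave-one-out variant of this metric can be sketched with a toy loss function (purely illustrative) that depends on a per-head mask:

```python
import numpy as np

def head_importance(loss_fn, n_heads):
    """Leave-one-out importance: how much the loss rises when each head
    is ablated (masked to zero) on its own."""
    full = np.ones(n_heads)
    base = loss_fn(full)
    scores = np.empty(n_heads)
    for h in range(n_heads):
        ablated = full.copy()
        ablated[h] = 0.0                  # ablate head h only
        scores[h] = loss_fn(ablated) - base
    return scores

# toy loss in which head 0 matters most and head 2 not at all
toy_loss = lambda m: 1.0 - 0.5 * m[0] - 0.1 * m[1]
scores = head_importance(toy_loss, 3)
print(scores)   # [0.5 0.1 0. ]
```

Gradient-based approximations are often preferred over this exhaustive loop when the model has hundreds of heads.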

📖
Terms

Head Induction Analysis

Methodology that involves training a simple supervised model (such as a linear classifier) on the outputs of an attention head to discover the underlying function that this head has learned to represent.

📖
Terms

Diagonal Attention Pattern

Attention weight pattern where a head focuses primarily on the current token's own position, often observed in lower layers to refine local representations.
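A simple heuristic score for this pattern is the fraction of attention mass on the main diagonal; the matrices here are synthetic:

```python
import numpy as np

def diagonal_score(A):
    """Fraction of total attention mass on the main diagonal;
    values near 1.0 indicate a self-attending (diagonal) head."""
    return np.trace(A) / A.sum()

A_diag = np.eye(4) * 0.9 + 0.025        # mostly self-attention
A_flat = np.full((4, 4), 0.25)          # uniform attention
print(round(diagonal_score(A_diag), 3))  # 0.925
print(round(diagonal_score(A_flat), 3))  # 0.25
```

Comparing the score against the uniform baseline (1/sequence_length) tells you whether the diagonal focus is above chance.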

📖
Terms

Vertical Attention Pattern

Pattern where an attention head focuses on a specific reference token (often the beginning-of-sequence token or a class marker) for all positions, aggregating information for a classification task.
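The analogous heuristic for this pattern averages the weight that every query position places on one reference column; the example matrix is synthetic:

```python
import numpy as np

def vertical_score(A, col=0):
    """Mean attention that all query positions place on one reference column
    (e.g. the BOS/[CLS] token); values near 1.0 flag a vertical head."""
    return A[:, col].mean()

A = np.zeros((4, 4))
A[:, 0] = 1.0                 # every position attends to the first token
print(vertical_score(A))      # 1.0
print(vertical_score(np.full((4, 4), 0.25)))   # 0.25: uniform baseline
```

As with the diagonal score, the value is only meaningful relative to the uniform baseline of 1/sequence_length.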

📖
Terms

Block Attention Pattern

Pattern where an attention head focuses on contiguous segments of the sequence, indicating specialization in processing local phrases or clauses.

📖
Terms

Translation Head

In multilingual models, an attention head that learns to align words and phrases between different languages, facilitating the transfer of linguistic knowledge.

📖
Terms

Multi-Head Attention Mechanism

Fundamental component of Transformers that executes multiple attention heads in parallel, concatenates their outputs and projects them to allow the model to focus on different positions and different representation spaces simultaneously.
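The project-split-attend-concatenate-project pipeline can be sketched end to end; dimensions and weight initialization below are illustrative, not tied to any published model:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def multi_head_attention(X, Wq, Wk, Wv, Wo, n_heads):
    """Minimal multi-head self-attention: project the inputs, split into
    heads, attend per head, concatenate, and project back."""
    seq, d_model = X.shape
    d_head = d_model // n_heads
    def heads(W):  # (seq, d_model) -> (n_heads, seq, d_head)
        return (X @ W).reshape(seq, n_heads, d_head).transpose(1, 0, 2)
    Q, K, V = heads(Wq), heads(Wk), heads(Wv)
    A = softmax(Q @ K.transpose(0, 2, 1) / np.sqrt(d_head))  # per-head weights
    O = A @ V                                        # (n_heads, seq, d_head)
    concat = O.transpose(1, 0, 2).reshape(seq, d_model)      # merge heads
    return concat @ Wo, A

rng = np.random.default_rng(0)
d_model, n_heads, seq = 8, 2, 5
X = rng.normal(size=(seq, d_model))
W = [rng.normal(size=(d_model, d_model)) * 0.1 for _ in range(4)]
out, A = multi_head_attention(X, *W, n_heads)
print(out.shape, A.shape)   # (5, 8) (2, 5, 5)
```

Each head receives only d_model / n_heads dimensions, so the total cost stays comparable to a single full-width head while allowing the heads to specialize.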

📖
Terms

Head Interpretability

Research field aimed at developing methods to understand, quantify and visualize the specific function of each attention head in order to demystify the internal workings of Transformer models.
