
AI Glossary

The complete dictionary of Artificial Intelligence

162 categories · 2,032 subcategories · 23,060 terms

Attention Head Analysis

Process of examining and interpreting the attention weights produced by each head to understand the specific patterns and relationships that each head has learned to capture.

Head Specialization

Phenomenon where different attention heads in the same layer specialize to learn distinct types of linguistic relationships, such as syntax, semantics, or long-range dependencies.

Attention Weight Matrix

Square matrix generated by an attention head, where each element (i, j) represents the importance or relevance score of token j for token i in the context of the sequence.
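As an illustrative sketch (not part of this glossary), the matrix can be computed in NumPy from per-head query and key vectors; the names `queries` and `keys` and the sizes are arbitrary:

```python
import numpy as np

def attention_weights(queries, keys):
    """Compute a (seq_len x seq_len) attention weight matrix.

    Element (i, j) is the softmax-normalized relevance of token j
    for token i, so each row sums to 1.
    """
    d_k = queries.shape[-1]
    scores = queries @ keys.T / np.sqrt(d_k)      # scaled dot-product scores
    scores -= scores.max(axis=-1, keepdims=True)  # for numerical stability
    exp = np.exp(scores)
    return exp / exp.sum(axis=-1, keepdims=True)

rng = np.random.default_rng(0)
Q = rng.normal(size=(5, 8))   # 5 tokens, head dimension 8
K = rng.normal(size=(5, 8))
A = attention_weights(Q, K)
print(A.shape)                # (5, 5): square in the sequence length
```

Note that the matrix is square in the sequence length, not the embedding dimension, and the softmax makes each row a probability distribution over the tokens attended to.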

Attention Map

Visualization of the attention weight matrix, often in the form of a heatmap, which graphically illustrates the focus relationships of an attention head on an input sequence.
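Attention maps are usually rendered as heatmaps (e.g., with matplotlib's `imshow`). As a dependency-free sketch of the same idea, a weight matrix can be rendered as text, with darker characters marking larger weights; the function name and shading scale here are illustrative:

```python
import numpy as np

def ascii_attention_map(weights, shades=" .:-=+*#%@"):
    """Render an attention weight matrix as a text heatmap.

    Darker characters mark higher weights; row i shows where
    token i directs its attention.
    """
    levels = len(shades) - 1
    scaled = (weights / weights.max() * levels).astype(int)
    return "\n".join("".join(shades[v] for v in row) for row in scaled)

# A sharp diagonal pattern: each token attends mostly to itself.
W = np.eye(4) * 0.9 + 0.025
print(ascii_attention_map(W))
```

The same rendering makes the common patterns described below (diagonal, vertical, block) easy to spot at a glance.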

Syntactic Role

Type of relationship, such as subject-verb binding or dependency between a noun and its adjective, that a specialized attention head can learn to detect and model.

Positional Role

Function of an attention head that primarily focuses on relative positional relationships between tokens, helping the model understand word order regardless of their semantic content.

Positional Head

Attention head whose attention weights reveal patterns strongly related to the relative distance between tokens, acting as a mechanism to encode sequential structure.

Subword Head

Attention head specialized in managing relationships between word fragments (subwords) generated by tokenizers like BPE, helping to reconstruct lexical coherence.

Retrieval Head

Attention head identified in large models that behaves as an information retrieval mechanism, strongly connecting to specific tokens that act as 'keys' for memorized knowledge.

Head Redundancy

Observation that certain attention heads in an over-parameterized model learn very similar or identical functions, suggesting potential inefficiency in resource usage.

Attention Head Pruning

Model compression technique that involves identifying and removing attention heads deemed redundant or unimportant to reduce model size and computational cost with minimal impact on performance.
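A minimal sketch of the idea, assuming per-head importance scores have already been computed (the function and array names are hypothetical); the lowest-scoring heads are zero-masked, which is structured pruning at head granularity:

```python
import numpy as np

def prune_heads(head_outputs, importance, keep_ratio=0.5):
    """Zero-mask the least important heads.

    head_outputs: array of shape (n_heads, seq_len, d_head)
    importance:   one score per head (higher = more useful)
    """
    n_heads = head_outputs.shape[0]
    n_keep = max(1, int(n_heads * keep_ratio))
    keep = np.argsort(importance)[-n_keep:]   # indices of heads to keep
    mask = np.zeros(n_heads)
    mask[keep] = 1.0
    return head_outputs * mask[:, None, None], sorted(keep.tolist())

outputs = np.ones((8, 4, 16))                 # 8 hypothetical heads
scores = np.array([0.9, 0.1, 0.4, 0.8, 0.05, 0.7, 0.2, 0.6])
pruned, kept = prune_heads(outputs, scores, keep_ratio=0.5)
print(kept)   # indices of the 4 highest-scoring heads
```

In practice the pruned heads' parameters would be removed entirely rather than masked, so that the saving shows up in model size and compute, not just in the outputs.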

Head Importance Score

Quantitative metric, often derived from the sensitivity of the loss or model performance to the removal of a head, used to rank heads by their contribution to overall functioning.
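One common ablation-based variant of such a score can be sketched as follows: ablate each head in turn and record the loss increase. The `loss_fn` here is a toy stand-in for a real model's forward pass, and all names are illustrative:

```python
import numpy as np

def head_importance(loss_fn, n_heads):
    """Score each head by the loss increase when it is ablated.

    loss_fn takes a binary mask over heads and returns a scalar loss.
    """
    base = loss_fn(np.ones(n_heads))
    scores = []
    for h in range(n_heads):
        mask = np.ones(n_heads)
        mask[h] = 0.0                 # ablate head h
        scores.append(loss_fn(mask) - base)
    return np.array(scores)

# Toy loss: heads contribute unequally, so ablation costs differ.
weights = np.array([0.5, 0.1, 0.3])
toy_loss = lambda mask: 1.0 - (mask * weights).sum()
print(head_importance(toy_loss, 3))
```

Gradient-based variants approximate the same quantity in a single backward pass instead of one forward pass per head.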

Head Induction Analysis

Methodology that involves training a simple supervised model (such as a linear classifier) on the outputs of an attention head to discover the underlying function that this head has learned to represent.

Diagonal Attention Pattern

Attention weight pattern where a head focuses primarily on each token's own position, often observed in lower layers to refine local representations.
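Such patterns can be quantified with a simple statistic; as an illustrative sketch (the score name is made up), the fraction of attention mass on the main diagonal distinguishes a diagonal head from a uniform one:

```python
import numpy as np

def diagonality(weights):
    """Mean attention weight that a token places on itself.

    Near 1.0 indicates a strongly diagonal pattern; near
    1/seq_len, roughly uniform attention.
    """
    return np.trace(weights) / weights.shape[0]

diag_head = np.eye(4) * 0.8 + 0.05     # each row sums to 1
uniform_head = np.full((4, 4), 0.25)
print(diagonality(diag_head))           # high: mostly self-focused
print(diagonality(uniform_head))        # low: spread evenly
```

Analogous statistics (column mass for vertical patterns, band mass for block patterns) are used to categorize the patterns described in the neighboring entries.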

Vertical Attention Pattern

Pattern where an attention head focuses on a specific reference token (often the beginning-of-sequence token or a class marker) for all positions, aggregating information for a classification task.

Block Attention Pattern

Pattern where an attention head focuses on contiguous segments of the sequence, indicating specialization in processing local phrases or clauses.

Translation Head

In multilingual models, an attention head that learns to align words and phrases between different languages, facilitating the transfer of linguistic knowledge.

Multi-Head Attention Mechanism

Fundamental component of Transformers that runs multiple attention heads in parallel, then concatenates and projects their outputs, allowing the model to focus on different positions and different representation subspaces simultaneously.
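The split-attend-concatenate-project pipeline can be sketched in a few lines of NumPy; this is a minimal self-attention version without masking or biases, with arbitrary parameter names and sizes:

```python
import numpy as np

def multi_head_attention(x, Wq, Wk, Wv, Wo, n_heads):
    """Minimal multi-head self-attention.

    x: (seq_len, d_model); Wq/Wk/Wv/Wo: (d_model, d_model).
    Splits the projections into n_heads, attends per head, then
    concatenates the heads and applies the output projection Wo.
    """
    seq_len, d_model = x.shape
    d_head = d_model // n_heads

    def split(t):  # (seq, d_model) -> (heads, seq, d_head)
        return t.reshape(seq_len, n_heads, d_head).transpose(1, 0, 2)

    q, k, v = split(x @ Wq), split(x @ Wk), split(x @ Wv)
    scores = q @ k.transpose(0, 2, 1) / np.sqrt(d_head)
    scores -= scores.max(axis=-1, keepdims=True)
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)
    heads = weights @ v                                 # (heads, seq, d_head)
    concat = heads.transpose(1, 0, 2).reshape(seq_len, d_model)
    return concat @ Wo

rng = np.random.default_rng(1)
d_model, seq_len = 16, 5
params = [rng.normal(size=(d_model, d_model)) * 0.1 for _ in range(4)]
out = multi_head_attention(rng.normal(size=(seq_len, d_model)), *params, n_heads=4)
print(out.shape)   # (5, 16): same shape as the input
```

Because each head operates on its own `d_head`-dimensional slice, the per-head weight matrices inside `weights` are exactly the kind of object the analysis techniques in this glossary inspect.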

Head Interpretability

Research field aimed at developing methods to understand, quantify and visualize the specific function of each attention head in order to demystify the internal workings of Transformer models.
