AI Glossary

The complete dictionary of artificial intelligence

162 Categories · 2,032 Subcategories · 23,060 Terms

Graph Transformer

Neural architecture combining Transformer attention mechanisms with graph structure to capture global and local dependencies in relational data.
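
A minimal NumPy sketch of the core idea, under toy assumptions (the `graph_transformer_attention` helper and the scalar `edge_bias` are illustrative, not from any particular library): attention is computed over all node pairs, and an additive bias on connected pairs lets local graph structure shape the otherwise global weights.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def graph_transformer_attention(H, A, Wq, Wk, Wv, edge_bias=1.0):
    # Global scaled dot-product attention over all node pairs,
    # plus an additive bias on directly connected pairs (local structure).
    Q, K, V = H @ Wq, H @ Wk, H @ Wv
    scores = Q @ K.T / np.sqrt(Q.shape[-1])   # (n, n) all-pairs scores
    scores = scores + edge_bias * A           # inject adjacency information
    return softmax(scores) @ V                # weighted mix of node values

# Toy graph: 4 nodes on a path 0-1-2-3.
A = np.array([[0,1,0,0],[1,0,1,0],[0,1,0,1],[0,0,1,0]], dtype=float)
rng = np.random.default_rng(0)
H = rng.normal(size=(4, 8))                   # initial node features
Wq, Wk, Wv = (rng.normal(size=(8, 8)) for _ in range(3))
print(graph_transformer_attention(H, A, Wq, Wk, Wv).shape)  # (4, 8)
```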

Graph attention

Mechanism adapted from Transformer attention that calculates relative importance between graph nodes while considering their structural connectivity.

Positional encoding for graphs

Encoding technique adapted to graphs that incorporates structural information such as distances, degrees, or paths to represent the relative positions of nodes.
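
One widely used concrete instance is the Laplacian eigenvector encoding. A hedged NumPy sketch (assumes a connected graph with no isolated nodes; real implementations also deal with the sign ambiguity of eigenvectors):

```python
import numpy as np

def laplacian_positional_encoding(A, k=2):
    # Eigenvectors of the normalized graph Laplacian with the smallest
    # nonzero eigenvalues serve as k-dimensional node "positions".
    deg = A.sum(axis=1)
    d_inv_sqrt = deg ** -0.5                   # assumes no isolated nodes
    L = np.eye(len(A)) - d_inv_sqrt[:, None] * A * d_inv_sqrt[None, :]
    _, eigvecs = np.linalg.eigh(L)             # columns sorted by eigenvalue
    return eigvecs[:, 1:k + 1]                 # skip the trivial first one

A = np.array([[0,1,0,0],[1,0,1,0],[0,1,0,1],[0,0,1,0]], dtype=float)
print(laplacian_positional_encoding(A, k=2))   # (4, 2) positional features
```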

Self-attention on nodes

Operation in which each graph node computes attention weights over all other nodes, including itself, to capture long-range dependencies.
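
In symbols, with the node features stacked as rows of H, this is the standard scaled dot-product form from the Transformer literature (W_Q, W_K, W_V are learned projections):

```latex
Q = HW_Q,\quad K = HW_K,\quad V = HW_V,\qquad
\mathrm{Attention}(Q, K, V) = \mathrm{softmax}\!\left(\frac{QK^{\top}}{\sqrt{d_k}}\right)V
```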

Graph Attention Network (GAT)

Pioneering architecture introducing masked attention in GNNs, where attention weights are calculated only between directly neighboring nodes.
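
The scoring and aggregation rule from the original GAT paper (Veličković et al., 2018), where N(i) is the set of neighbors of node i, W and a are learned, and || denotes concatenation:

```latex
e_{ij} = \mathrm{LeakyReLU}\!\left(\mathbf{a}^{\top}[\,W\mathbf{h}_i \,\|\, W\mathbf{h}_j\,]\right),\quad
\alpha_{ij} = \frac{\exp(e_{ij})}{\sum_{k \in \mathcal{N}(i)} \exp(e_{ik})},\quad
\mathbf{h}_i' = \sigma\!\Big(\sum_{j \in \mathcal{N}(i)} \alpha_{ij}\, W\mathbf{h}_j\Big)
```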

Message passing

Fundamental process in GNNs where nodes exchange and aggregate information with their neighbors to update their latent representations.
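
A minimal sketch of one such step in NumPy, assuming mean aggregation and a tanh update (message, aggregation, and update functions all vary across GNN families):

```python
import numpy as np

def message_passing_step(H, A, W_msg, W_upd):
    # Each node averages its neighbors' transformed features (messages),
    # then combines them with its own state to produce the new embedding.
    deg = np.maximum(A.sum(axis=1, keepdims=True), 1.0)
    messages = (A @ (H @ W_msg)) / deg
    return np.tanh(H @ W_upd + messages)

A = np.array([[0,1,1],[1,0,0],[1,0,0]], dtype=float)  # star on 3 nodes
rng = np.random.default_rng(1)
H = rng.normal(size=(3, 4))
W_msg, W_upd = rng.normal(size=(4, 4)), rng.normal(size=(4, 4))
print(message_passing_step(H, A, W_msg, W_upd).shape)  # (3, 4)
```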

Multi-head attention mechanism

Extension of attention in which several heads compute attention weights independently, allowing the model to capture different kinds of relationships in the graph.
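
A compact NumPy sketch of the splitting-and-concatenation mechanics, leaving out the output projection and any masking a full implementation would include:

```python
import numpy as np

def multi_head_attention(H, Wq, Wk, Wv, n_heads):
    # Project once, split the feature dimension into n_heads chunks,
    # attend independently per head, then concatenate the results.
    n, d = H.shape
    dh = d // n_heads
    Q, K, V = H @ Wq, H @ Wk, H @ Wv
    outs = []
    for h in range(n_heads):
        q, k, v = (M[:, h * dh:(h + 1) * dh] for M in (Q, K, V))
        s = q @ k.T / np.sqrt(dh)
        s = np.exp(s - s.max(axis=-1, keepdims=True))
        outs.append((s / s.sum(axis=-1, keepdims=True)) @ v)
    return np.concatenate(outs, axis=-1)       # back to shape (n, d)

rng = np.random.default_rng(2)
H = rng.normal(size=(5, 8))
Wq, Wk, Wv = (rng.normal(size=(8, 8)) for _ in range(3))
print(multi_head_attention(H, Wq, Wk, Wv, n_heads=2).shape)  # (5, 8)
```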

Edge embedding

Vector representation of graph edges capturing their intrinsic characteristics and the relationships between the nodes they connect.
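
One simple hedged recipe in NumPy (concatenate the endpoint embeddings with the edge's raw features, then project; difference, Hadamard product, and per-type embeddings are common alternatives):

```python
import numpy as np

def edge_embedding(h_u, h_v, f_uv, W):
    # Combine both endpoint embeddings with raw edge features
    # into a single learned vector for the edge (u, v).
    return np.tanh(W @ np.concatenate([h_u, h_v, f_uv]))

rng = np.random.default_rng(3)
h_u, h_v = rng.normal(size=4), rng.normal(size=4)
f_uv = np.array([1.0, 0.0])                # e.g. a one-hot edge type
W = rng.normal(size=(8, 4 + 4 + 2))
print(edge_embedding(h_u, h_v, f_uv, W).shape)  # (8,) edge vector
```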

Transformer-XL for Graphs

Extension of Transformer-XL adapted to handle long-range dependencies in graphs through a segment-level caching mechanism.

GraphBERT

Pre-trained architecture specifically designed for graphs using masked Transformers and self-supervised training strategies.

Graphormer

Pure Transformer architecture for graphs using centrality-based positional encodings and structured attention mechanisms.
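
Its two signature ingredients, shown here in simplified form (omitting the paper's edge-encoding term and the in/out-degree split for directed graphs): a centrality encoding z, a learned embedding indexed by node degree and added to the input features, and a spatial encoding b, a learned scalar indexed by the shortest-path distance φ(v_i, v_j) and added to the attention logits:

```latex
h_i^{(0)} = x_i + z_{\deg(v_i)},\qquad
A_{ij} = \frac{(h_i W_Q)(h_j W_K)^{\top}}{\sqrt{d}} + b_{\phi(v_i, v_j)}
```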

Edge Attention

Attention variant where weights are computed on edges rather than nodes, allowing direct modeling of relationship importance.

Heterogeneous Graph Transformer

Extension of Graph Transformers adapted for heterogeneous graphs with different node and edge types using type-specific attention mechanisms.
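
A heavily simplified NumPy sketch of the core idea only (the full HGT model also uses edge-type-specific matrices and learned priors): projection matrices are selected by node type, so different type pairs are scored with different parameters.

```python
import numpy as np

def typed_attention_score(h_src, h_dst, src_type, dst_type, Wq, Wk):
    # Queries and keys use projections chosen by node type, so e.g.
    # author->paper pairs are scored differently from paper->venue pairs.
    q = h_dst @ Wq[dst_type]
    k = h_src @ Wk[src_type]
    return float(q @ k) / np.sqrt(len(q))

rng = np.random.default_rng(4)
Wq = {"paper": rng.normal(size=(4, 4)), "author": rng.normal(size=(4, 4))}
Wk = {"paper": rng.normal(size=(4, 4)), "author": rng.normal(size=(4, 4))}
h_author, h_paper = rng.normal(size=4), rng.normal(size=4)
print(typed_attention_score(h_author, h_paper, "author", "paper", Wq, Wk))
```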

Structured Attention

Attention mechanism that explicitly integrates structural information like paths, cycles, or graph motifs into the attention weight computation.

Cross-attention between Nodes

Attention operation where queries, keys, and values come from different node representations, enabling more complex interactions.
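
A minimal NumPy sketch under toy assumptions: queries come from one node set and keys/values from another, so the first set "reads" information out of the second (e.g. across two subgraphs):

```python
import numpy as np

def cross_attention(H_query, H_context, Wq, Wk, Wv):
    # Queries from one node set; keys and values from another.
    Q = H_query @ Wq
    K, V = H_context @ Wk, H_context @ Wv
    s = Q @ K.T / np.sqrt(Q.shape[-1])
    s = np.exp(s - s.max(axis=-1, keepdims=True))
    return (s / s.sum(axis=-1, keepdims=True)) @ V

rng = np.random.default_rng(5)
H_a = rng.normal(size=(3, 6))              # nodes of one subgraph
H_b = rng.normal(size=(5, 6))              # nodes of another subgraph
Wq, Wk, Wv = (rng.normal(size=(6, 6)) for _ in range(3))
print(cross_attention(H_a, H_b, Wq, Wk, Wv).shape)  # (3, 6)
```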
