AI Glossary
A complete glossary of artificial intelligence
Graph Transformer
Neural architecture combining Transformer attention mechanisms with graph structure to capture global and local dependencies in relational data.
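A minimal NumPy sketch of the local/global idea: one set of attention scores is masked to the graph's edges, another attends over all node pairs, and the two outputs are blended. The function name, the random features, and the `alpha` mixing weight are illustrative, not any specific paper's layer.

```python
import numpy as np

def graph_transformer_block(h, adj, alpha=0.5):
    """Blend edge-masked (local) and all-pairs (global) attention outputs."""
    d = h.shape[1]
    scores = h @ h.T / np.sqrt(d)          # pairwise dot-product scores

    def softmax(s):
        e = np.exp(s - s.max(axis=1, keepdims=True))
        return e / e.sum(axis=1, keepdims=True)

    local = softmax(np.where(adj > 0, scores, -1e9)) @ h   # neighbors only
    global_ = softmax(scores) @ h                          # every node pair
    return alpha * local + (1 - alpha) * global_
```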
Attention on Graphs
Mechanism adapted from Transformer attention that calculates relative importance between graph nodes while considering their structural connectivity.
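A hedged NumPy sketch of that masking: dot-product scores are computed for every node pair, scores for non-edges are pushed toward negative infinity, and a row-wise softmax turns the rest into neighbor weights. `graph_attention`, `h`, and `adj` are illustrative names.

```python
import numpy as np

def graph_attention(h, adj):
    """h: (N, d) node features; adj: (N, N) adjacency with self-loops."""
    d = h.shape[1]
    scores = h @ h.T / np.sqrt(d)                  # relative importance of pairs
    scores = np.where(adj > 0, scores, -1e9)       # keep only connected pairs
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)  # softmax over neighbors
    return weights @ h                             # aggregate neighbor features
```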
Positional Encoding for Graphs
Positional encoding technique adapted for graphs that incorporates structural information like distances, degrees, or paths to represent relative positions of nodes.
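One common instance is the Laplacian eigenvector encoding: the smallest non-trivial eigenvectors of the graph Laplacian serve as coordinate-like node positions. A minimal sketch, assuming a symmetric adjacency matrix; the function name is illustrative, and in practice eigenvector signs are ambiguous and often randomized during training.

```python
import numpy as np

def laplacian_positional_encoding(adj, k):
    """Return the k smallest non-trivial Laplacian eigenvectors as positions."""
    deg = np.diag(adj.sum(axis=1))
    lap = deg - adj                        # unnormalized graph Laplacian
    _, eigvecs = np.linalg.eigh(lap)       # eigenvectors, ascending eigenvalues
    return eigvecs[:, 1:k + 1]             # skip the trivial constant vector

adj = np.array([[0, 1, 1, 0],
                [1, 0, 1, 0],
                [1, 1, 0, 1],
                [0, 0, 1, 0]], dtype=float)
pe = laplacian_positional_encoding(adj, k=2)  # (4, 2), concatenated to node features
```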
Self-Attention over Nodes
Operation where each graph node computes attention weights over every node in the graph, including itself, to capture long-range dependencies.
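A brief sketch of unmasked node self-attention with query/key/value projections, here random matrices standing in for learned ones; every node mixes information from every other node in a single step.

```python
import numpy as np

rng = np.random.default_rng(0)
N, d = 5, 8                                   # 5 nodes, feature dim 8
h = rng.normal(size=(N, d))
Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))  # learned in a real model

q, k, v = h @ Wq, h @ Wk, h @ Wv
scores = q @ k.T / np.sqrt(d)                 # (N, N): every node vs. every node
weights = np.exp(scores - scores.max(axis=1, keepdims=True))
weights /= weights.sum(axis=1, keepdims=True)
out = weights @ v                             # long-range mixing in one hop
```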
Graph Attention Network (GAT)
Pioneering architecture introducing masked attention in GNNs, where attention weights are calculated only between directly neighboring nodes.
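A single-head sketch of the GAT coefficient computation, using the standard split of the attention vector `a` into source and destination parts; parameter shapes follow the original formulation, but the names are illustrative.

```python
import numpy as np

def gat_layer(h, adj, W, a):
    """h: (N, d_in); W: (d_in, d_out); a: (2 * d_out,) attention vector."""
    z = h @ W                                  # projected node features
    d_out = z.shape[1]
    # e_ij = LeakyReLU(a^T [z_i || z_j]), via the usual split of a
    src = z @ a[:d_out]                        # contribution of the center node
    dst = z @ a[d_out:]                        # contribution of the neighbor
    e = src[:, None] + dst[None, :]
    e = np.where(e > 0, e, 0.2 * e)            # LeakyReLU, slope 0.2
    e = np.where(adj > 0, e, -1e9)             # masked attention: neighbors only
    alpha = np.exp(e - e.max(axis=1, keepdims=True))
    alpha /= alpha.sum(axis=1, keepdims=True)
    return alpha @ z
```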
Message Passing
Fundamental process in GNNs where nodes exchange and aggregate information with their neighbors to update their latent representations.
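A one-round message-passing sketch with mean aggregation; the weight matrices stand in for learned parameters, and real frameworks offer many other aggregation and update choices.

```python
import numpy as np

def message_passing_step(h, adj, W_msg, W_upd):
    """h: (N, d) node states; adj: (N, N); W_msg, W_upd: (d, d) weights."""
    messages = adj @ (h @ W_msg)                       # sum of neighbor messages
    deg = adj.sum(axis=1, keepdims=True).clip(min=1)   # avoid division by zero
    aggregated = messages / deg                        # mean aggregation
    return np.tanh(h @ W_upd + aggregated)             # updated latent states
```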
Multi-Head Attention Mechanism
Extension of attention in which several attention heads compute weights independently, allowing the model to capture different types of relationships in the graph.
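A sketch of multi-head masked attention: each head gets its own projections (random here, learned in practice) and attends independently; concatenation merges what the heads found. In this simplified version `heads` must divide the feature dimension.

```python
import numpy as np

def multi_head_graph_attention(h, adj, heads=4):
    rng = np.random.default_rng(0)
    N, d = h.shape
    dh = d // heads                                    # per-head dimension
    outputs = []
    for _ in range(heads):
        Wq, Wk, Wv = (rng.normal(size=(d, dh)) for _ in range(3))
        q, k, v = h @ Wq, h @ Wk, h @ Wv
        scores = np.where(adj > 0, q @ k.T / np.sqrt(dh), -1e9)
        w = np.exp(scores - scores.max(axis=1, keepdims=True))
        w /= w.sum(axis=1, keepdims=True)
        outputs.append(w @ v)                          # one head's view
    return np.concatenate(outputs, axis=1)             # (N, d) merged heads
```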
Edge embedding
Vector representation of graph edges capturing their intrinsic characteristics and the relationships between the nodes they connect.
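A minimal sketch of one common construction: concatenate both endpoint features with the edge's own attributes and project. The layout of `edge_index` and the weight shape are assumptions for illustration.

```python
import numpy as np

def edge_embedding(h, edge_index, edge_attr, W):
    """h: (N, d); edge_index: (E, 2) [src, dst] pairs; edge_attr: (E, d_e);
    W: (2 * d + d_e, d_out) projection."""
    src, dst = edge_index[:, 0], edge_index[:, 1]
    combined = np.concatenate([h[src], h[dst], edge_attr], axis=1)
    return np.tanh(combined @ W)                       # (E, d_out) edge vectors
```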
Transformer-XL for Graphs
Extension of Transformer-XL adapted to graphs that handles long-range dependencies through a segment-level caching mechanism.
GraphBERT
Pre-trained architecture specifically designed for graphs using masked Transformers and self-supervised training strategies.
Graphormer
Pure Transformer architecture for graphs using centrality-based positional encodings and structured attention mechanisms.
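A sketch of Graphormer's centrality encoding: a degree-indexed embedding is added to each node's features before attention. The lookup table stands in for learned parameters; Graphormer additionally biases attention scores with shortest-path (spatial) and edge encodings, omitted here.

```python
import numpy as np

def centrality_encoding(h, adj, degree_table):
    """degree_table: (max_degree + 1, d) embedding lookup (learned in practice)."""
    deg = adj.sum(axis=1).astype(int)        # node degrees as centrality proxy
    return h + degree_table[deg]             # centrality-aware node features
```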
Edge Attention
Attention variant where weights are computed on edges rather than nodes, allowing direct modeling of relationship importance.
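A sketch of per-edge attention: each edge gets a scalar score, normalized over the edges entering the same destination node. The sum-based scorer is a placeholder for a learned function.

```python
import numpy as np

def edge_attention(edge_emb, edge_index, num_nodes):
    """edge_emb: (E, d) edge representations; edge_index: (E, 2) [src, dst]."""
    scores = edge_emb.sum(axis=1)                 # placeholder learned scorer
    weights = np.zeros_like(scores)
    for node in range(num_nodes):
        mask = edge_index[:, 1] == node           # edges pointing at `node`
        if mask.any():
            e = np.exp(scores[mask] - scores[mask].max())
            weights[mask] = e / e.sum()           # softmax per destination node
    return weights                                # per-edge importance
```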
Heterogeneous Graph Transformer
Extension of Graph Transformers adapted for heterogeneous graphs with different node and edge types using type-specific attention mechanisms.
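A much-simplified sketch of type-specific parameters: each node is projected with a matrix chosen by its type before masked attention. Real heterogeneous models such as HGT also use type-dependent key, query, and message projections per edge type.

```python
import numpy as np

def hetero_attention(h, node_type, adj, W_by_type):
    """node_type: (N,) integer types; W_by_type: {type: (d, d) matrix}."""
    z = np.stack([h[i] @ W_by_type[t] for i, t in enumerate(node_type)])
    d = z.shape[1]
    scores = np.where(adj > 0, z @ z.T / np.sqrt(d), -1e9)
    w = np.exp(scores - scores.max(axis=1, keepdims=True))
    w /= w.sum(axis=1, keepdims=True)
    return w @ z                                   # type-aware aggregation
```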
Structured Attention
Attention mechanism that explicitly integrates structural information like paths, cycles, or graph motifs into the attention weight computation.
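One concrete form is a shortest-path bias: a scalar, indexed by the pairwise path length, is added to the raw attention scores. The bias table here is a stand-in for learned parameters.

```python
import numpy as np

def structural_bias_attention(h, sp_dist, dist_bias):
    """sp_dist: (N, N) integer shortest-path lengths;
    dist_bias: (max_dist + 1,) lookup, learned in practice."""
    d = h.shape[1]
    scores = h @ h.T / np.sqrt(d) + dist_bias[sp_dist]  # structure-biased scores
    w = np.exp(scores - scores.max(axis=1, keepdims=True))
    w /= w.sum(axis=1, keepdims=True)
    return w @ h
```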
Cross-attention between Nodes
Attention operation where queries, keys, and values come from different node representations, enabling more complex interactions.
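A sketch with queries drawn from one node set and keys/values from another, for example nodes of two different graphs; the weight matrices are illustrative placeholders for learned projections.

```python
import numpy as np

def cross_attention(h_query, h_context, Wq, Wk, Wv):
    """h_query: (Nq, d) query nodes; h_context: (Nc, d) context nodes."""
    q = h_query @ Wq
    k, v = h_context @ Wk, h_context @ Wv
    scores = q @ k.T / np.sqrt(q.shape[1])        # (Nq, Nc) interactions
    w = np.exp(scores - scores.max(axis=1, keepdims=True))
    w /= w.sum(axis=1, keepdims=True)
    return w @ v                                  # queries enriched with context
```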