
AI Glossary

The complete glossary of Artificial Intelligence

162 categories · 2,032 subcategories · 23,060 terms

Graph Transformer

Neural architecture combining Transformer attention mechanisms with graph structure to capture global and local dependencies in relational data.


Graph Attention

Mechanism adapted from Transformer attention that calculates relative importance between graph nodes while considering their structural connectivity.


Positional Encoding for Graphs

Positional encoding technique adapted for graphs that incorporates structural information like distances, degrees, or paths to represent relative positions of nodes.
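As a minimal numpy sketch of one widely used structural encoding (Laplacian eigenvectors, a common alternative to the degree- and path-based variants mentioned above), the function below derives per-node position vectors from the normalized graph Laplacian; the function name and `k` parameter are illustrative, not from any particular library:

```python
import numpy as np

def laplacian_positional_encoding(A, k):
    """Positional encoding for graph nodes from the k smallest
    non-trivial eigenvectors of the normalized Laplacian.

    A: symmetric adjacency matrix (n x n). Nodes that are close in
    the graph receive similar position vectors."""
    deg = A.sum(axis=1)
    d_inv_sqrt = 1.0 / np.sqrt(np.maximum(deg, 1e-12))
    # Normalized Laplacian L = I - D^{-1/2} A D^{-1/2}.
    L = np.eye(len(A)) - d_inv_sqrt[:, None] * A * d_inv_sqrt[None, :]
    vals, vecs = np.linalg.eigh(L)      # eigenvalues in ascending order
    return vecs[:, 1:k + 1]             # skip the trivial first eigenvector

# Path graph 0 - 1 - 2 - 3: positions should vary smoothly along the path.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
pe = laplacian_positional_encoding(A, k=2)   # shape (4, 2)
```

These vectors are typically concatenated with (or added to) the node feature matrix before the first attention layer.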


Self-Attention over Nodes

Operation where each graph node calculates attention weights on all other nodes, including itself, to capture long-range dependencies.
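A minimal numpy sketch of this operation, with identity query/key/value projections assumed for brevity (real layers learn separate projection matrices):

```python
import numpy as np

def node_self_attention(X):
    """Scaled dot-product self-attention over node features X (n x d).

    Every node attends to every other node, including itself, so the
    weights can capture dependencies regardless of graph distance."""
    d = X.shape[1]
    scores = X @ X.T / np.sqrt(d)                 # pairwise similarities
    scores -= scores.max(axis=1, keepdims=True)   # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=1, keepdims=True) # softmax per node
    return weights @ X, weights                   # updated features, weights

X = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
out, W = node_self_attention(X)   # each row of W sums to 1
```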


Graph Attention Network (GAT)

Pioneering architecture introducing masked attention in GNNs, where attention weights are calculated only between directly neighboring nodes.
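The masking idea can be sketched in numpy as below: attention logits are computed in the GAT style (LeakyReLU over concatenated projected features), then set to negative infinity wherever two nodes are not direct neighbors, so the softmax assigns them zero weight. This is a single-head, loop-based sketch for clarity, not the library implementation:

```python
import numpy as np

def gat_layer(X, A, W, a):
    """One simplified GAT layer with masked attention.

    X: node features (n x d), A: adjacency with self-loops (n x n),
    W: projection (d x d_out), a: attention vector (2 * d_out,)."""
    H = X @ W
    n = H.shape[0]
    logits = np.empty((n, n))
    for i in range(n):
        for j in range(n):
            z = a @ np.concatenate([H[i], H[j]])
            logits[i, j] = z if z > 0 else 0.2 * z   # LeakyReLU
    # Mask: only direct neighbors take part in the softmax.
    logits = np.where(A > 0, logits, -np.inf)
    weights = np.exp(logits - logits.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)
    return weights @ H, weights

A = np.array([[1, 1, 0], [1, 1, 1], [0, 1, 1]], dtype=float)
X = np.eye(3)
rng = np.random.default_rng(0)
out, W_att = gat_layer(X, A, rng.standard_normal((3, 2)),
                       rng.standard_normal(4))
# W_att[0, 2] is exactly 0: nodes 0 and 2 are not neighbors.
```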


Message Passing

Fundamental process in GNNs where nodes exchange and aggregate information with their neighbors to update their latent representations.
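A minimal sketch of one round, using mean aggregation and a fixed 50/50 combination with the node's own state (both are illustrative choices; real layers learn these functions):

```python
import numpy as np

def message_passing_step(X, A):
    """One round of message passing: aggregate neighbor features
    (mean), then combine with each node's own representation."""
    deg = A.sum(axis=1, keepdims=True)          # neighbor counts
    messages = (A @ X) / np.maximum(deg, 1)     # mean over neighbors
    return 0.5 * X + 0.5 * messages             # update step

# Path graph 0 - 1 - 2; only node 2 starts with a nonzero feature.
A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)
X = np.array([[0.0], [0.0], [1.0]])
X1 = message_passing_step(X, A)
# After one round node 1 has picked up half of node 2's signal:
# X1 == [[0.0], [0.25], [0.5]]
```

Stacking k such rounds lets information travel k hops, which is why deep GNNs see progressively larger neighborhoods.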


Multi-Head Attention Mechanism

Extension of attention where multiple attention heads independently calculate attention weights, allowing capture of different types of relationships in the graph.
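A numpy sketch of the idea: each head gets its own query/key/value projections, computes attention independently, and the head outputs are concatenated as in the standard Transformer (projection matrices here are random placeholders for learned weights):

```python
import numpy as np

def multi_head_attention(X, Wq, Wk, Wv):
    """Multi-head self-attention over node features X (n x d).

    Wq, Wk, Wv: lists of per-head projection matrices (d x d_head).
    Each head can specialize in a different relational pattern."""
    heads = []
    for q, k, v in zip(Wq, Wk, Wv):
        Q, K, V = X @ q, X @ k, X @ v
        s = Q @ K.T / np.sqrt(K.shape[1])            # scaled dot product
        s = np.exp(s - s.max(axis=1, keepdims=True))
        s /= s.sum(axis=1, keepdims=True)            # per-row softmax
        heads.append(s @ V)
    return np.concatenate(heads, axis=1)             # concat head outputs

rng = np.random.default_rng(0)
X = rng.standard_normal((3, 4))
Wq, Wk, Wv = ([rng.standard_normal((4, 2)) for _ in range(2)]
              for _ in range(3))
out = multi_head_attention(X, Wq, Wk, Wv)   # 2 heads x d_head 2 -> (3, 4)
```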


Edge embedding

Vector representation of graph edges capturing their intrinsic characteristics and the relationships between the nodes they connect.


Transformer-XL for Graphs

Adapted extension of Transformer-XL that handles long-range dependencies in graphs through a segment-level caching mechanism.


GraphBERT

Pre-trained architecture specifically designed for graphs using masked Transformers and self-supervised training strategies.


Graphormer

Pure Transformer architecture for graphs using centrality-based positional encodings and structured attention mechanisms.


Edge Attention

Attention variant where weights are computed on edges rather than nodes, allowing direct modeling of relationship importance.


Heterogeneous Graph Transformer

Extension of Graph Transformers adapted for heterogeneous graphs with different node and edge types using type-specific attention mechanisms.


Structured Attention

Attention mechanism that explicitly integrates structural information like paths, cycles, or graph motifs into the attention weight computation.


Cross-attention between Nodes

Attention operation where queries, keys, and values come from different node representations, enabling more complex interactions.
