AI Glossary

The Complete Artificial Intelligence Dictionary

162 categories · 2,032 subcategories · 23,060 terms

Graph Transformer

Neural architecture combining Transformer attention mechanisms with graph structure to capture global and local dependencies in relational data.

Graph Attention

Mechanism adapted from Transformer attention that calculates relative importance between graph nodes while considering their structural connectivity.
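
The core of this mechanism (and of the Graph Transformer entry above) can be sketched in a few lines: compute pairwise dot-product scores, then mask out node pairs that are not connected before the softmax. A minimal PyTorch sketch; the names, shapes, and random toy graph are illustrative, not taken from any particular library.

```python
# Minimal sketch: dot-product attention restricted to graph neighbours.
import torch

def graph_attention(h, adj):
    """h: (N, d) node features; adj: (N, N) 0/1 adjacency with self-loops."""
    d = h.size(-1)
    scores = h @ h.T / d ** 0.5                           # pairwise attention logits
    scores = scores.masked_fill(adj == 0, float("-inf"))  # keep only connected pairs
    alpha = torch.softmax(scores, dim=-1)                 # normalise over each node's neighbours
    return alpha @ h                                      # aggregate neighbour features

h = torch.randn(5, 8)
adj = (torch.rand(5, 5) > 0.5).float()
adj.fill_diagonal_(1.0)             # self-loops keep the softmax well-defined
out = graph_attention(h, adj)       # (5, 8) updated node representations
```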

Positional Encoding for Graphs

Positional encoding technique adapted for graphs that incorporates structural information like distances, degrees, or paths to represent relative positions of nodes.
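
One common concrete choice is Laplacian eigenvector encodings: the leading non-trivial eigenvectors of the normalized graph Laplacian give each node coordinates reflecting its structural position. A minimal sketch, assuming a symmetric adjacency matrix; degree- or shortest-path-based encodings are alternatives.

```python
# Sketch of Laplacian-eigenvector positional encodings for graph nodes.
import torch

def laplacian_pe(adj, k):
    """adj: (N, N) symmetric adjacency; returns (N, k) positional encodings."""
    deg = adj.sum(dim=1)
    d_inv_sqrt = torch.diag(deg.clamp(min=1).pow(-0.5))
    lap = torch.eye(adj.size(0)) - d_inv_sqrt @ adj @ d_inv_sqrt  # normalised Laplacian
    eigvals, eigvecs = torch.linalg.eigh(lap)                     # ascending eigenvalues
    return eigvecs[:, 1:k + 1]          # skip the trivial constant eigenvector

adj = torch.tensor([[0., 1., 1., 0.],
                    [1., 0., 1., 0.],
                    [1., 1., 0., 1.],
                    [0., 0., 1., 0.]])
pe = laplacian_pe(adj, k=2)             # (4, 2), typically concatenated to node features
```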

Node Self-Attention

Operation where each graph node calculates attention weights on all other nodes, including itself, to capture long-range dependencies.
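
In contrast to neighbourhood-masked attention, node self-attention places no adjacency mask on the score matrix, so information can flow between any two nodes in a single layer. A minimal sketch with learned query/key/value projections; names and dimensions are illustrative.

```python
# Sketch of full (unmasked) self-attention across all graph nodes.
import torch
import torch.nn as nn

class NodeSelfAttention(nn.Module):
    def __init__(self, d):
        super().__init__()
        self.q, self.k, self.v = nn.Linear(d, d), nn.Linear(d, d), nn.Linear(d, d)

    def forward(self, h):                                   # h: (N, d) node features
        q, k, v = self.q(h), self.k(h), self.v(h)
        alpha = torch.softmax(q @ k.T / h.size(-1) ** 0.5, dim=-1)  # (N, N)
        return alpha @ v                                    # every node attends to every node

h = torch.randn(6, 16)
out = NodeSelfAttention(16)(h)                              # (6, 16)
```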

Graph Attention Network (GAT)

Pioneering architecture introducing masked attention in GNNs, where attention weights are calculated only between directly neighboring nodes.
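
The GAT scoring rule differs from dot-product attention: scores come from a learnable vector applied to the concatenation of the two transformed node features, passed through a LeakyReLU, with the softmax restricted to each node's neighbours. A single-head sketch following the published formulation in spirit; variable names are illustrative.

```python
# Single-head sketch of the GAT attention layer.
import torch
import torch.nn as nn
import torch.nn.functional as F

class GATLayer(nn.Module):
    def __init__(self, d_in, d_out):
        super().__init__()
        self.W = nn.Linear(d_in, d_out, bias=False)
        self.a = nn.Parameter(torch.randn(2 * d_out))  # scoring vector over [z_i || z_j]

    def forward(self, h, adj):                  # h: (N, d_in), adj: (N, N) with self-loops
        z = self.W(h)                           # (N, d_out)
        d = z.size(1)
        s = z @ self.a[:d]                      # source-node half of a . [z_i || z_j]
        t = z @ self.a[d:]                      # target-node half
        e = F.leaky_relu(s.unsqueeze(1) + t.unsqueeze(0), 0.2)  # e[i, j] = a . [z_i || z_j]
        e = e.masked_fill(adj == 0, float("-inf"))              # masked attention: neighbours only
        return torch.softmax(e, dim=-1) @ z     # neighbourhood-weighted combination

h = torch.randn(4, 8)
adj = (torch.rand(4, 4) > 0.5).float()
adj.fill_diagonal_(1.0)
out = GATLayer(8, 16)(h, adj)                   # (4, 16)
```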

Message Passing

Fundamental process in GNNs where nodes exchange and aggregate information with their neighbors to update their latent representations.
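
A single message-passing step can be sketched with an adjacency-matrix multiply: a message function transforms neighbour features, a mean aggregation collects them, and an update function combines the aggregate with the node's own state. The function names and the ReLU update are illustrative choices.

```python
# Sketch of one message-passing step (mean aggregation over neighbours).
import torch
import torch.nn as nn

class MessagePassingLayer(nn.Module):
    def __init__(self, d):
        super().__init__()
        self.msg = nn.Linear(d, d)        # message function applied to neighbour features
        self.upd = nn.Linear(2 * d, d)    # update function on [self || aggregated]

    def forward(self, h, adj):                           # h: (N, d), adj: (N, N)
        deg = adj.sum(dim=1, keepdim=True).clamp(min=1)
        agg = (adj @ self.msg(h)) / deg                  # mean-aggregate neighbour messages
        return torch.relu(self.upd(torch.cat([h, agg], dim=-1)))

h = torch.randn(5, 8)
adj = (torch.rand(5, 5) > 0.5).float()
h_next = MessagePassingLayer(8)(h, adj)                  # (5, 8) updated representations
```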

Multi-Head Attention Mechanism

Extension of attention in which multiple heads independently compute attention weights, allowing the model to capture different types of relationships in the graph.
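
A sketch of the multi-head variant: queries, keys, and values are split into per-head slices, each head computes its own masked attention map, and the per-head outputs are concatenated and re-projected. Shapes and names are illustrative.

```python
# Sketch of multi-head masked attention over graph nodes.
import torch
import torch.nn as nn

class MultiHeadGraphAttention(nn.Module):
    def __init__(self, d, heads):
        super().__init__()
        assert d % heads == 0
        self.h, self.dk = heads, d // heads
        self.qkv = nn.Linear(d, 3 * d)
        self.out = nn.Linear(d, d)

    def forward(self, x, adj):                           # x: (N, d), adj: (N, N)
        N = x.size(0)
        q, k, v = self.qkv(x).view(N, 3, self.h, self.dk).unbind(dim=1)  # each (N, heads, dk)
        scores = torch.einsum("nhd,mhd->hnm", q, k) / self.dk ** 0.5     # (heads, N, N)
        scores = scores.masked_fill(adj == 0, float("-inf"))
        alpha = torch.softmax(scores, dim=-1)            # one attention map per head
        out = torch.einsum("hnm,mhd->nhd", alpha, v).reshape(N, -1)      # concatenate heads
        return self.out(out)

x = torch.randn(5, 16)
adj = (torch.rand(5, 5) > 0.5).float()
adj.fill_diagonal_(1.0)
y = MultiHeadGraphAttention(16, heads=4)(x, adj)         # (5, 16)
```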

Edge Embedding

Vector representation of graph edges capturing their intrinsic characteristics and the relationships between the nodes they connect.
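
For discrete edge types (bond types, relation labels), a lookup table is a common way to realise this: each type maps to a learned vector that can then condition messages or bias attention logits. A minimal sketch; the edge list and dimensions are illustrative.

```python
# Sketch of edge embeddings from a discrete edge-type lookup table.
import torch
import torch.nn as nn

num_edge_types, d = 4, 8
edge_emb = nn.Embedding(num_edge_types, d)     # one learned vector per edge type

# edge list as (source, target, type) triples
edges = torch.tensor([[0, 1, 2],
                      [1, 2, 0],
                      [2, 0, 3]])
h = torch.randn(3, d)                          # node features

src, dst, etype = edges.T
e = torch.cat([h[src], h[dst], edge_emb(etype)], dim=-1)   # (num_edges, 3d)
# `e` can now condition per-edge messages or bias attention logits
```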

Transformer-XL for Graphs

Extension of Transformer-XL adapted to graphs that handles long-range dependencies through a segment-level caching mechanism.

GraphBERT

Pre-trained architecture specifically designed for graphs using masked Transformers and self-supervised training strategies.

Graphormer

Pure Transformer architecture for graphs using centrality-based positional encodings and structured attention mechanisms.
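
Two of Graphormer's encodings are easy to sketch: a learnable embedding of each node's degree added to its input features (centrality encoding), and a learnable scalar bias indexed by pairwise shortest-path distance added to the attention logits (spatial encoding). The toy graph and distance matrix below are illustrative.

```python
# Sketch of Graphormer-style centrality and spatial encodings.
import torch
import torch.nn as nn

max_degree, max_dist, d = 16, 8, 32
deg_emb = nn.Embedding(max_degree, d)          # centrality encoding (per degree)
dist_bias = nn.Embedding(max_dist, 1)          # spatial encoding (per distance)

adj = torch.tensor([[0., 1., 0.],
                    [1., 0., 1.],
                    [0., 1., 0.]])
x = torch.randn(3, d)
degree = adj.sum(dim=1).long()
x = x + deg_emb(degree)                        # inject centrality into node features

spd = torch.tensor([[0, 1, 2],                 # pairwise shortest-path distances
                    [1, 0, 1],
                    [2, 1, 0]])
logits = x @ x.T / d ** 0.5
logits = logits + dist_bias(spd).squeeze(-1)   # structural bias on the attention logits
alpha = torch.softmax(logits, dim=-1)
```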

Edge Attention

Attention variant where weights are computed on edges rather than nodes, allowing direct modeling of relationship importance.
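
A sketch of the idea: each edge gets its own attention logit, computed from the edge's features (here together with its endpoints), and logits are normalised across the edges arriving at each target node. The per-node loop is for clarity; real implementations use a scatter-based softmax. Names are illustrative.

```python
# Sketch of edge attention: one logit per edge, softmax per target node.
import torch
import torch.nn as nn

d = 8
score = nn.Linear(3 * d, 1)                              # one logit per edge

h = torch.randn(4, d)                                    # node features
edges = torch.tensor([[0, 1], [2, 1], [3, 1], [1, 3]])   # (src, dst) pairs
e_feat = torch.randn(edges.size(0), d)                   # edge features

src, dst = edges.T
logits = score(torch.cat([h[src], h[dst], e_feat], dim=-1)).squeeze(-1)

alpha = torch.zeros_like(logits)
for node in dst.unique():                                # normalise per target node
    idx = (dst == node)
    alpha[idx] = torch.softmax(logits[idx], dim=0)
# alpha weights each incoming edge's message when aggregating into `dst`
```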

Heterogeneous Graph Transformer

Extension of Graph Transformers adapted for heterogeneous graphs with different node and edge types using type-specific attention mechanisms.
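
A sketch of the type-specific part: each node type owns its own query/key projection, so nodes of different types (say, authors and papers) are mapped by different weights before the usual masked attention. Heavily simplified and illustrative; the published heterogeneous models also parameterise edge types and use multiple heads.

```python
# Sketch of type-specific attention projections for heterogeneous graphs.
import torch
import torch.nn as nn

d, node_types = 8, 2
q_proj = nn.ModuleList(nn.Linear(d, d) for _ in range(node_types))
k_proj = nn.ModuleList(nn.Linear(d, d) for _ in range(node_types))

h = torch.randn(5, d)
ntype = torch.tensor([0, 0, 1, 1, 1])                    # type id per node
adj = (torch.rand(5, 5) > 0.5).float()
adj.fill_diagonal_(1.0)

# route each node through the projection matching its type
q = torch.stack([q_proj[t](h[i]) for i, t in enumerate(ntype.tolist())])
k = torch.stack([k_proj[t](h[i]) for i, t in enumerate(ntype.tolist())])

scores = (q @ k.T / d ** 0.5).masked_fill(adj == 0, float("-inf"))
alpha = torch.softmax(scores, dim=-1)                    # type-aware attention weights
```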

Structured Attention

Attention mechanism that explicitly integrates structural information like paths, cycles, or graph motifs into the attention weight computation.

Cross-Attention between Nodes

Attention operation where queries, keys, and values come from different node representations, enabling more complex interactions.
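
A minimal sketch: queries come from one set of node representations and keys/values from another (for example, two graphs, or two layers of the same graph). Names and shapes are illustrative.

```python
# Sketch of cross-attention from one node set onto another.
import torch
import torch.nn as nn

class NodeCrossAttention(nn.Module):
    def __init__(self, d):
        super().__init__()
        self.q, self.k, self.v = nn.Linear(d, d), nn.Linear(d, d), nn.Linear(d, d)

    def forward(self, h_query, h_context):               # (N, d) and (M, d)
        q, k, v = self.q(h_query), self.k(h_context), self.v(h_context)
        alpha = torch.softmax(q @ k.T / q.size(-1) ** 0.5, dim=-1)  # (N, M)
        return alpha @ v                                 # query nodes read from context nodes

h_a = torch.randn(4, 8)                                  # e.g. nodes of graph A
h_b = torch.randn(6, 8)                                  # e.g. nodes of graph B
out = NodeCrossAttention(8)(h_a, h_b)                    # (4, 8)
```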
