AI Glossary

The Complete Artificial Intelligence Dictionary

162 Categories · 2,032 Subcategories · 23,060 Terms

Sinusoidal Positional Encoding

Positional encoding method that uses sinusoidal functions of different frequencies to create unique, deterministic position representations without any learned parameters.
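
For reference, a minimal NumPy sketch of the standard sin/cos formulation, PE[pos, 2i] = sin(pos / 10000^(2i/d)) and PE[pos, 2i+1] = cos(pos / 10000^(2i/d)); the function name and shapes are illustrative:

```python
import numpy as np

def sinusoidal_positional_encoding(seq_len: int, d_model: int) -> np.ndarray:
    """Deterministic sin/cos table of shape (seq_len, d_model); d_model even."""
    positions = np.arange(seq_len)[:, None]               # (seq_len, 1)
    dims = np.arange(0, d_model, 2)[None, :]              # (1, d_model/2)
    angles = positions / (10000.0 ** (dims / d_model))    # one frequency per pair
    pe = np.zeros((seq_len, d_model))
    pe[:, 0::2] = np.sin(angles)                          # even channels: sine
    pe[:, 1::2] = np.cos(angles)                          # odd channels: cosine
    return pe
```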

Learned Positional Encoding

Approach where position embeddings are learned as trainable model parameters, allowing adaptive optimization to specific training data.
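
A hedged PyTorch sketch of the idea: a trainable embedding table indexed by position, added to token embeddings (the sizes are illustrative defaults, not taken from any specific model):

```python
import torch
import torch.nn as nn

class LearnedPositionalEncoding(nn.Module):
    """Trainable position table, added to token embeddings."""
    def __init__(self, max_len: int = 512, d_model: int = 768):
        super().__init__()
        self.pos_emb = nn.Embedding(max_len, d_model)     # learned parameters

    def forward(self, token_emb: torch.Tensor) -> torch.Tensor:
        # token_emb: (batch, seq_len, d_model), with seq_len <= max_len
        positions = torch.arange(token_emb.size(1), device=token_emb.device)
        return token_emb + self.pos_emb(positions)        # broadcasts over batch
```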

Relative Positional Encoding

Advanced technique that encodes relative distances between tokens rather than their absolute positions, improving generalization to variable sequence lengths.
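
One common realization, sketched here as a learned bias per clipped relative distance that is added to the attention logits (a simplification of Shaw-style and T5-style schemes; the class name and clipping window are assumptions):

```python
import torch
import torch.nn as nn

class RelativePositionBias(nn.Module):
    """Learned bias b[j - i] added to attention logits (clipped, simplified)."""
    def __init__(self, num_heads: int, max_distance: int = 128):
        super().__init__()
        self.max_distance = max_distance
        self.bias = nn.Embedding(2 * max_distance + 1, num_heads)

    def forward(self, seq_len: int) -> torch.Tensor:
        pos = torch.arange(seq_len)
        rel = pos[None, :] - pos[:, None]                 # (L, L) signed distances
        rel = rel.clamp(-self.max_distance, self.max_distance) + self.max_distance
        return self.bias(rel).permute(2, 0, 1)            # (num_heads, L, L)
```

The returned tensor is added to the q·kᵀ logits before the softmax, so the same weights apply wherever a given distance occurs in the sequence.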

Absolute Positional Encoding

Traditional positional encoding method where each position receives a unique embedding based on its absolute index in the sequence.

Rotary Positional Encoding (RoPE)

Innovative technique that applies position-dependent rotations to the query and key vectors, effectively integrating relative position information directly into the attention mechanism.
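
A minimal sketch of the rotation, assuming an even head dimension and treating channel pairs as 2-D coordinates; the same function is applied to both queries and keys before their dot product:

```python
import torch

def apply_rope(x: torch.Tensor, base: float = 10000.0) -> torch.Tensor:
    """Rotate channel pairs of x (seq_len, d) by position-dependent angles."""
    seq_len, d = x.shape                                  # d assumed even
    pos = torch.arange(seq_len, dtype=torch.float32)[:, None]          # (L, 1)
    freqs = base ** (-torch.arange(0, d, 2, dtype=torch.float32) / d)  # (d/2,)
    angles = pos * freqs                                  # (L, d/2)
    cos, sin = angles.cos(), angles.sin()
    x1, x2 = x[:, 0::2], x[:, 1::2]                       # coordinates of each pair
    out = torch.empty_like(x)
    out[:, 0::2] = x1 * cos - x2 * sin                    # standard 2-D rotation
    out[:, 1::2] = x1 * sin + x2 * cos
    return out
```

Because both q and k are rotated by angles proportional to their positions, their dot product depends only on the relative offset between them.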

ALiBi Positional Encoding

Method (Attention with Linear Biases) that penalizes attention scores in proportion to the distance between tokens, enabling effective extrapolation to longer sequence lengths without retraining.
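
A sketch of the bias term (the slopes follow the paper's geometric sequence for power-of-two head counts; the causal mask is left to the caller):

```python
import torch

def alibi_bias(seq_len: int, num_heads: int) -> torch.Tensor:
    """Per-head linear distance penalties to add to attention logits."""
    # geometric slope sequence from the ALiBi paper (power-of-two head counts)
    slopes = torch.tensor([2.0 ** (-8.0 * (h + 1) / num_heads)
                           for h in range(num_heads)])
    pos = torch.arange(seq_len)
    rel = (pos[None, :] - pos[:, None]).float()           # j - i, negative for past
    return slopes[:, None, None] * rel[None, :, :]        # (num_heads, L, L)
```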

Position Embeddings

Dense vectors representing the position of each token in a sequence, added or concatenated to token embeddings to provide spatial or temporal location information.
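
A toy illustration of the two combination modes (all tensor values are dummy data):

```python
import torch

token_emb = torch.randn(2, 16, 64)    # (batch, seq_len, d_model), dummy values
pos_emb = torch.randn(16, 64)         # one dense vector per position

added = token_emb + pos_emb                               # widths must match
concat = torch.cat([token_emb, pos_emb.expand(2, -1, -1)],
                   dim=-1)                                # width becomes 2 * d_model
```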

Attention with Positional Encoding

Integration of positional encoding into the attention mechanism to allow the model to weight tokens differently based on their relative positions in the sequence.
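
Schematically, any relative scheme reduces to an extra term in the attention logits; a sketch with assumed shapes, where pos_bias could come from the relative-bias or ALiBi sketches above:

```python
import math
import torch

def attention_with_positional_bias(q, k, v, pos_bias):
    # q, k, v: (num_heads, seq_len, d_head); pos_bias: (num_heads, seq_len, seq_len)
    logits = q @ k.transpose(-2, -1) / math.sqrt(q.size(-1)) + pos_bias
    return torch.softmax(logits, dim=-1) @ v
```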

BERT Positional Embeddings

Specific implementation of learned positional encoding in the BERT architecture, using trainable position embeddings with a fixed maximum sequence length of 512 tokens.

GPT Positional Encoding

Positional encoding system used in GPT models, initially based on learned position embeddings to effectively model directional dependencies in text.

Transformer Positional Encoding

Essential component of the original Transformer architecture using sinusoidal encodings to allow the model to use token order without recurrent mechanisms.

3D Positional Encoding

Extension of positional encoding to three-dimensional data like volumes or videos, incorporating position information on three spatial or temporal axes.
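
One straightforward construction, offered as an assumed recipe rather than a canonical one: build an independent sinusoidal table per axis and concatenate the three, which requires d_model divisible by 3 (with each third even):

```python
import numpy as np

def axis_table(n: int, d: int) -> np.ndarray:
    """Per-axis sinusoidal table (d even)."""
    pos = np.arange(n)[:, None]
    angles = pos / (10000.0 ** (np.arange(0, d, 2)[None, :] / d))
    table = np.zeros((n, d))
    table[:, 0::2], table[:, 1::2] = np.sin(angles), np.cos(angles)
    return table

def positional_encoding_3d(nx: int, ny: int, nz: int, d_model: int) -> np.ndarray:
    """One vector per (x, y, z) location: concatenation of per-axis sinusoids."""
    d = d_model // 3                           # d_model divisible by 3, d even
    px, py, pz = axis_table(nx, d), axis_table(ny, d), axis_table(nz, d)
    grid = np.zeros((nx, ny, nz, 3 * d))
    grid[..., :d] = px[:, None, None, :]       # x-axis component
    grid[..., d:2 * d] = py[None, :, None, :]  # y-axis component
    grid[..., 2 * d:] = pz[None, None, :, :]   # z-axis component
    return grid
```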

Complex Positional Encoding

Advanced variant using complex numbers to represent positions, enabling richer modeling of spatial relationships and multiple frequencies.
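
A small sketch of one such representation: unit-magnitude phasors exp(i·pos·ω_k) whose phases rotate at a different frequency per dimension (a connection often noted with RoPE, which can be read as the real-valued form of this rotation):

```python
import numpy as np

def complex_positional_encoding(seq_len: int, d_half: int) -> np.ndarray:
    """Unit phasors exp(i * pos * omega_k): one complex entry per (pos, freq)."""
    pos = np.arange(seq_len)[:, None]                          # (L, 1)
    omega = 10000.0 ** (-np.arange(d_half)[None, :] / d_half)  # (1, d_half)
    return np.exp(1j * pos * omega)                            # complex (L, d_half)
```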

Hierarchical Positional Encoding

Structured approach that encodes positions at multiple levels of granularity, capturing both local and global positions in the sequence.
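
A hedged sketch with two levels of granularity: a coarse segment index plus a fine within-segment offset, summed (the level names and sizes are illustrative):

```python
import torch
import torch.nn as nn

class HierarchicalPositionalEncoding(nn.Module):
    """Coarse (segment) plus fine (within-segment) embeddings, summed."""
    def __init__(self, max_segments: int = 64, max_offset: int = 128,
                 d_model: int = 256):
        super().__init__()
        self.segment = nn.Embedding(max_segments, d_model)  # global granularity
        self.offset = nn.Embedding(max_offset, d_model)     # local granularity

    def forward(self, segment_ids: torch.Tensor, offset_ids: torch.Tensor):
        # both inputs: (batch, seq_len) integer indices
        return self.segment(segment_ids) + self.offset(offset_ids)
```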

T5 Positional Encoding

Specific implementation in the T5 architecture using simplified relative position biases: learned scalar offsets added to the attention logits according to bucketed token distances, designed to simplify the architecture while maintaining performance.

XLNet Relative Positional Encoding

Sophisticated mechanism in XLNet that models relative distances between tokens in attention computation, enabling better generalization across different sequence lengths.

DeBERTa Disentangled Attention

Innovation in DeBERTa that explicitly separates content and position in the attention mechanism, using disentangled content and relative-position representations to improve representation quality.

Longformer Positional Encoding

Positional encoding system adapted for long sequences, pairing position embeddings extended to longer maximum lengths with the model's combination of local sliding-window and global attention.

Reformer Locality Sensitive Hashing

Specialized technique in Reformer that uses locality-sensitive hashing to bucket similar queries and keys, reducing the computational complexity of attention over very long sequences (the architecture pairs this with axial positional encodings).
