AI Glossary

The Complete Artificial Intelligence Dictionary

162 Categories · 2,032 Subcategories · 23,060 Terms
📖 Auto-regression

Generation process where each token is predicted sequentially based on all previous tokens, enabling progressive and coherent text construction.
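
Below is a minimal sketch of this loop in plain Python, assuming a hypothetical `next_token_logits` stand-in (a real model replaces it with a neural network). The key property is that every step conditions on all tokens produced so far.

```python
# Minimal auto-regressive generation sketch. `next_token_logits` is a
# hypothetical stand-in for a real language model.
import random

VOCAB = ["the", "cat", "sat", "on", "mat", "<eos>"]

def next_token_logits(tokens):
    # Toy scorer: deterministic pseudo-random scores per prefix length.
    rng = random.Random(len(tokens))
    return [rng.random() for _ in VOCAB]

def generate(prompt, max_new_tokens=5):
    tokens = list(prompt)
    for _ in range(max_new_tokens):
        logits = next_token_logits(tokens)        # condition on ALL previous tokens
        token = VOCAB[logits.index(max(logits))]  # greedy pick of the best score
        if token == "<eos>":
            break
        tokens.append(token)                      # the new token feeds the next step
    return tokens

print(generate(["the"]))
```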

📖 Decoder-Only Architecture

Transformer structure that drops the encoder entirely and keeps only the decoder, optimized for text generation and using masked self-attention to prevent leakage of future information.
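
A minimal decoder block sketched with PyTorch under assumed sizes (`d_model=64`, 4 heads): there is no encoder and no cross-attention, only masked self-attention over the sequence itself followed by a feed-forward network.

```python
# Decoder-only block sketch in PyTorch: masked self-attention + FFN,
# no encoder, no cross-attention. Sizes are illustrative assumptions.
import torch
import torch.nn as nn

class DecoderBlock(nn.Module):
    def __init__(self, d_model=64, n_heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.ffn = nn.Sequential(
            nn.Linear(d_model, 4 * d_model), nn.GELU(),
            nn.Linear(4 * d_model, d_model),
        )
        self.ln1 = nn.LayerNorm(d_model)
        self.ln2 = nn.LayerNorm(d_model)

    def forward(self, x):
        t = x.size(1)
        # Boolean causal mask: True = "may not attend" (future positions).
        mask = torch.triu(torch.ones(t, t, dtype=torch.bool), diagonal=1)
        h = self.ln1(x)
        attn_out, _ = self.attn(h, h, h, attn_mask=mask)  # self-attention only
        x = x + attn_out                                  # residual connection
        x = x + self.ffn(self.ln2(x))                     # residual connection
        return x

x = torch.randn(1, 10, 64)      # (batch, sequence, d_model)
print(DecoderBlock()(x).shape)  # torch.Size([1, 10, 64])
```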

📖 Multi-Head Attention Mechanism

Technique allowing the model to simultaneously focus on different positions in the input sequence through multiple independent attention heads, capturing various types of dependencies.
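
A NumPy sketch with assumed shapes (sequence 10, `d_model` 16, 4 heads): the projections are split into independent heads, each head computes its own attention pattern, and the results are concatenated and mixed by an output projection.

```python
# NumPy sketch of multi-head attention (shapes and head count are
# illustrative assumptions).
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def multi_head_attention(x, w_q, w_k, w_v, w_o, n_heads):
    seq, d_model = x.shape
    d_head = d_model // n_heads
    # Project, then split the feature dimension into independent heads.
    q = (x @ w_q).reshape(seq, n_heads, d_head).transpose(1, 0, 2)
    k = (x @ w_k).reshape(seq, n_heads, d_head).transpose(1, 0, 2)
    v = (x @ w_v).reshape(seq, n_heads, d_head).transpose(1, 0, 2)
    # Each head computes its own attention pattern over the sequence.
    scores = q @ k.transpose(0, 2, 1) / np.sqrt(d_head)  # (heads, seq, seq)
    out = softmax(scores) @ v                            # (heads, seq, d_head)
    # Concatenate the heads and mix them with the output projection.
    out = out.transpose(1, 0, 2).reshape(seq, d_model)
    return out @ w_o

rng = np.random.default_rng(0)
x = rng.normal(size=(10, 16))
w_q, w_k, w_v, w_o = (rng.normal(size=(16, 16)) * 0.1 for _ in range(4))
print(multi_head_attention(x, w_q, w_k, w_v, w_o, n_heads=4).shape)  # (10, 16)
```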

📖 BPE Tokenization

Byte-Pair Encoding algorithm that segments text into subword units by iteratively merging the most frequent symbol pairs, balancing vocabulary size and coverage for efficient natural language processing.
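
A minimal sketch of the merge-learning loop on a toy corpus (word frequencies are illustrative assumptions): each iteration counts adjacent symbol pairs across the corpus and fuses the most frequent pair into a new subword.

```python
# Minimal BPE merge-learning loop. Words start as space-separated
# character symbols; frequencies are illustrative assumptions.
from collections import Counter

def get_pairs(words):
    pairs = Counter()
    for word, freq in words.items():
        symbols = word.split()
        for a, b in zip(symbols, symbols[1:]):
            pairs[(a, b)] += freq
    return pairs

def merge_pair(pair, words):
    # Fuse the chosen pair into a single subword symbol. (Real
    # implementations match on symbol boundaries; plain replace is
    # fine for this toy corpus.)
    return {w.replace(" ".join(pair), "".join(pair)): f for f_w, f in ((w, f) for w, f in words.items()) for w in [f_w]}

words = {"l o w": 5, "l o w e r": 2, "n e w e s t": 6, "w i d e s t": 3}
for step in range(5):
    pairs = get_pairs(words)
    best = max(pairs, key=pairs.get)  # most frequent adjacent pair wins
    words = merge_pair(best, words)
    print(f"merge {step + 1}: {best}")
print(list(words))  # words now segmented into learned subwords
```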

📖 Causal Attention Mask

Binary matrix applied during attention to prevent each position from attending to future positions, thus preserving the causal nature of text generation.
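
A NumPy sketch of building and applying the mask, with an assumed sequence length of 4: positions above the diagonal (column index greater than row index) are future tokens, and setting their scores to negative infinity makes their softmax weight exactly zero.

```python
# Build and apply a causal mask to a toy score matrix (sequence length
# is an illustrative assumption).
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

seq = 4
scores = np.random.default_rng(0).normal(size=(seq, seq))

# True above the diagonal marks future positions (column j > row i);
# setting them to -inf zeroes their softmax weight.
mask = np.triu(np.ones((seq, seq), dtype=bool), k=1)
weights = softmax(np.where(mask, -np.inf, scores))
print(weights.round(2))  # row i has nonzero weights only for columns <= i
```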

📖 Model Parameters

Trainable weights of the neural network, whose number characterizes the model's capacity, ranging from millions to billions depending on the desired complexity and performance.
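
A short sketch of counting trainable parameters with PyTorch for an assumed toy network; each `nn.Linear` contributes `in_features × out_features` weights plus `out_features` biases.

```python
# Count trainable parameters of a toy two-layer network (sizes are
# illustrative assumptions, not any particular model's configuration).
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(512, 2048),  # 512*2048 weights + 2048 biases
    nn.GELU(),             # activations have no parameters
    nn.Linear(2048, 512),  # 2048*512 weights + 512 biases
)
n_params = sum(p.numel() for p in model.parameters() if p.requires_grad)
print(f"{n_params:,} trainable parameters")  # 2,099,712
```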

📖 Temperature Sampling

Parameter controlling the degree of randomness in generation: high values increase diversity, while low values concentrate probability on the most likely tokens, producing safer and more predictable output.
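
A sketch with assumed logit values: dividing the logits by the temperature before the softmax sharpens (T < 1) or flattens (T > 1) the resulting distribution.

```python
# Temperature-scaled sampling from a toy logit vector (values are
# illustrative assumptions).
import numpy as np

def sample(logits, temperature, rng):
    z = np.asarray(logits, dtype=float) / temperature  # scale before softmax
    p = np.exp(z - z.max())
    p /= p.sum()
    return rng.choice(len(p), p=p)

rng = np.random.default_rng(0)
logits = [2.0, 1.0, 0.2, -1.0]
for t in (0.2, 1.0, 2.0):
    draws = [sample(logits, t, rng) for _ in range(1000)]
    print(f"T={t}: {np.bincount(draws, minlength=len(logits))}")
```

As the temperature approaches 0 this converges to greedy decoding (always the top logit), while very high temperatures approach uniform sampling over the vocabulary.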

📖 Context Window

Maximum number of tokens the model can consider simultaneously during generation, determining its ability to maintain coherence over long texts.
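
A sketch of enforcing a sliding window over a growing history; `CONTEXT_WINDOW` and the token lists are illustrative assumptions (real windows span thousands to millions of tokens). Once the history exceeds the window, the oldest tokens fall out and can no longer influence generation.

```python
# Sliding context window sketch. CONTEXT_WINDOW and the token lists are
# illustrative assumptions.
CONTEXT_WINDOW = 8

history = []
for turn, new_tokens in enumerate([["a"] * 3, ["b"] * 4, ["c"] * 5]):
    history.extend(new_tokens)
    if len(history) > CONTEXT_WINDOW:
        # Tokens pushed past the window can no longer influence generation.
        history = history[-CONTEXT_WINDOW:]
    print(f"turn {turn}: {len(history)} tokens visible -> {history}")
```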
