
AI Glossary

A complete glossary of artificial intelligence terms

162 categories · 2,032 subcategories · 23,060 terms

Auto-regression

Generation process where each token is predicted sequentially based on all previous tokens, enabling progressive and coherent text construction.
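
A minimal Python sketch of this decoding loop. The `predict_next_token` function is a hypothetical placeholder for a real model call; the toy vocabulary and random choice are purely illustrative.

```python
# Minimal sketch of autoregressive decoding: each new token is chosen
# conditioned on all tokens generated so far.
# `predict_next_token` is a hypothetical stand-in for a real model call.
import random

def predict_next_token(tokens):
    # Placeholder: a real model would return a distribution over the
    # vocabulary computed from the whole prefix `tokens`.
    vocab = ["the", "cat", "sat", "on", "mat", "<eos>"]
    return random.choice(vocab)

def generate(prompt_tokens, max_new_tokens=10):
    tokens = list(prompt_tokens)
    for _ in range(max_new_tokens):
        next_token = predict_next_token(tokens)  # conditioned on the full prefix
        tokens.append(next_token)
        if next_token == "<eos>":
            break
    return tokens

print(generate(["the", "cat"]))
```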


Decoder-Only Architecture

Transformer model structure that eliminates encoders to focus solely on the decoder, optimized for text generation using masked attention to prevent future information leakage.
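
A structure-only Python skeleton (not a working model) showing what "decoder-only" means in practice: stacked blocks of masked self-attention plus a feed-forward layer, with no encoder and therefore no cross-attention. All layer names here are hypothetical placeholders.

```python
# Structural sketch only: a decoder-only Transformer stacks identical blocks
# of masked self-attention + feed-forward layers; there is no encoder and
# no cross-attention. The `...` attributes are placeholders, not real layers.
class DecoderBlock:
    def __init__(self):
        self.masked_self_attention = ...   # attends only to earlier positions
        self.feed_forward = ...            # position-wise MLP
        # Note: no cross-attention layer, because there is no encoder output.

class DecoderOnlyModel:
    def __init__(self, num_layers):
        self.token_embedding = ...
        self.blocks = [DecoderBlock() for _ in range(num_layers)]
        self.output_projection = ...       # maps hidden states to vocabulary logits
```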


Multi-Head Attention Mechanism

Technique allowing the model to simultaneously focus on different positions in the input sequence through multiple independent attention heads, capturing various types of dependencies.
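
A compact numpy sketch of the idea, assuming numpy is available. The random projection matrices stand in for learned weights, and the sequence length, model width, and number of heads are illustrative values, not tied to any specific model.

```python
# Minimal multi-head attention: split the model dimension into independent
# heads, let each head compute its own attention pattern, then recombine.
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def multi_head_attention(x, num_heads, rng):
    seq_len, d_model = x.shape
    d_head = d_model // num_heads
    # Random projections stand in for learned Q, K, V and output weights.
    w_q, w_k, w_v, w_o = (rng.standard_normal((d_model, d_model)) * 0.02
                          for _ in range(4))
    q, k, v = x @ w_q, x @ w_k, x @ w_v
    # Reshape to (heads, seq, d_head) so every head works independently.
    split = lambda t: t.reshape(seq_len, num_heads, d_head).transpose(1, 0, 2)
    q, k, v = split(q), split(k), split(v)
    # Scaled dot-product attention, computed separately per head.
    scores = q @ k.transpose(0, 2, 1) / np.sqrt(d_head)
    weights = softmax(scores, axis=-1)
    heads = weights @ v                                   # (heads, seq, d_head)
    # Concatenate the heads and mix them with the output projection.
    concat = heads.transpose(1, 0, 2).reshape(seq_len, d_model)
    return concat @ w_o

rng = np.random.default_rng(0)
x = rng.standard_normal((5, 16))                          # 5 tokens, width 16
print(multi_head_attention(x, num_heads=4, rng=rng).shape)  # (5, 16)
```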


BPE Tokenization

Byte-Pair Encoding algorithm that segments text into optimal subwords, balancing vocabulary size and semantic coverage for efficient natural language processing.
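
A toy training sketch of the core BPE idea (a common textbook formulation, not any specific library's API): repeatedly merge the most frequent adjacent pair of symbols into a new subword. The word list and number of merges are made up for illustration.

```python
# Toy BPE training: start from characters and greedily merge the most
# frequent adjacent symbol pair until the merge budget is used up.
from collections import Counter

def train_bpe(words, num_merges):
    corpus = [list(w) for w in words]          # character-level start
    merges = []
    for _ in range(num_merges):
        pairs = Counter()
        for symbols in corpus:
            pairs.update(zip(symbols, symbols[1:]))
        if not pairs:
            break
        best = max(pairs, key=pairs.get)       # most frequent adjacent pair
        merges.append(best)
        merged = best[0] + best[1]
        new_corpus = []
        for symbols in corpus:
            out, i = [], 0
            while i < len(symbols):
                # Replace every occurrence of the best pair with the new symbol.
                if i + 1 < len(symbols) and (symbols[i], symbols[i + 1]) == best:
                    out.append(merged)
                    i += 2
                else:
                    out.append(symbols[i])
                    i += 1
            new_corpus.append(out)
        corpus = new_corpus
    return merges, corpus

merges, corpus = train_bpe(["lower", "lowest", "newer", "wider"], num_merges=5)
print(merges)   # learned merge rules, applied in this order at tokenization time
print(corpus)   # words segmented into the learned subword units
```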


Causal Attention Mask

Binary matrix applied during attention to prevent each position from attending to future positions, thus preserving the causal nature of text generation.
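
A small numpy sketch of building and applying such a mask, assuming numpy is available; the sequence length and random scores are illustrative.

```python
# Causal (lower-triangular) mask: position i may attend only to positions <= i.
# Masked scores are set to -inf before the softmax, so they get zero weight.
import numpy as np

seq_len = 5
scores = np.random.default_rng(0).standard_normal((seq_len, seq_len))

# allowed[i, j] is True only when j <= i (no looking ahead).
allowed = np.tril(np.ones((seq_len, seq_len), dtype=bool))
masked_scores = np.where(allowed, scores, -np.inf)

# Row-wise softmax: future positions end up with exactly zero attention weight.
weights = np.exp(masked_scores - masked_scores.max(axis=-1, keepdims=True))
weights /= weights.sum(axis=-1, keepdims=True)
print(np.round(weights, 2))  # upper triangle is all zeros
```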


Model Parameters

Trainable weights of the neural network, whose number characterizes the model's capacity, ranging from millions to billions depending on the desired complexity and performance.
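
A back-of-the-envelope sketch of where those counts come from, using common approximations for a decoder-only Transformer (attention ≈ 4·d² and MLP ≈ 8·d² weights per layer, plus the embedding table). The example sizes are illustrative, not a specific published model.

```python
# Rough parameter count for a decoder-only Transformer under the usual
# approximations; ignores biases, layer norms, and positional embeddings.
def approx_param_count(num_layers, d_model, vocab_size):
    attention = 4 * d_model * d_model      # Q, K, V and output projections
    mlp = 8 * d_model * d_model            # two linear layers with 4x expansion
    per_layer = attention + mlp
    embeddings = vocab_size * d_model      # token embedding table
    return num_layers * per_layer + embeddings

# Example: 12 layers, width 768, 50k vocabulary -> roughly 1.2e8 parameters.
print(f"{approx_param_count(12, 768, 50_000):,}")
```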


Temperature Sampling

Parameter controlling the degree of randomness in generation, where high values increase diversity and low values favor safer and more coherent predictions.
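
A small numpy sketch of the mechanism, assuming numpy is available: logits are divided by the temperature before the softmax, so T > 1 flattens the distribution and T < 1 sharpens it. The logits below are made-up values.

```python
# Temperature sampling: scale logits by 1/T, renormalize, then sample.
import numpy as np

def sample_with_temperature(logits, temperature, rng):
    scaled = np.asarray(logits, dtype=float) / temperature
    probs = np.exp(scaled - scaled.max())     # numerically stable softmax
    probs /= probs.sum()
    return rng.choice(len(probs), p=probs), probs

rng = np.random.default_rng(0)
logits = [2.0, 1.0, 0.5, -1.0]
for t in (0.5, 1.0, 2.0):
    token, probs = sample_with_temperature(logits, t, rng)
    print(f"T={t}: probs={np.round(probs, 2)}, sampled index={token}")
```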


Context Window

Maximum number of tokens the model can consider simultaneously during generation, determining its ability to maintain coherence over long texts.
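
A minimal sketch of the practical consequence: once the prompt exceeds the window, only the most recent tokens fit, a common truncation strategy shown here with illustrative values.

```python
# Fixed context window: keep only the most recent `context_window` tokens.
def fit_to_context(tokens, context_window):
    if len(tokens) <= context_window:
        return tokens
    # Oldest tokens fall out of the window; coherence with them can be lost.
    return tokens[-context_window:]

history = [f"tok{i}" for i in range(10)]
print(fit_to_context(history, context_window=4))  # ['tok6', 'tok7', 'tok8', 'tok9']
```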
