AI Glossary

The complete glossary of Artificial Intelligence

162 categories · 2,032 subcategories · 23,060 terms

Byte Pair Encoding (BPE)

A data compression algorithm adapted for tokenization that iteratively merges the most frequent character pairs to create an optimized subword vocabulary.
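
A toy sketch of the merge loop, assuming a made-up word-frequency corpus: count adjacent symbol pairs weighted by word frequency, merge the most frequent pair, and repeat. It is illustrative only, not a production implementation.

from collections import Counter

corpus = {"l o w": 5, "l o w e r": 2, "n e w e s t": 6, "w i d e s t": 3}

def count_pairs(corpus):
    pairs = Counter()
    for word, freq in corpus.items():
        symbols = word.split()
        for pair in zip(symbols, symbols[1:]):
            pairs[pair] += freq
    return pairs

def merge_pair(pair, corpus):
    merged = {}
    for word, freq in corpus.items():
        symbols, out, i = word.split(), [], 0
        while i < len(symbols):
            if i + 1 < len(symbols) and (symbols[i], symbols[i + 1]) == pair:
                out.append(symbols[i] + symbols[i + 1])   # merge the pair into one symbol
                i += 2
            else:
                out.append(symbols[i])
                i += 1
        merged[" ".join(out)] = freq
    return merged

for step in range(10):                       # each merge adds one subword unit to the vocabulary
    best = count_pairs(corpus).most_common(1)[0][0]
    corpus = merge_pair(best, corpus)
    print(step, best)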

WordPiece

A BPE variant developed by Google that chooses merges by how much they increase the likelihood of the training data rather than by raw pair frequency, notably used in BERT models and their variants.
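
At encoding time, WordPiece tokenizers typically apply greedy longest-match-first segmentation with a "##" continuation prefix. A minimal sketch, assuming a toy vocabulary:

def wordpiece_encode(word, vocab):
    tokens, start = [], 0
    while start < len(word):
        end, piece = len(word), None
        while start < end:                    # try the longest substring first
            sub = word[start:end]
            if start > 0:
                sub = "##" + sub              # mark word-internal continuation
            if sub in vocab:
                piece = sub
                break
            end -= 1
        if piece is None:
            return ["[UNK]"]                  # the word cannot be segmented
        tokens.append(piece)
        start = end
    return tokens

vocab = {"[UNK]", "un", "##aff", "##able"}
print(wordpiece_encode("unaffable", vocab))   # ['un', '##aff', '##able']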

Unigram Language Model

A tokenization approach based on a unigram language model that selects the best segmentation by maximizing the product probability of tokens in the sequence.
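
A sketch of the segmentation step as a Viterbi search over log-probabilities; the toy vocabulary and probability values here are assumptions for illustration.

import math

log_prob = {"h": -6.0, "e": -6.0, "l": -6.0, "o": -6.0,
            "he": -4.0, "ll": -4.5, "hell": -3.0, "hello": -2.5, "lo": -4.0}

def segment(text, log_prob):
    n = len(text)
    best = [-math.inf] * (n + 1)     # best[i] = best log-prob of text[:i]
    back = [0] * (n + 1)
    best[0] = 0.0
    for i in range(1, n + 1):
        for j in range(i):
            piece = text[j:i]
            if piece in log_prob and best[j] + log_prob[piece] > best[i]:
                best[i] = best[j] + log_prob[piece]
                back[i] = j
    pieces, i = [], n
    while i > 0:                     # backtrack along the best path
        pieces.append(text[back[i]:i])
        i = back[i]
    return pieces[::-1], best[n]

print(segment("hello", log_prob))    # (['hello'], -2.5): the single token wins here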

SentencePiece

A language-independent tokenization library that treats text as a raw Unicode sequence, eliminating the need for language-specific preprocessing.
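
A minimal usage sketch of the sentencepiece Python package; the corpus file, model prefix, and vocabulary size are placeholder values.

import sentencepiece as spm

spm.SentencePieceTrainer.train(
    input="corpus.txt", model_prefix="spm_demo",
    vocab_size=8000, model_type="unigram",
)
sp = spm.SentencePieceProcessor(model_file="spm_demo.model")
print(sp.encode("Tokenization works on raw text.", out_type=str))   # subword pieces
print(sp.encode("Tokenization works on raw text.", out_type=int))   # token ids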

Vocabulary Size

A critical parameter determining the total number of unique tokens in a model's vocabulary, directly influencing model size and its ability to handle linguistic diversity.
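
A back-of-the-envelope illustration of the size impact (the numbers are arbitrary assumptions): the input embedding table alone holds vocab_size × hidden_dim parameters.

vocab_size = 32_000      # assumed vocabulary size
hidden_dim = 4_096       # assumed embedding dimension
embedding_params = vocab_size * hidden_dim
print(f"input embedding table alone: {embedding_params:,} parameters")   # 131,072,000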

Special Tokens

Reserved tokens like [CLS], [SEP], [MASK], [PAD] used to delimit sequences, mask elements, or pad batches to a uniform length.
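
An illustrative look at these tokens via the Hugging Face transformers package; this downloads bert-base-uncased, and the commented output is what that tokenizer typically produces.

from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
enc = tok("How are you?", "Fine, thanks.", padding="max_length", max_length=12)
print(tok.convert_ids_to_tokens(enc["input_ids"]))
# typically: ['[CLS]', 'how', 'are', 'you', '?', '[SEP]', 'fine', ',', 'thanks', '.', '[SEP]', '[PAD]']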

Tokenizer Training

The process of learning a vocabulary and segmentation rules from a text corpus, optimizing the representation for a specific task or domain.
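
A sketch of training a BPE tokenizer with the Hugging Face tokenizers library; the corpus file and hyperparameters are placeholders.

from tokenizers import Tokenizer, models, pre_tokenizers, trainers

tokenizer = Tokenizer(models.BPE(unk_token="[UNK]"))
tokenizer.pre_tokenizer = pre_tokenizers.Whitespace()
trainer = trainers.BpeTrainer(
    vocab_size=16_000,
    special_tokens=["[UNK]", "[CLS]", "[SEP]", "[PAD]", "[MASK]"],
)
tokenizer.train(files=["corpus.txt"], trainer=trainer)   # learns merges from the corpus
tokenizer.save("tokenizer.json")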

Subword Regularization

A data augmentation technique applying different possible segmentations of the same text during training, improving model robustness and generalization.
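
A sketch using sentencepiece's sampling mode, which draws a different segmentation of the same text on each call; it assumes the spm_demo.model trained in the SentencePiece entry above.

import sentencepiece as spm

sp = spm.SentencePieceProcessor(model_file="spm_demo.model")
for _ in range(3):
    # sampling draws one segmentation from the n-best candidates instead of the single best
    print(sp.encode("subword regularization", out_type=str,
                    enable_sampling=True, alpha=0.1, nbest_size=-1))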

Vocabulary Truncation

Process of limiting the vocabulary to the N most frequent tokens, replacing less frequent tokens with subwords or an [UNK] token to optimize computational efficiency.
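
A minimal sketch, assuming a toy corpus: keep the N most frequent tokens and map everything else to [UNK] (no subword fallback shown).

from collections import Counter

def build_vocab(tokenized_corpus, max_size):
    counts = Counter(tok for sent in tokenized_corpus for tok in sent)
    keep = [tok for tok, _ in counts.most_common(max_size - 1)]
    return {"[UNK]": 0, **{tok: i + 1 for i, tok in enumerate(keep)}}

def encode(sentence, vocab):
    return [vocab.get(tok, vocab["[UNK]"]) for tok in sentence]

corpus = [["the", "cat", "sat"], ["the", "dog", "sat"], ["the", "cat", "ran"]]
vocab = build_vocab(corpus, max_size=4)          # [UNK] + the 3 most frequent tokens
print(encode(["the", "bird", "sat"], vocab))     # "bird" falls back to [UNK]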

Tokenization Pipeline

Sequential chain of preprocessing steps including normalization, pre-tokenization, model segmentation, and post-processing to produce the final tokens.
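
A sketch of the four stages wired together with the Hugging Face tokenizers library; the specific component choices are illustrative, not a recommended recipe.

from tokenizers import Tokenizer, models, normalizers, pre_tokenizers, processors

tokenizer = Tokenizer(models.BPE(unk_token="[UNK]"))        # 3. model: subword segmentation
tokenizer.normalizer = normalizers.Sequence(                # 1. normalization
    [normalizers.NFD(), normalizers.Lowercase(), normalizers.StripAccents()]
)
tokenizer.pre_tokenizer = pre_tokenizers.Whitespace()       # 2. pre-tokenization
tokenizer.post_processor = processors.TemplateProcessing(   # 4. post-processing
    single="[CLS] $A [SEP]",
    special_tokens=[("[CLS]", 1), ("[SEP]", 2)],
)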

Tokenizer Config

JSON configuration file containing all the hyperparameters and metadata necessary to exactly reproduce the behavior of a specific tokenizer.
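
An illustrative subset of the fields such a file might contain, modeled loosely on Hugging Face's tokenizer_config.json; all values are placeholders.

import json

config = {
    "tokenizer_class": "BertTokenizer",
    "do_lower_case": True,
    "model_max_length": 512,
    "unk_token": "[UNK]",
    "cls_token": "[CLS]",
    "sep_token": "[SEP]",
    "pad_token": "[PAD]",
    "mask_token": "[MASK]",
}
with open("tokenizer_config.json", "w") as f:
    json.dump(config, f, indent=2)       # write the config alongside the tokenizer files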

Fast Tokenizers

Optimized tokenizer implementations using Rust and efficient data structures, offering 10-100x better performance than pure Python implementations.
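
An illustrative check with the Hugging Face transformers package; the model name is only an example, and offset mappings are one of the features exposed by the fast implementations.

from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("bert-base-uncased", use_fast=True)
print(tok.is_fast)                               # True -> Rust-backed implementation
enc = tok("Fast tokenizers also expose offsets.", return_offsets_mapping=True)
print(enc["offset_mapping"])                     # character span of each token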

Tokenizer Inference

Phase of applying a trained tokenizer to new text data, converting raw text into token sequences ready for processing by the model.
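
An illustrative round trip with a pretrained tokenizer (transformers package, example model name): raw text in, token ids out, and back again.

from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
ids = tok.encode("Tokenizers turn text into integers.")
print(ids)                 # e.g. [101, ..., 102] with [CLS]/[SEP] added automatically
print(tok.decode(ids))     # "[CLS] tokenizers turn text into integers. [SEP]"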
