
AI Glossary

A complete glossary of artificial intelligence

162 categories · 2,032 subcategories · 23,060 terms

FastText

Extension of Word2Vec developed by Facebook that represents each word as the sum of its character n-gram vectors, allowing handling of out-of-vocabulary words and complex morphologies.
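The n-gram decomposition can be illustrated with a minimal sketch (not FastText's actual implementation; the boundary markers `<` and `>` follow the convention of the original FastText paper):

```python
def char_ngrams(word, n_min=3, n_max=6):
    """Decompose a word into the character n-grams FastText sums over.

    The word is wrapped in boundary markers '<' and '>' so that prefixes
    and suffixes get distinct n-grams."""
    wrapped = f"<{word}>"
    grams = []
    for n in range(n_min, n_max + 1):
        for i in range(len(wrapped) - n + 1):
            grams.append(wrapped[i:i + n])
    grams.append(wrapped)  # FastText also keeps the whole word as one unit
    return grams
```

Because an unseen word still shares n-grams with known words, its vector can be composed from the n-gram vectors, which is how out-of-vocabulary handling works.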


Contextual Embeddings

Dynamic vector representations whose values change with the context in which the word is used, unlike static embeddings, which assign a single fixed vector per word.


Static Embeddings

Fixed vector representations where each word has a single vector representation independent of its context, as in classic Word2Vec or GloVe.


Skip-gram

Training architecture that predicts context words from a central word; it captures semantic relationships well even with small corpora and for rare words.


CBOW

Continuous Bag of Words, a model that predicts a central word from the combined (summed or averaged) vectors of its context words; efficient for training on large corpora.
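The two Word2Vec architectures can be contrasted by the training pairs they consume. A toy sketch (the window size and function name are illustrative, not from any library):

```python
def training_pairs(tokens, window=2):
    """Enumerate (center, context) pairs from a token sequence.

    Skip-gram trains center -> each individual context word;
    CBOW trains the combined context words -> center."""
    pairs = []
    for i, center in enumerate(tokens):
        lo, hi = max(0, i - window), min(len(tokens), i + window + 1)
        context = [tokens[j] for j in range(lo, hi) if j != i]
        pairs.append((center, context))
    return pairs
```

For the sentence "the cat sat on" with `window=1`, the pair for "cat" is `("cat", ["the", "sat"])`: skip-gram would predict "the" and "sat" from "cat", while CBOW would predict "cat" from the sum of the vectors of "the" and "sat".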


Subword Embeddings

Vector representation technique that decomposes words into smaller units (characters, morphemes) to handle open vocabulary and capture morphological information.


ELMo

Embeddings from Language Models, approach that generates contextual embeddings by combining hidden states of bidirectional LSTM networks pretrained on vast corpora.


Sentence Embeddings

Vector representations that encode entire sentences as single vectors, capturing global meaning and semantic structure at the sentence level.
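One common baseline is mean pooling over word vectors; a minimal sketch (dedicated sentence encoders learn much richer aggregations, this only illustrates the idea of collapsing a sentence into one vector):

```python
def mean_pool(word_vectors):
    """Average a sentence's word vectors into a single sentence vector.

    Each input vector must have the same dimensionality; the result
    has that same dimensionality regardless of sentence length."""
    dim = len(word_vectors[0])
    n = len(word_vectors)
    return [sum(v[d] for v in word_vectors) / n for d in range(dim)]
```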


Doc2Vec

Extension of Word2Vec that generates embeddings for entire documents by introducing a document identifier as additional context during training.


Universal Sentence Encoder

Google model that transforms texts into high-dimensional embeddings, optimized for semantic similarity and text classification tasks.


RoBERTa

Robustly Optimized BERT Pretraining Approach, improved version of BERT with longer pre-training on more data and optimized hyperparameters.


Embedding Layer

First layer of NLP neural networks that transforms token indices into dense vectors, learning these representations during training.
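A minimal sketch of the lookup this layer performs (random initialization stands in for learned weights; the class name is illustrative):

```python
import random

class EmbeddingLayer:
    """Toy embedding layer: a table mapping token indices to dense vectors."""

    def __init__(self, vocab_size, dim, seed=0):
        rng = random.Random(seed)
        # In a real network these values are learned during training.
        self.table = [[rng.uniform(-0.1, 0.1) for _ in range(dim)]
                      for _ in range(vocab_size)]

    def __call__(self, token_ids):
        # The "layer" is just a row lookup: token id i -> vector table[i].
        return [self.table[i] for i in token_ids]
```

The same token id always maps to the same vector, and gradients flowing back into those rows are what make the representations improve during training.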


Vector Space Model

Algebraic representation where words are points in a multidimensional space, allowing mathematical operations to measure semantic similarities.
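The mathematical operations in question are typically dot products and cosine similarity between word vectors; a minimal sketch:

```python
import math

def cosine_similarity(u, v):
    """Cosine of the angle between two word vectors: 1.0 for parallel
    (maximally similar) vectors, 0.0 for orthogonal ones."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)
```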
