
AI Glossary

The complete dictionary of Artificial Intelligence

162 categories · 2,032 subcategories · 23,060 terms
FastText

Extension of Word2Vec, developed by Facebook, that represents each word as the sum of its character n-gram vectors, allowing it to handle out-of-vocabulary words and morphologically complex languages.
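A toy sketch of the FastText idea (illustrative, not the real library): a word vector is the sum of its character n-gram vectors, so even an unseen word gets a representation from the n-grams it shares with known words. The n-gram table below is hypothetical.

```python
def char_ngrams(word, n=3):
    """Return character n-grams of a word with boundary markers < and >."""
    padded = f"<{word}>"
    return [padded[i:i + n] for i in range(len(padded) - n + 1)]

def word_vector(word, ngram_vectors, dim=4):
    """Sum the vectors of the word's n-grams; unknown n-grams contribute zero."""
    vec = [0.0] * dim
    for ng in char_ngrams(word):
        for i, x in enumerate(ngram_vectors.get(ng, [0.0] * dim)):
            vec[i] += x
    return vec

# Hypothetical tiny n-gram table (in FastText these vectors are learned).
table = {"<pl": [1, 0, 0, 0], "pla": [0, 1, 0, 0],
         "lay": [0, 0, 1, 0], "ay>": [0, 0, 0, 1]}

print(char_ngrams("play"))            # ['<pl', 'pla', 'lay', 'ay>']
print(word_vector("play", table))     # [1.0, 1.0, 1.0, 1.0]
# "playing" is out of vocabulary but shares '<pl', 'pla', 'lay' with "play":
print(word_vector("playing", table))  # [1.0, 1.0, 1.0, 0.0]
```

This is why FastText degrades gracefully on rare and misspelled words: overlapping n-grams carry most of the signal.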

Contextual Embeddings

Dynamic vector representations whose values change according to the usage context of the word, unlike static embeddings that assign a unique vector per word.

Static Embeddings

Fixed vector representations where each word has a single vector representation independent of its context, as in classic Word2Vec or GloVe.

Skip-gram

Training architecture that predicts context words from a central word; it excels at capturing semantic relationships even with small corpora.
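The training signal skip-gram learns from can be sketched as (center, context) pair extraction over a sliding window; the window size here is an illustrative choice, not a fixed standard.

```python
def skipgram_pairs(tokens, window=2):
    """Emit (center, context) training pairs within a symmetric window."""
    pairs = []
    for i, center in enumerate(tokens):
        for j in range(max(0, i - window), min(len(tokens), i + window + 1)):
            if j != i:
                pairs.append((center, tokens[j]))
    return pairs

sentence = ["the", "cat", "sat", "down"]
print(skipgram_pairs(sentence, window=1))
# [('the', 'cat'), ('cat', 'the'), ('cat', 'sat'),
#  ('sat', 'cat'), ('sat', 'down'), ('down', 'sat')]
```

The model is then trained to make the context word likely given the center word's vector.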

CBOW

Continuous Bag of Words, a model that predicts a central word from the sum (in practice often the average) of its context word vectors; efficient for training on large corpora.
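CBOW is the mirror image of skip-gram: each training example pairs the surrounding context with the center word to predict. A minimal sketch of the example extraction, with a hypothetical window size:

```python
def cbow_examples(tokens, window=1):
    """Emit (context_words, center) training examples for CBOW."""
    examples = []
    for i, center in enumerate(tokens):
        context = tokens[max(0, i - window):i] + tokens[i + 1:i + window + 1]
        if context:
            examples.append((context, center))
    return examples

print(cbow_examples(["the", "cat", "sat"]))
# [(['cat'], 'the'), (['the', 'sat'], 'cat'), (['cat'], 'sat')]
```

During training, the context vectors are combined into one input vector, which makes each update cheap; this is what makes CBOW fast on large corpora.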

Subword Embeddings

Vector representation technique that decomposes words into smaller units (characters, morphemes) to handle open vocabulary and capture morphological information.

ELMo

Embeddings from Language Models, approach that generates contextual embeddings by combining hidden states of bidirectional LSTM networks pretrained on vast corpora.

Sentence Embeddings

Vector representations that encode entire sentences into single fixed-size vectors, capturing global meaning and semantic structure at the sentence level.
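A common simple baseline (not the only method): average the word vectors so the whole sentence maps to one fixed-size vector. The tiny 2-d word embeddings below are hypothetical.

```python
def mean_pool(vectors):
    """Average a list of equal-length word vectors into one sentence vector."""
    dim = len(vectors[0])
    return [sum(v[i] for v in vectors) / len(vectors) for i in range(dim)]

word_vecs = {"good": [1.0, 0.0], "movie": [0.0, 1.0]}  # hypothetical embeddings
sentence = ["good", "movie"]
print(mean_pool([word_vecs[w] for w in sentence]))  # [0.5, 0.5]
```

Dedicated models such as the Universal Sentence Encoder learn much richer sentence vectors, but mean pooling remains a surprisingly strong baseline.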

Doc2Vec

Extension of Word2Vec that generates embeddings for entire documents by introducing a document identifier as additional context during training.

Universal Sentence Encoder

Google model that transforms texts into high-dimensional embeddings, optimized for semantic similarity and text classification tasks.

RoBERTa

Robustly Optimized BERT Pretraining Approach, improved version of BERT with longer pre-training on more data and optimized hyperparameters.

Embedding Layer

First layer of NLP neural networks that transforms token indices into dense vectors, learning these representations during training.
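At its core, an embedding layer is a trainable lookup table: token index in, dense row out. A sketch with a plain list of lists standing in for the learned weight matrix (the vocabulary and values are hypothetical):

```python
weights = [               # hypothetical 3-token vocabulary, 2-d embeddings
    [0.1, 0.2],           # token id 0
    [0.3, 0.4],           # token id 1
    [0.5, 0.6],           # token id 2
]

def embed(token_ids, weights):
    """Map a sequence of token indices to their dense vectors."""
    return [weights[t] for t in token_ids]

print(embed([2, 0], weights))  # [[0.5, 0.6], [0.1, 0.2]]
```

In a real framework the rows of this table are parameters updated by backpropagation, which is how the representations are learned during training.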

Vector Space Model

Algebraic representation where words are points in a multidimensional space, allowing mathematical operations to measure semantic similarities.
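In a vector space model, semantic similarity becomes geometry: the cosine of the angle between two vectors. A minimal sketch with hypothetical 2-d vectors:

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Hypothetical vectors: "cat" and "dog" point in similar directions, "car" does not.
cat, dog, car = [1.0, 0.9], [0.9, 1.0], [1.0, -0.8]
print(cosine(cat, dog))  # close to 1 -> semantically similar
print(cosine(cat, car))  # much lower -> dissimilar
```

Cosine similarity is preferred over raw distance here because it ignores vector magnitude and compares direction only.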
