
AI Glossary

The complete Artificial Intelligence dictionary

162 categories · 2,032 subcategories · 23,060 terms

Metric embedding space

Low-dimensional vector space where distances between points reflect semantic similarities between classes, optimized for few-shot classification tasks.

Class prototype

Central vector representation of a class, calculated as the average of embeddings of support examples of that class in the metric space.
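The prototype computation described above can be sketched in a few lines of NumPy; the embedding values here are hypothetical placeholders for the output of an encoding function.

```python
import numpy as np

# Hypothetical 2-D embeddings of three support examples for one class.
support_embeddings = np.array([
    [1.0, 2.0],
    [3.0, 2.0],
    [2.0, 5.0],
])

# The class prototype is the mean of the support embeddings.
prototype = support_embeddings.mean(axis=0)
print(prototype)  # [2. 3.]
```

In a real model the rows would come from the encoding function applied to each support example, not fixed constants.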

Support set

Subset of training data used to calculate class prototypes during a meta-learning episode, containing a limited number of examples per class.

Query set

Set of examples, disjoint from the support set, used to evaluate and update the model during training; query examples are classified against the prototypes but do not participate in computing them.

N-way K-shot

Few-shot learning paradigm where N represents the number of classes to discriminate and K the number of available examples per class in the support set.
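Sampling one N-way K-shot episode can be sketched as follows; `sample_episode` and the toy dataset are hypothetical illustrations, not part of any library API.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_episode(data_by_class, n_way, k_shot, n_query):
    """Sample an N-way K-shot episode: pick N classes, then K support
    and n_query query examples per class (hypothetical helper)."""
    classes = rng.choice(list(data_by_class), size=n_way, replace=False)
    support, query = {}, {}
    for c in classes:
        idx = rng.permutation(len(data_by_class[c]))
        support[c] = [data_by_class[c][i] for i in idx[:k_shot]]
        query[c] = [data_by_class[c][i] for i in idx[k_shot:k_shot + n_query]]
    return support, query

# Toy dataset: 5 classes with 10 scalar "examples" each.
data = {f"class_{i}": list(range(i * 10, i * 10 + 10)) for i in range(5)}

# A 3-way 2-shot episode with 3 query examples per class.
support, query = sample_episode(data, n_way=3, k_shot=2, n_query=3)
```

Support and query examples for each class are drawn without overlap, matching the Support set and Query set entries above.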

Euclidean distance in embedding

Similarity measure used to classify query examples by calculating their distance to class prototypes in the learned embedding space.
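A minimal sketch of distance-based classification, assuming two precomputed prototypes and one query embedding (values are illustrative):

```python
import numpy as np

prototypes = np.array([[0.0, 0.0], [4.0, 0.0]])  # one prototype per class
query = np.array([3.0, 1.0])

# Squared Euclidean distance from the query to each prototype.
dists = ((prototypes - query) ** 2).sum(axis=1)

# The query is assigned to the class of the nearest prototype.
predicted_class = int(dists.argmin())
print(predicted_class)  # 1
```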

Softmax on distances

Activation function that transforms negated distances to the prototypes into a probability distribution over classes, used to classify query examples.
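A numerically stable sketch of this softmax over negated distances, with illustrative distance values:

```python
import numpy as np

dists = np.array([10.0, 2.0])  # distances to two class prototypes

# Softmax over negative distances: closer prototypes get higher probability.
logits = -dists
probs = np.exp(logits - logits.max())  # subtract max for numerical stability
probs /= probs.sum()
print(probs)
```

The nearer prototype (distance 2.0) receives almost all of the probability mass.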

Meta-learning episode

Training unit simulating a complete few-shot task, including the construction of prototypes from the support set and classification of the query set.

Encoding function

Parameterized neural network that transforms raw inputs into embedding vectors, optimized to minimize intra-class distances and maximize inter-class distances.

Distance-based cross-entropy loss

Objective function that minimizes the divergence between the predicted distribution based on distances to prototypes and the true labels of query examples.
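For a single query example this loss reduces to the negative log-probability of the true class under the softmax over negated distances; a sketch with illustrative distances:

```python
import numpy as np

# Distances from one query example to three class prototypes; true class is 0.
dists = np.array([1.0, 4.0, 9.0])
true_class = 0

# Cross-entropy of softmax(-distances) against the true label:
# loss = -log p(true_class) = d_true + log(sum_j exp(-d_j)).
log_probs = -dists - np.log(np.exp(-dists).sum())
loss = -log_probs[true_class]
print(loss)
```

Minimizing this loss pulls query embeddings toward the prototype of their true class and away from the others.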

Prototype initialization

Process of computing initial class representations before training, often performed by averaging randomly initialized embeddings or by pre-training on more abundant data.

Zero-shot transfer

Ability of Prototypical Networks to generalize to classes not seen during training, using only semantic descriptions or attributes to build their prototypes.

Online prototype updates

Dynamic adaptation of class representations during inference by gradually incorporating new examples to refine decision boundaries.

Weighted example aggregation

Improved variant of prototype computation using weights for each support example based on their quality or relevance to the class representation.
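This weighted variant replaces the plain mean with a weighted mean; the weights here are hypothetical relevance scores:

```python
import numpy as np

# Support embeddings and per-example weights (hypothetical relevance scores).
support = np.array([[0.0, 0.0], [2.0, 0.0], [4.0, 0.0]])
weights = np.array([1.0, 1.0, 2.0])

# Weighted prototype: weighted mean of the support embeddings.
weighted_prototype = (weights[:, None] * support).sum(axis=0) / weights.sum()
print(weighted_prototype)  # [2.5 0. ]
```

With uniform weights this reduces to the standard class prototype defined above.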

Metric latent space

Learned intermediate representation where the geometric structure preserves similarity relationships between classes, facilitating linear separation in the prototype space.

Prototype regularization

Technique that prevents overfitting by constraining prototypes to remain in specific regions of the embedding space or by penalizing their excessive dispersion.

Episode-based learning

Training strategy where each batch contains several independent few-shot episodes, allowing the model to learn to quickly adapt to new tasks.

Adaptive Mahalanobis Distance

Extension of Prototypical Networks using a learned distance metric that takes into account the covariance of the data in each class for better separation.

Dynamic Prototype

Variant where prototypes are not fixed but adapt based on the context or specific characteristics of each query example to be classified.
