AI Glossary

The complete dictionary of Artificial Intelligence

162 categories · 2,032 subcategories · 23,060 terms

Self-Supervised Learning

Learning paradigm where a model learns representations from unlabeled data by solving artificial supervision tasks derived from the data itself. This approach leverages vast amounts of data without expensive manual annotation.

Contrastive Learning

Self-supervised learning technique that learns representations by pulling positive (similar) samples closer to an anchor and pushing negative (dissimilar) samples apart in the embedding space. In practice, this often means maximizing agreement between different augmentations of the same sample.
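
A minimal sketch of the idea in PyTorch; the function name info_nce, the temperature, and the tensor sizes are illustrative assumptions, not taken from any particular library:

```python
import torch
import torch.nn.functional as F

def info_nce(anchor, positive, negatives, temperature=0.1):
    """Contrastive (InfoNCE-style) loss for a single anchor.

    anchor:    (d,)   embedding of the anchor sample
    positive:  (d,)   embedding of another view of the same sample
    negatives: (n, d) embeddings of dissimilar samples
    """
    # Normalize so the dot products below are cosine similarities.
    anchor = F.normalize(anchor, dim=0)
    positive = F.normalize(positive, dim=0)
    negatives = F.normalize(negatives, dim=1)

    pos_sim = anchor @ positive          # scalar: anchor vs. positive
    neg_sims = negatives @ anchor        # (n,):   anchor vs. each negative

    # The loss is low when the anchor is far closer to its positive than
    # to any negative: pull positives together, push negatives apart.
    logits = torch.cat([pos_sim.unsqueeze(0), neg_sims]) / temperature
    return -F.log_softmax(logits, dim=0)[0]

loss = info_nce(torch.randn(128), torch.randn(128), torch.randn(16, 128))
```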

Pretext Task

Artificial task designed for self-supervised learning that forces the model to learn useful features from unlabeled data. These tasks serve as a pretext for training the model before it is transferred to downstream tasks.
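
Rotation prediction is a classic example: rotate each unlabeled image by 0°, 90°, 180°, or 270° and train the model to recover the rotation, obtaining labels for free. A hedged sketch, where the tiny CNN and tensor sizes are placeholders:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Placeholder encoder; any image backbone could stand here.
encoder = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
)
head = nn.Linear(16, 4)  # 4 classes: 0°, 90°, 180°, 270°

images = torch.randn(8, 3, 32, 32)     # an unlabeled batch
rotations = torch.randint(0, 4, (8,))  # "labels" created for free
rotated = torch.stack([torch.rot90(img, int(k), dims=(1, 2))
                       for img, k in zip(images, rotations)])

# Solving the pretext task teaches the encoder features (edges,
# object parts, orientation cues) that transfer to downstream tasks.
loss = F.cross_entropy(head(encoder(rotated)), rotations)
```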

Momentum Contrast (MoCo)

Contrastive learning framework that maintains a queue of negative samples and uses a momentum encoder to keep representations consistent over time. This approach allows the use of a large number of negatives without requiring a large batch size.
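
A sketch of the two MoCo ingredients, the momentum update and the queue-based loss; the momentum coefficient and temperature are illustrative values:

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def momentum_update(query_encoder, key_encoder, m=0.999):
    # The key encoder is a slow exponential moving average of the query
    # encoder, which keeps the keys stored in the queue consistent.
    for q, k in zip(query_encoder.parameters(), key_encoder.parameters()):
        k.data = m * k.data + (1.0 - m) * q.data

def moco_loss(q, k, queue, temperature=0.07):
    """q, k: (batch, d) query/key embeddings; queue: (K, d) of past keys."""
    q = F.normalize(q, dim=1)
    k = F.normalize(k, dim=1)
    queue = F.normalize(queue, dim=1)
    l_pos = (q * k).sum(dim=1, keepdim=True)  # (batch, 1) positive logits
    l_neg = q @ queue.t()                     # (batch, K) negatives from queue
    logits = torch.cat([l_pos, l_neg], dim=1) / temperature
    labels = torch.zeros(q.size(0), dtype=torch.long)  # positive at index 0
    return F.cross_entropy(logits, labels)
```

After each step the newest keys are enqueued and the oldest dequeued, so the pool of negatives can be far larger than the batch.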

SimCLR

Simple contrastive learning framework that maximizes agreement between different augmented views of the same sample after they pass through an encoder and projection head. This approach demonstrates that the choice of data augmentations and the batch size are crucial for performance.
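
A condensed sketch of the NT-Xent loss at the heart of SimCLR, for a batch in which rows i and i + N are two augmentations of the same image; the temperature is an illustrative value:

```python
import torch
import torch.nn.functional as F

def nt_xent(z, temperature=0.5):
    """z: (2N, d) projections, where rows i and i + N are two
    augmented views of the same image."""
    n = z.size(0) // 2
    z = F.normalize(z, dim=1)
    sim = z @ z.t() / temperature        # (2N, 2N) pairwise similarities
    sim.fill_diagonal_(float('-inf'))    # a sample never matches itself
    # Row i's positive sits at index i + N (mod 2N); every other
    # element of the batch acts as a negative.
    targets = torch.arange(2 * n).roll(n)
    return F.cross_entropy(sim, targets)
```

Because all other batch elements serve as negatives, the loss gets more informative as the batch grows, which is one reason SimCLR benefits from large batches.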

BYOL

Self-supervised learning method that uses no negative samples, relying instead on two networks (online and target) in an asymmetric architecture with a predictor on the online side. BYOL avoids collapsing to a trivial solution through a stop-gradient on the target branch and momentum updates of the target weights.
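
A sketch of the BYOL update for one pair of views; the linear layers stand in for the real encoder, projector, and predictor, and the momentum value is illustrative:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

dim = 128
online = nn.Linear(512, dim)     # stand-in for encoder + projector
target = nn.Linear(512, dim)     # same architecture, updated only by EMA
target.load_state_dict(online.state_dict())
predictor = nn.Linear(dim, dim)  # the asymmetry: only the online side has it

def byol_loss(view1, view2):
    p = F.normalize(predictor(online(view1)), dim=1)
    with torch.no_grad():        # stop-gradient: the target branch gives no grads
        z = F.normalize(target(view2), dim=1)
    # For unit vectors, ||p - z||^2 = 2 - 2 * cosine similarity.
    return (2 - 2 * (p * z).sum(dim=1)).mean()

@torch.no_grad()
def ema_update(m=0.996):
    # Momentum update: target weights drift slowly toward the online weights.
    for o, t in zip(online.parameters(), target.parameters()):
        t.data = m * t.data + (1.0 - m) * o.data
```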

Feature Representation

Vector encoding of raw data in a latent space where semantic relationships are preserved and can be exploited by downstream tasks. Representations learned through self-supervision capture generic, transferable features.

Unlabeled Data

Raw data without manual annotations, which is abundant and inexpensive to collect compared to labeled data. Self-supervised learning exploits such data effectively to pre-train high-performing models.

Embedding Space

Low-dimensional vector space into which data points are projected so that their semantic and structural relationships are captured. In self-supervised learning, the goal is to learn a discriminative embedding space.

Negative Sampling

Technique of selecting examples that should lie far from the anchor in the embedding space during contrastive learning. The strategic choice of negatives directly influences the quality of the learned representations.
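
One common strategy is hard-negative mining: for each anchor, keep the non-matching candidates the model currently finds most similar. A sketch; the function name and the value of k are illustrative:

```python
import torch
import torch.nn.functional as F

def hardest_negatives(anchors, candidates, k=5):
    """Return, for each anchor, the indices of the k most similar candidates.

    anchors:    (n, d) anchor embeddings
    candidates: (m, d) embeddings known NOT to match any anchor
    """
    a = F.normalize(anchors, dim=1)
    c = F.normalize(candidates, dim=1)
    sim = a @ c.t()                    # (n, m) cosine similarities
    # "Hard" negatives are the wrong matches the model currently
    # confuses most with the anchor; they give the strongest gradients.
    return sim.topk(k, dim=1).indices  # (n, k)
```

Random negatives are cheaper but less informative, while overly hard ones risk being false negatives, which is why the choice of strategy matters.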

Projection Head

Additional neural network applied after the main encoder to map representations to the space where the contrastive loss is calculated. This head is typically removed when transferring to downstream tasks.
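
In SimCLR-style pipelines the head is typically a small MLP; a sketch with illustrative sizes:

```python
import torch.nn as nn

encoder = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 512), nn.ReLU())
projection_head = nn.Sequential(          # used only during pre-training
    nn.Linear(512, 512), nn.ReLU(), nn.Linear(512, 128),
)
pretrain_model = nn.Sequential(encoder, projection_head)

# After pre-training, the head is discarded and the encoder's 512-d
# features feed the downstream task, e.g. a 10-class linear probe:
downstream_model = nn.Sequential(encoder, nn.Linear(512, 10))
```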

Encoder Architecture

Structure of the neural network responsible for transforming raw data into meaningful vector representations. The choice of architecture (ResNet, Transformer, etc.) influences the model's abstraction capabilities.

DINO

Self-supervised method based on knowledge distillation between two networks without using negative samples. DINO produces representations that naturally capture image semantics and lend themselves well to clustering.
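
A sketch of the DINO objective: the student is trained to match the teacher's centered, sharpened output distribution under a stop-gradient; the temperatures follow the general recipe, but the exact values here are illustrative and multi-crop augmentation is omitted:

```python
import torch
import torch.nn.functional as F

def dino_loss(student_out, teacher_out, center, t_student=0.1, t_teacher=0.04):
    """student_out, teacher_out: (batch, K) logits; center: (K,) running mean."""
    with torch.no_grad():
        # Centering plus a low teacher temperature sharpens the target
        # distribution and prevents collapse to a trivial solution.
        teacher_probs = F.softmax((teacher_out - center) / t_teacher, dim=1)
    log_student = F.log_softmax(student_out / t_student, dim=1)
    # Cross-entropy between teacher and student distributions.
    return -(teacher_probs * log_student).sum(dim=1).mean()

# As in MoCo/BYOL, the teacher is a momentum (EMA) copy of the student,
# and `center` is an EMA of past teacher outputs.
```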
