
AI Glossary

The complete dictionary of Artificial Intelligence

162 categories · 2,032 subcategories · 23,060 terms

Autoencoder

Unsupervised neural network that learns to compress input data into a lower-dimensional latent space and then reconstruct the original data from this compressed representation.

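A minimal sketch of the idea, in PyTorch (an assumption; the glossary names no framework, and all dimensions are illustrative):

    import torch
    import torch.nn as nn

    class Autoencoder(nn.Module):
        def __init__(self, input_dim=784, latent_dim=32):
            super().__init__()
            # Encoder: compress the input into the lower-dimensional latent space.
            self.encoder = nn.Sequential(
                nn.Linear(input_dim, 128), nn.ReLU(),
                nn.Linear(128, latent_dim),
            )
            # Decoder: reconstruct the input from the latent code.
            self.decoder = nn.Sequential(
                nn.Linear(latent_dim, 128), nn.ReLU(),
                nn.Linear(128, input_dim), nn.Sigmoid(),
            )

        def forward(self, x):
            z = self.encoder(x)     # latent representation
            return self.decoder(z)  # reconstruction of x

    x = torch.rand(16, 784)         # batch of flattened 28x28 images
    x_hat = Autoencoder()(x)        # same shape as x: torch.Size([16, 784])

Training minimizes a reconstruction loss between x and x_hat (see "Reconstruction loss function" below).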

Denoising autoencoder

Autoencoder variant trained to reconstruct the original data from a noise-corrupted version, thereby forcing the model to learn robust, invariant representations.

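A sketch of one denoising training step, assuming the Autoencoder sketch above and a standard torch.optim optimizer:

    import torch
    import torch.nn.functional as F

    def denoising_step(model, x, optimizer, noise_std=0.3):
        x_noisy = x + noise_std * torch.randn_like(x)  # corrupt the input
        x_hat = model(x_noisy)                         # reconstruct from the corrupted version
        loss = F.mse_loss(x_hat, x)                    # but the target is the CLEAN input
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        return loss.item()

The key point is that the loss compares the reconstruction against the uncorrupted input, so the model cannot simply copy what it sees.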

Reconstruction loss function

Metric measuring the difference between the original input and its reconstruction, typically mean squared error for continuous data or binary cross-entropy for binary images.

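Both variants in PyTorch (assumed framework; the tensors stand in for real data and model output):

    import torch
    import torch.nn.functional as F

    x = torch.rand(16, 784)                      # original input, values in [0, 1]
    x_hat = torch.sigmoid(torch.randn(16, 784))  # stand-in for a reconstruction

    mse = F.mse_loss(x_hat, x)              # mean squared error, for continuous data
    bce = F.binary_cross_entropy(x_hat, x)  # binary cross-entropy, for binary images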

Bottleneck

Intermediate layer of minimal dimension in an autoencoder that forces information compression and extraction of the most relevant features from input data.

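Assuming the Autoencoder sketch above, the bottleneck is simply the encoder output, whose width is far smaller than the input's:

    import torch

    model = Autoencoder(input_dim=784, latent_dim=32)  # illustrative sizes
    z = model.encoder(torch.rand(16, 784))
    print(z.shape)  # torch.Size([16, 32]): 784 values forced through 32 units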

Variational autoencoder

Generative autoencoder that learns a probabilistic distribution in the latent space rather than a deterministic representation, enabling the generation of new data by sampling.

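A compact sketch (PyTorch assumed; layer sizes illustrative) showing the two defining pieces: the reparameterization trick and the KL term in the loss:

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class VAE(nn.Module):
        def __init__(self, input_dim=784, latent_dim=32):
            super().__init__()
            self.enc = nn.Linear(input_dim, 256)
            self.mu = nn.Linear(256, latent_dim)      # mean of q(z|x)
            self.logvar = nn.Linear(256, latent_dim)  # log-variance of q(z|x)
            self.dec = nn.Sequential(
                nn.Linear(latent_dim, 256), nn.ReLU(),
                nn.Linear(256, input_dim), nn.Sigmoid(),
            )

        def forward(self, x):
            h = F.relu(self.enc(x))
            mu, logvar = self.mu(h), self.logvar(h)
            # Reparameterization: sample z while keeping the graph differentiable.
            z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)
            return self.dec(z), mu, logvar

    def vae_loss(x_hat, x, mu, logvar):
        recon = F.binary_cross_entropy(x_hat, x, reduction='sum')
        # KL divergence between q(z|x) and the standard normal prior N(0, I).
        kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
        return recon + kl

New data is generated by decoding a sample from the prior, e.g. model.dec(torch.randn(1, 32)).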

Gaussian noise

Random noise drawn from a Gaussian (normal) distribution and added to input data before reconstruction; a common technique for improving model robustness and generalization.

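In code (PyTorch assumed), this is one line; the standard deviation controls the corruption strength:

    import torch

    x = torch.rand(16, 784)
    x_noisy = x + 0.3 * torch.randn_like(x)  # zero-mean Gaussian noise, std 0.3
    x_noisy = x_noisy.clamp(0.0, 1.0)        # keep pixel values in valid range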

Sparse autoencoder

Autoencoder incorporating a sparsity constraint on hidden layer activations, encouraging the model to use only a subset of neurons to represent each input.

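One common way to impose the constraint is an L1 penalty on the hidden activations (a KL-divergence penalty toward a target activation rate is another); a sketch assuming the Autoencoder above:

    import torch.nn.functional as F

    def sparse_loss(model, x, l1_weight=1e-3):
        z = model.encoder(x)                    # hidden activations
        recon = F.mse_loss(model.decoder(z), x)
        sparsity = z.abs().mean()               # L1 penalty drives activations to 0
        return recon + l1_weight * sparsity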

Contractive autoencoder

Autoencoder penalizing the sensitivity of the representation to small variations in the input, promoting the learning of invariant and stable features.

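The penalty is the Frobenius norm of the encoder's Jacobian with respect to the input; for a single sigmoid layer it has a closed form, sketched below (PyTorch assumed, sizes illustrative):

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class ContractiveAE(nn.Module):
        def __init__(self, input_dim=784, hidden_dim=64):
            super().__init__()
            self.enc = nn.Linear(input_dim, hidden_dim)
            self.dec = nn.Linear(hidden_dim, input_dim)

        def forward(self, x):
            h = torch.sigmoid(self.enc(x))
            return torch.sigmoid(self.dec(h)), h

    def contractive_loss(model, x, x_hat, h, lam=1e-4):
        recon = F.mse_loss(x_hat, x)
        # For h = sigmoid(Wx + b): dh_j/dx_i = h_j * (1 - h_j) * W_ji,
        # so ||J||_F^2 = sum_j (h_j * (1 - h_j))^2 * sum_i W_ji^2.
        W = model.enc.weight            # shape (hidden, input)
        dh2 = (h * (1 - h)) ** 2        # shape (batch, hidden)
        w2 = (W ** 2).sum(dim=1)        # shape (hidden,)
        penalty = (dh2 * w2).sum(dim=1).mean()
        return recon + lam * penalty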

Convolutional autoencoder

Autoencoder using convolutional layers to efficiently process structured data such as images, preserving local spatial relationships during compression and reconstruction.

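A sketch for 28x28 grayscale images (sizes assumed), downsampling with strided convolutions and upsampling with transposed ones:

    import torch
    import torch.nn as nn

    class ConvAutoencoder(nn.Module):
        def __init__(self):
            super().__init__()
            self.encoder = nn.Sequential(
                nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),   # 28x28 -> 14x14
                nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),  # 14x14 -> 7x7
            )
            self.decoder = nn.Sequential(
                nn.ConvTranspose2d(32, 16, 3, stride=2, padding=1,
                                   output_padding=1), nn.ReLU(),        # 7x7 -> 14x14
                nn.ConvTranspose2d(16, 1, 3, stride=2, padding=1,
                                   output_padding=1), nn.Sigmoid(),     # 14x14 -> 28x28
            )

        def forward(self, x):
            return self.decoder(self.encoder(x))

    x = torch.rand(8, 1, 28, 28)
    print(ConvAutoencoder()(x).shape)  # torch.Size([8, 1, 28, 28])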

Distributed representation

Information encoding where each concept is represented by the combined activation of multiple neurons, enabling a rich and semantic representation in the latent space.

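A contrast in miniature (values illustrative): a local one-hot code uses a single active unit per concept, while a distributed code spreads the concept over the whole vector:

    import torch

    local = torch.zeros(10)  # one-hot code: the concept IS position 3
    local[3] = 1.0

    # Distributed code: the concept is the joint pattern of many activations,
    # as in the latent vector an encoder produces.
    distributed = torch.tensor([0.8, -0.2, 0.5, 0.1, -0.9, 0.3, 0.0, 0.7])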

Implicit denoising

Emergent property where the autoencoder automatically learns to denoise data even without explicit corruption, thanks to the compression constraint of the bottleneck.


Fine-tuning by reconstruction

Secondary training phase where a pre-trained autoencoder is refined on a specific task using reconstruction loss as the optimization signal.

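A sketch of such a phase, assuming the Autoencoder above; the checkpoint path and task_loader are hypothetical:

    import torch
    import torch.nn.functional as F

    model = Autoencoder()
    model.load_state_dict(torch.load("pretrained_ae.pt"))      # hypothetical checkpoint
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)  # small LR for refinement

    for x in task_loader:                # task-specific data loader (assumed to exist)
        loss = F.mse_loss(model(x), x)   # reconstruction loss as the optimization signal
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()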

Deep autoencoder

Autoencoder architecture with multiple hidden layers, enabling hierarchical extraction of increasingly abstract features from input data.

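An illustrative stack (PyTorch assumed): each encoder layer compresses further, so deeper layers can hold increasingly abstract features:

    import torch.nn as nn

    deep_encoder = nn.Sequential(
        nn.Linear(784, 256), nn.ReLU(),
        nn.Linear(256, 64), nn.ReLU(),
        nn.Linear(64, 16),               # deepest, most abstract code
    )
    deep_decoder = nn.Sequential(
        nn.Linear(16, 64), nn.ReLU(),
        nn.Linear(64, 256), nn.ReLU(),
        nn.Linear(256, 784), nn.Sigmoid(),
    )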

Overfitting in reconstruction

Phenomenon where the model memorizes training examples instead of learning generalizable representations, detectable by a low reconstruction error on training data but poor performance on new data.

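A sketch of the usual check (PyTorch assumed; model, train_loader and val_loader are assumed to exist): compare reconstruction error on training data against held-out data.

    import torch
    import torch.nn.functional as F

    @torch.no_grad()
    def reconstruction_error(model, loader):
        model.eval()
        total, count = 0.0, 0
        for x in loader:
            total += F.mse_loss(model(x), x, reduction='sum').item()
            count += x.numel()
        return total / count

    # A large gap suggests memorization rather than generalization.
    print("train:", reconstruction_error(model, train_loader))
    print("val:  ", reconstruction_error(model, val_loader))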