AI Glossary
A complete glossary of artificial intelligence
Autoencoder
Unsupervised neural network learning to compress input data into a lower-dimensional latent space and then reconstruct the original data from this compressed representation.
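The compress-then-reconstruct structure can be sketched in a few lines of NumPy. This is an illustrative forward pass only, with made-up dimensions and randomly initialized weights; a real autoencoder would learn `W_enc` and `W_dec` by minimizing a reconstruction loss with gradient descent.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes: 8-dimensional inputs compressed to a 2-dimensional latent code.
d_in, d_latent = 8, 2

# Randomly initialized weights (placeholders for learned parameters).
W_enc = rng.normal(0, 0.1, (d_in, d_latent))
W_dec = rng.normal(0, 0.1, (d_latent, d_in))

def encode(x):
    # Compress the input into the lower-dimensional latent space.
    return np.tanh(x @ W_enc)

def decode(z):
    # Reconstruct the input from the compressed latent code.
    return z @ W_dec

x = rng.normal(size=(5, d_in))   # batch of 5 inputs
z = encode(x)                    # latent codes, shape (5, 2)
x_hat = decode(z)                # reconstructions, shape (5, 8)
```

The latent code `z` has far fewer dimensions than the input, which is exactly the compression the definition describes.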
Denoising autoencoder
Variant of autoencoder trained to reconstruct original data from a noise-corrupted version, thereby forcing the model to learn robust and invariant representations.
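The defining detail of a denoising autoencoder is the training pair: the network receives the corrupted input but its loss is computed against the clean original. A minimal sketch of that pairing (identity "model" for illustration, no actual network):

```python
import numpy as np

rng = np.random.default_rng(1)

x_clean = x_target = rng.normal(size=(4, 8))            # clean training batch
x_noisy = x_clean + rng.normal(0, 0.3, x_clean.shape)   # corrupted input fed to the network

def denoising_loss(reconstruction, target_clean):
    # Loss is measured against the CLEAN data, so the model must undo the corruption.
    return np.mean((reconstruction - target_clean) ** 2)

# With an identity "model" the loss is just the mean squared injected noise,
# roughly the noise variance sigma^2 = 0.09.
loss = denoising_loss(x_noisy, x_clean)
```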
Reconstruction loss function
Metric measuring the difference between the original input and its reconstruction, typically mean squared error or binary cross-entropy for binary images.
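Both losses named in the definition are straightforward to write down; a small NumPy sketch:

```python
import numpy as np

def mse(x, x_hat):
    # Mean squared error: average squared difference per element.
    return np.mean((x - x_hat) ** 2)

def bce(x, x_hat, eps=1e-7):
    # Binary cross-entropy for targets in [0, 1], e.g. binarized images.
    x_hat = np.clip(x_hat, eps, 1 - eps)  # avoid log(0)
    return -np.mean(x * np.log(x_hat) + (1 - x) * np.log(1 - x_hat))

x = np.array([0.0, 1.0, 1.0, 0.0])  # toy binary "image"
perfect = mse(x, x)                 # perfect reconstruction gives zero loss
```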
Bottleneck
Intermediate layer of minimal dimension in an autoencoder that forces information compression and extraction of the most relevant features from input data.
Variational autoencoder
Generative autoencoder that learns a probabilistic distribution in the latent space rather than a deterministic representation, enabling the generation of new data by sampling.
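The two ingredients that distinguish a VAE, the reparameterization trick for sampling and the KL penalty that shapes the latent distribution, can be sketched as follows (illustrative batch sizes; the standard-normal prior is the usual choice):

```python
import numpy as np

rng = np.random.default_rng(2)

def reparameterize(mu, logvar):
    # Sample z ~ N(mu, sigma^2) as z = mu + sigma * eps, which keeps the
    # sampling step differentiable with respect to mu and logvar.
    eps = rng.normal(size=mu.shape)
    return mu + np.exp(0.5 * logvar) * eps

def kl_to_standard_normal(mu, logvar):
    # Closed-form KL divergence between N(mu, sigma^2) and the N(0, 1) prior.
    return 0.5 * np.sum(np.exp(logvar) + mu**2 - 1.0 - logvar, axis=-1)

mu = np.zeros((3, 2))      # encoder outputs for a batch of 3 (illustrative)
logvar = np.zeros((3, 2))  # logvar = 0  ->  sigma = 1
z = reparameterize(mu, logvar)

# When the posterior already equals the prior, the KL penalty is exactly zero.
kl = kl_to_standard_normal(mu, logvar)
```

Generation then amounts to sampling `z` from the prior and passing it through the decoder.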
Gaussian noise
Random noise drawn from a Gaussian (normal) distribution, commonly added to input data before reconstruction to improve model robustness and generalization.
Sparse autoencoder
Autoencoder incorporating a sparsity constraint on hidden layer activations, encouraging the model to use only a subset of neurons to represent each input.
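One common form of the sparsity constraint is a KL-divergence penalty that pushes each hidden unit's average activation toward a small target `rho`. A sketch under that assumption (the target value 0.05 is illustrative):

```python
import numpy as np

def sparsity_penalty(activations, rho=0.05):
    # activations: hidden-layer activations in (0, 1), shape (batch, hidden).
    # rho_hat: average activation of each hidden unit over the batch.
    rho_hat = np.clip(activations.mean(axis=0), 1e-7, 1 - 1e-7)
    # KL(rho || rho_hat) summed over units: large when units fire too often.
    return np.sum(rho * np.log(rho / rho_hat)
                  + (1 - rho) * np.log((1 - rho) / (1 - rho_hat)))

dense = np.full((10, 4), 0.5)    # every unit half-active: heavily penalized
sparse = np.full((10, 4), 0.05)  # average activation matches rho: no penalty
```

Adding this term to the reconstruction loss encourages the model to represent each input with only a few active neurons.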
Contractive autoencoder
Autoencoder penalizing the sensitivity of the representation to small variations in the input, promoting the learning of invariant and stable features.
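The sensitivity being penalized is usually measured as the squared Frobenius norm of the encoder's Jacobian. A sketch that approximates it by finite differences, using a toy linear encoder so the exact answer is known:

```python
import numpy as np

def contractive_penalty(encode, x, h=1e-5):
    # Approximate ||dz/dx||_F^2 at x by perturbing each input coordinate;
    # the contractive loss adds lambda times this penalty to the reconstruction loss.
    z0 = encode(x)
    total = 0.0
    for i in range(x.size):
        x_pert = x.copy()
        x_pert[i] += h
        total += np.sum(((encode(x_pert) - z0) / h) ** 2)
    return total

W = np.array([[1.0, 0.0], [0.0, 2.0]])
encode = lambda x: W @ x           # linear encoder: the Jacobian is W itself
x = np.zeros(2)
penalty = contractive_penalty(encode, x)  # should approach ||W||_F^2 = 1 + 4 = 5
```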
Convolutional autoencoder
Autoencoder using convolutional layers to efficiently process structured data such as images, preserving local spatial relationships during compression and reconstruction.
Distributed representation
Information encoding where each concept is represented by the combined activation of multiple neurons, enabling a rich and semantic representation in the latent space.
Implicit denoising
Emergent property where the autoencoder automatically learns to denoise data even without explicit corruption, thanks to the compression constraint of the bottleneck.
Fine-tuning by reconstruction
Secondary training phase where a pre-trained autoencoder is refined on a specific task using reconstruction loss as the optimization signal.
Deep autoencoder
Autoencoder architecture with multiple hidden layers, enabling hierarchical extraction of increasingly abstract features from input data.
Overfitting in reconstruction
Phenomenon where the model memorizes training examples instead of learning generalizable representations, detectable by low reconstruction error on the training data combined with poor performance on new data.