AI Glossary
A complete dictionary of Artificial Intelligence
Encoder
Part of the autoencoder that transforms input data into a lower-dimensional representation, called the latent space or code, by learning a nonlinear compression function.
Decoder
Part of the autoencoder that takes the compressed representation from the latent space and attempts to reconstruct the original input data, thus learning a decompression function.
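The two halves can be illustrated together. Below is a minimal PyTorch sketch; the layer sizes (784 → 128 → 32) and the sigmoid output are illustrative assumptions, not fixed choices:

```python
import torch.nn as nn

class Autoencoder(nn.Module):
    def __init__(self, input_dim=784, latent_dim=32):
        super().__init__()
        # Encoder: compresses the input into the latent code.
        self.encoder = nn.Sequential(
            nn.Linear(input_dim, 128), nn.ReLU(),
            nn.Linear(128, latent_dim),
        )
        # Decoder: reconstructs the input from the latent code.
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 128), nn.ReLU(),
            nn.Linear(128, input_dim), nn.Sigmoid(),
        )

    def forward(self, x):
        z = self.encoder(x)     # latent representation (the "code")
        return self.decoder(z)  # reconstruction of x
```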
Latent Space
The lowest-dimensional representation layer in an autoencoder, which captures the most essential and compressed features of the input data.
Bottleneck
The narrowest layer in the autoencoder architecture, located between the encoder and decoder, which forces the network to learn a concise representation of the data.
Reconstruction Loss
Objective function, often mean squared error (MSE) or cross-entropy, that measures the difference between the original input data and their reconstruction by the decoder.
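For an input x and its reconstruction x̂ = decoder(encoder(x)), the two common variants can be written as follows (standard forms, not tied to any particular implementation):

```latex
\mathcal{L}_{\mathrm{MSE}}(x,\hat{x}) = \frac{1}{n}\sum_{i=1}^{n}\left(x_i-\hat{x}_i\right)^2
\qquad
\mathcal{L}_{\mathrm{BCE}}(x,\hat{x}) = -\sum_{i=1}^{n}\left[x_i\log\hat{x}_i + (1-x_i)\log(1-\hat{x}_i)\right]
```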
Symmetric Autoencoder
Autoencoder architecture where the structures of the encoder and decoder are mirror images of each other, with corresponding layer dimensions for compression and decompression.
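A sketch of such a mirrored layout, with assumed sizes 784 → 256 → 64 in the encoder reversed to 64 → 256 → 784 in the decoder:

```python
import torch.nn as nn

# Mirrored dimensions: each decoder layer undoes one encoder layer.
encoder = nn.Sequential(
    nn.Linear(784, 256), nn.ReLU(),
    nn.Linear(256, 64),  nn.ReLU(),
)
decoder = nn.Sequential(
    nn.Linear(64, 256),  nn.ReLU(),
    nn.Linear(256, 784), nn.Sigmoid(),
)
```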
Undercompleteness
Condition in which the dimension of an autoencoder's latent space is lower than that of the input data, forcing the network to learn the most relevant features rather than a trivial copy of the input.
Tied Weights
Technique where the decoder's weight matrices are the transpose of the encoder's weight matrices, reducing the number of parameters and promoting symmetric reconstruction.
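A minimal sketch of weight tying in PyTorch, assuming a single-layer encoder and decoder; the initialization scale and dimensions are illustrative:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TiedAutoencoder(nn.Module):
    """One weight matrix W: the decoder reuses its transpose instead of learning its own."""
    def __init__(self, input_dim=784, latent_dim=32):
        super().__init__()
        self.W = nn.Parameter(torch.randn(latent_dim, input_dim) * 0.01)
        self.b_enc = nn.Parameter(torch.zeros(latent_dim))
        self.b_dec = nn.Parameter(torch.zeros(input_dim))

    def forward(self, x):
        z = torch.relu(F.linear(x, self.W, self.b_enc))             # encode with W
        return torch.sigmoid(F.linear(z, self.W.t(), self.b_dec))  # decode with W^T (tied)
```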
Unit Counting
Process of determining the number of neurons in each layer of the autoencoder, where the number of units progressively decreases through the encoder to reach the bottleneck and typically increases again through the decoder.
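One simple way to generate such a schedule is to halve the width at each layer; this sketch (a hypothetical convention, not a rule) shows the idea:

```python
def unit_schedule(input_dim, bottleneck_dim):
    """Hypothetical halving schedule from input width down to the bottleneck."""
    sizes = [input_dim]
    while sizes[-1] // 2 > bottleneck_dim:
        sizes.append(sizes[-1] // 2)
    sizes.append(bottleneck_dim)
    return sizes

print(unit_schedule(784, 32))  # [784, 392, 196, 98, 49, 32]
```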
Nonlinear Dimensionality Reduction
Main application of autoencoders, which learn complex data manifolds to project high-dimensional data into a lower-dimensional space, beyond linear methods like PCA.
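In practice, the trained encoder is used on its own as the projection. A small sketch with a hypothetical (untrained) encoder standing in for a trained one:

```python
import torch
import torch.nn as nn

# Hypothetical trained encoder (real weights would come from training): used
# alone, it maps 784-dimensional points to 2-D coordinates, playing the role
# of PCA's transform step but able to follow curved manifolds.
encoder = nn.Sequential(nn.Linear(784, 128), nn.ReLU(), nn.Linear(128, 2))
with torch.no_grad():
    embedding = encoder(torch.rand(100, 784))
print(embedding.shape)  # torch.Size([100, 2])
```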
Representation Learning
Ability of autoencoders to automatically discover abstract features and inherent structures in data without supervised labeling.
Single-Layer Autoencoder
Simplest form of autoencoder, with a single hidden layer serving as the bottleneck; with linear activations it recovers the same subspace as principal component analysis (PCA), and a nonlinear activation turns it into a nonlinear generalization of PCA.
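A minimal sketch, with assumed sizes 784 → 32 → 784:

```python
import torch.nn as nn

# One hidden layer is both the bottleneck and the entire encoder; replacing
# Tanh with the identity would recover the PCA subspace under MSE training.
single_layer_ae = nn.Sequential(
    nn.Linear(784, 32), nn.Tanh(),  # encode to the 32-unit bottleneck
    nn.Linear(32, 784),             # decode back to the input dimension
)
```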
Deep Autoencoder
Autoencoder architecture with multiple hidden layers in both the encoder and decoder, allowing learning of complex feature hierarchies for better compression.
Reconstruction Noise
Artifacts or errors introduced in the reconstructed data by an autoencoder, which can be analyzed to understand the limitations of the representation learned by the model.
Encoder Activation Function
Nonlinear function (such as ReLU, sigmoid, or tanh) applied in the encoder layers to enable learning of complex transformations of input data.
Decoder Activation Function
Function applied in the decoder layers, often chosen to match the distribution of input data (for example, sigmoid for data normalized between 0 and 1).
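A short sketch of the pairing mentioned above, with assumed dimensions:

```python
import torch.nn as nn

# Assumed pairing for inputs normalized to [0, 1]: a sigmoid output layer
# trained with binary cross-entropy. For unbounded real-valued inputs, an
# identity (linear) output with MSE is the usual counterpart.
output_layer = nn.Sequential(nn.Linear(128, 784), nn.Sigmoid())
loss_fn = nn.BCELoss()
```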
Overcompleteness
Condition where the dimension of the latent space is greater than that of the input data, requiring additional constraints, such as sparsity regularization, to keep the network from simply learning the identity function.
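One such constraint is an L1 penalty on the latent code, as in a sparse autoencoder. A minimal sketch, where the penalty weight lam is an illustrative assumption:

```python
import torch

def sparse_reconstruction_loss(x, x_hat, z, lam=1e-3):
    """MSE plus an L1 penalty on the latent code z; the penalty keeps an
    overcomplete autoencoder from copying its input through the wide code."""
    recon = torch.mean((x - x_hat) ** 2)
    return recon + lam * torch.mean(torch.abs(z))
```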
Quantization Error
Part of the reconstruction error in an autoencoder that results from compressing continuous information into a finite-dimensional latent space.