AI Glossary
The complete dictionary of Artificial Intelligence
Variational Autoencoder (VAE)
Generative neural network architecture that learns a probabilistic latent representation of input data, enabling the generation of new samples by sampling from this latent space.
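The encode-sample-decode pipeline can be sketched with a toy numpy forward pass; the dimensions, random weights, and linear maps here are illustrative placeholders, not a trained model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions (illustrative): 8-dim input, 2-dim latent space.
x_dim, z_dim = 8, 2
W_enc = rng.normal(scale=0.1, size=(x_dim, 2 * z_dim))  # encoder weights
W_dec = rng.normal(scale=0.1, size=(z_dim, x_dim))      # decoder weights

def encode(x):
    """Map the input to the mean and log-variance of q(z|x)."""
    h = x @ W_enc
    return h[:z_dim], h[z_dim:]

def reparameterize(mu, logvar):
    """Sample z = mu + sigma * eps (the reparameterization trick)."""
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * logvar) * eps

def decode(z):
    """Map a latent sample back to input space."""
    return z @ W_dec

x = rng.standard_normal(x_dim)
mu, logvar = encode(x)
z = reparameterize(mu, logvar)
x_hat = decode(z)
```

Sampling z from the encoder's distribution, rather than using a fixed code, is what lets the model later generate new data by drawing z directly from the prior.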
Evidence Lower Bound (ELBO)
Objective maximized when training VAEs: a lower bound on the marginal log-likelihood of the data, balancing reconstruction accuracy against regularization of the latent space.
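For a Gaussian likelihood and a diagonal-Gaussian posterior, both ELBO terms have simple closed forms. A minimal sketch (unit-variance reconstruction term, constants dropped; function names are ours):

```python
import numpy as np

def gaussian_kl(mu, logvar):
    # Closed-form KL( N(mu, sigma^2) || N(0, I) ), summed over latent dims
    return 0.5 * np.sum(np.exp(logvar) + mu**2 - 1.0 - logvar)

def elbo(x, x_hat, mu, logvar):
    # Gaussian reconstruction log-likelihood (up to a constant)
    # minus the KL regularizer toward the prior
    recon = -0.5 * np.sum((x - x_hat) ** 2)
    return recon - gaussian_kl(mu, logvar)
```

When the posterior equals the prior (mu = 0, logvar = 0) the KL term vanishes, so a perfect reconstruction yields an ELBO of zero under this simplified parameterization.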
Approximate Posterior (q(z|x))
Distribution parameterized by the encoder that approximates the true posterior distribution of latent variables conditioned on input data in the VAE framework.
Posterior Collapse
Failure mode in VAEs where the approximate posterior collapses to the prior, so the latent code carries no information about the input and the decoder learns to ignore it.
Prior Distribution (p(z))
Probability distribution chosen for latent variables in VAEs, typically a standard Gaussian N(0, I), serving as a regularizer for the latent space.
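Because the prior is a standard Gaussian, generation reduces to drawing latents from N(0, I) and decoding them. A toy sketch with a hypothetical (random, untrained) linear decoder:

```python
import numpy as np

rng = np.random.default_rng(1)
z_dim, x_dim = 2, 8
W_dec = rng.normal(scale=0.1, size=(z_dim, x_dim))  # stand-in for a trained decoder

# Generation: draw a batch of latents from the prior p(z) = N(0, I)
z = rng.standard_normal((4, z_dim))
samples = z @ W_dec  # decode each latent into a new sample
```

The KL term in the ELBO is what keeps the encoder's outputs close to this prior, so that latents drawn from N(0, I) at generation time fall in a region the decoder has learned to handle.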
Deconvolutional Autoencoder
Variant of VAE using deconvolutional (transposed convolution) layers in the decoder to generate structured data such as images, better preserving spatial relationships.
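Transposed convolutions upsample feature maps, and their output size follows the standard formula used by common deep learning frameworks:

```python
def deconv_output_size(in_size, kernel, stride=1, padding=0, output_padding=0):
    """Spatial output size of a transposed ('deconvolutional') layer."""
    return (in_size - 1) * stride - 2 * padding + kernel + output_padding

# Upsampling a 7x7 feature map to 14x14 with kernel 4, stride 2, padding 1,
# a common configuration for doubling spatial resolution in decoders
size = deconv_output_size(7, kernel=4, stride=2, padding=1)  # 14
```

Stacking a few such layers lets the decoder expand a small latent-derived feature map back to full image resolution.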
Factor Disentanglement
Desired property where each dimension of a VAE's latent space captures a semantically independent factor of variation in the data.
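One well-known way to encourage disentanglement is the beta-VAE objective, which upweights the KL term; a minimal numpy sketch (simplified Gaussian terms, function name ours):

```python
import numpy as np

def beta_vae_loss(x, x_hat, mu, logvar, beta=4.0):
    """beta-VAE loss: reconstruction error plus a beta-weighted KL term.
    beta > 1 pressures the latent dimensions toward independence."""
    recon = 0.5 * np.sum((x - x_hat) ** 2)
    kl = 0.5 * np.sum(np.exp(logvar) + mu**2 - 1.0 - logvar)
    return recon + beta * kl
```

The trade-off is that larger beta values tend to improve disentanglement at the cost of reconstruction quality.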
Hierarchy of Latent Variables
Advanced VAE architecture using multiple levels of latent variables to capture features at different scales of abstraction in the data.
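The generative direction of such a hierarchy can be sketched as ancestral sampling through the levels; dimensions, noise scale, and random weights below are illustrative placeholders:

```python
import numpy as np

rng = np.random.default_rng(2)

# Two-level toy hierarchy: a coarse latent z2 conditions a finer latent z1.
z2_dim, z1_dim, x_dim = 2, 4, 8
W_21 = rng.normal(scale=0.1, size=(z2_dim, z1_dim))   # mean of p(z1 | z2)
W_dec = rng.normal(scale=0.1, size=(z1_dim, x_dim))   # decoder

z2 = rng.standard_normal(z2_dim)                      # top level: abstract factors
z1 = z2 @ W_21 + 0.1 * rng.standard_normal(z1_dim)    # lower level: finer detail
x = z1 @ W_dec                                        # decoded sample
```

Higher levels capture coarse, abstract structure while lower levels fill in fine detail, which is why hierarchical VAEs scale better to complex data than a single flat latent vector.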
Normalizing Flows in VAEs
Integration of normalizing flow transformations to increase the flexibility of the prior distribution or the approximate posterior, enhancing the generative quality of VAEs.
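The key mechanic of any flow step is an invertible transformation plus the log-determinant of its Jacobian, which corrects the density via the change-of-variables formula. A minimal elementwise affine flow (the simplest possible case, for illustration):

```python
import numpy as np

def affine_flow(z, log_scale, shift):
    """One elementwise affine flow step: z' = exp(log_scale) * z + shift.
    Returns the transformed sample and the log-determinant of the Jacobian,
    needed to evaluate the density of z' under change of variables."""
    z_new = np.exp(log_scale) * z + shift
    log_det = np.sum(log_scale)  # Jacobian is diagonal with entries exp(log_scale)
    return z_new, log_det

z = np.array([0.5, -1.0])
z_new, log_det = affine_flow(z, log_scale=np.array([0.1, -0.2]), shift=np.zeros(2))
```

Richer flows (planar, coupling-based, autoregressive) follow the same pattern but use more expressive invertible maps, which is what makes the transformed posterior or prior more flexible than a plain diagonal Gaussian.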