AI Glossary
The complete dictionary of Artificial Intelligence
GAN (Generative Adversarial Network)
Unsupervised learning architecture composed of two neural networks that compete with each other: a generator that produces synthetic data from random noise, and a discriminator that tries to tell it apart from real data.
Discriminator
Neural network in a GAN trained to distinguish real data from artificially generated data, serving as a binary classifier in the adversarial training process.
Generator
Neural network in a GAN that transforms a latent noise vector into synthetic data, progressively learning to create increasingly realistic samples to fool the discriminator.
VAE (Variational Autoencoder)
Generative architecture based on variational inference that learns a probabilistic distribution in the latent space to generate new data while allowing continuous interpolation.
Variational Encoder
Part of a VAE that maps input data to the parameters (mean and variance) of a Gaussian distribution in the latent space, enabling stochastic sampling during generation.
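The stochastic sampling mentioned above is usually implemented with the reparameterization trick, so that the randomness is separated from the encoder's parameters. A minimal numpy sketch (the function name is illustrative):

```python
import numpy as np

def reparameterize(mu, log_var, rng):
    """Sample z = mu + sigma * eps with eps ~ N(0, I), keeping the
    random draw outside the parameters so gradients can flow through
    mu and log_var during training."""
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * log_var) * eps

rng = np.random.default_rng(0)
z = reparameterize(np.zeros(8), np.zeros(8), rng)
print(z.shape)  # (8,)
```

With `mu = 0` and `log_var = 0` this samples directly from the standard normal prior.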
Variational Decoder
Component of a VAE that reconstructs original data from samples of the latent space, learning to map latent points to realistic generations.
KL Divergence (Kullback-Leibler)
Measure of dissimilarity between two probability distributions used as a regularization term in VAEs to constrain the latent space to follow a standard Gaussian distribution.
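For a diagonal Gaussian posterior and a standard normal prior, the KL term used in VAEs has a closed form. A small numpy sketch (the function name is illustrative):

```python
import numpy as np

def kl_to_standard_normal(mu, log_var):
    """Closed-form KL(N(mu, sigma^2) || N(0, I)) for a diagonal Gaussian,
    summed over latent dimensions: 0.5 * sum(sigma^2 + mu^2 - 1 - log sigma^2).
    This is the regularization term added to the VAE reconstruction loss."""
    return 0.5 * np.sum(np.exp(log_var) + mu**2 - 1.0 - log_var)

# A posterior that already matches the prior has zero divergence.
print(kl_to_standard_normal(np.zeros(4), np.zeros(4)))  # 0.0
```

Any deviation of the mean or variance from the prior increases the penalty, which is what constrains the latent space.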
Mode Collapse
Phenomenon in GANs where the generator produces only a limited number of distinct output types, ignoring the diversity of the training dataset and artificially minimizing the adversarial loss.
Latent Space
Reduced-dimensional vector space where data are represented in a compact form, allowing interpolation, arithmetic, and sampling operations for generating new data.
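The interpolation operation mentioned above can be sketched in a few lines of numpy: walking linearly between two latent vectors and decoding each intermediate point yields a smooth morph between two outputs (function name illustrative):

```python
import numpy as np

def interpolate(z1, z2, steps=5):
    """Linear interpolation between two latent vectors, returning a
    (steps, dim) array of intermediate points to feed to a decoder."""
    ts = np.linspace(0.0, 1.0, steps)
    return np.stack([(1 - t) * z1 + t * z2 for t in ts])

path = interpolate(np.zeros(3), np.ones(3), steps=5)
print(path[2])  # midpoint: [0.5 0.5 0.5]
```

Spherical interpolation (slerp) is often preferred in practice for Gaussian latent spaces, but the linear version shows the idea.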
Pix2Pix
Conditional GAN architecture for image-to-image translation using paired image sets, applying adversarial loss combined with L1 loss to ensure structural consistency.
CycleGAN
GAN architecture capable of learning translation between domains without paired image sets, using cycle consistency loss to preserve characteristics of the original image.
StyleGAN
Advanced GAN architecture using a mapping network and adaptive style blocks to hierarchically control visual features at different spatial scales in image generation.
Deep Convolutional GAN (DCGAN)
Pioneering GAN architecture built exclusively from convolutional layers, with specific architectural constraints such as the absence of pooling layers and the use of batch normalization to stabilize training.
Wasserstein GAN (WGAN)
GAN variant using the Wasserstein distance as its training metric, offering better stability and reducing mode collapse thanks to more informative gradients.
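In one dimension the Wasserstein-1 distance between two empirical distributions with the same number of samples has a simple closed form: the mean absolute difference of the sorted samples. A numpy sketch (function name illustrative; WGAN itself estimates this distance with a critic network rather than computing it directly):

```python
import numpy as np

def wasserstein_1d(a, b):
    """Wasserstein-1 (earth mover's) distance between two 1-D empirical
    distributions with equal sample counts: sort both, then average the
    absolute pointwise differences."""
    return np.mean(np.abs(np.sort(a) - np.sort(b)))

real = np.array([0.0, 1.0, 2.0])
fake = np.array([1.0, 2.0, 3.0])
print(wasserstein_1d(real, fake))  # 1.0
```

Unlike the Jensen-Shannon divergence implicit in the original GAN loss, this distance stays finite and informative even when the two distributions do not overlap.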
Reconstruction Loss
Loss function in autoencoders measuring the difference between original input and reconstructed output, typically implemented as mean squared error or binary cross-entropy.
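Both variants named above fit in a few lines of numpy (function names are illustrative):

```python
import numpy as np

def mse_loss(x, x_hat):
    """Mean squared error between the input and its reconstruction."""
    return np.mean((x - x_hat) ** 2)

def bce_loss(x, x_hat, eps=1e-7):
    """Binary cross-entropy, for inputs scaled to [0, 1]; clipping
    avoids log(0) for saturated outputs."""
    x_hat = np.clip(x_hat, eps, 1 - eps)
    return -np.mean(x * np.log(x_hat) + (1 - x) * np.log(1 - x_hat))

x = np.array([0.0, 1.0, 1.0, 0.0])
print(mse_loss(x, x))  # 0.0 for a perfect reconstruction
```

MSE is the natural choice for real-valued data; BCE suits binarized or [0, 1]-scaled pixels.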
Adversarial Loss
Loss function based on the zero-sum game between generator and discriminator, forcing the generator to minimize the discriminator's ability to distinguish real data from generated data.
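The two sides of this game can be written directly from the discriminator's output probabilities. A numpy sketch using the common non-saturating generator loss (function names are illustrative):

```python
import numpy as np

def discriminator_loss(d_real, d_fake, eps=1e-7):
    """Binary cross-entropy pushing D(real) toward 1 and D(fake) toward 0."""
    d_real = np.clip(d_real, eps, 1 - eps)
    d_fake = np.clip(d_fake, eps, 1 - eps)
    return -np.mean(np.log(d_real)) - np.mean(np.log(1 - d_fake))

def generator_loss(d_fake, eps=1e-7):
    """Non-saturating generator loss: maximize log D(fake),
    i.e. push the discriminator's output on fakes toward 1."""
    d_fake = np.clip(d_fake, eps, 1 - eps)
    return -np.mean(np.log(d_fake))

# A completely fooled discriminator gives the generator near-zero loss.
print(generator_loss(np.array([1.0, 1.0])))
```

The non-saturating form is preferred over the original minimax generator loss because it gives stronger gradients early in training, when the discriminator easily rejects generated samples.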
Feature Matching
Regularization technique in GANs where the generator minimizes the distance between features extracted by the discriminator for real and generated data, improving training stability.
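The objective above reduces to a distance between batch statistics of discriminator features. A numpy sketch, with hypothetical feature arrays standing in for the discriminator's intermediate activations:

```python
import numpy as np

def feature_matching_loss(feat_real, feat_fake):
    """Squared L2 distance between the batch-mean discriminator features
    of real and generated samples; feat_* are (batch, features) arrays."""
    return np.sum((feat_real.mean(axis=0) - feat_fake.mean(axis=0)) ** 2)

real_f = np.ones((4, 8))
fake_f = np.ones((4, 8))
print(feature_matching_loss(real_f, fake_f))  # 0.0 when statistics match
```

The generator trained this way targets matching feature statistics rather than directly maximizing the discriminator's confusion, which tends to be a more stable signal.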
Instance Normalization
Normalization technique applied individually to each sample in a batch, particularly effective in style networks and GANs to decouple style from content in image generation.
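The per-sample, per-channel normalization can be sketched in numpy for an (N, C, H, W) tensor (function name illustrative; the learnable scale and shift parameters are omitted):

```python
import numpy as np

def instance_norm(x, eps=1e-5):
    """Normalize each (sample, channel) spatial map of an (N, C, H, W)
    tensor independently: statistics are computed only over H and W,
    never across the batch or across channels."""
    mean = x.mean(axis=(2, 3), keepdims=True)
    var = x.var(axis=(2, 3), keepdims=True)
    return (x - mean) / np.sqrt(var + eps)

x = np.random.default_rng(0).normal(size=(2, 3, 4, 4))
y = instance_norm(x)
print(np.allclose(y.mean(axis=(2, 3)), 0.0, atol=1e-6))  # True
```

Because each instance is normalized on its own, contrast and intensity statistics (often carrying "style") are removed from the content representation, which is why the technique works well in style transfer networks.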
Progressive Growing of GANs
Training strategy where the resolution of the generator and discriminator progressively increases, starting with low-resolution images and adding layers successively to achieve high-resolution generations.