
AI Glossary

The complete Artificial Intelligence dictionary

162 categories · 2,032 subcategories · 23,060 terms

GAN (Generative Adversarial Network)

Unsupervised learning architecture composed of two neural networks, a generator and a discriminator, that compete with each other to generate realistic synthetic data from random noise.
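The adversarial objective can be illustrated with a minimal NumPy sketch (the discriminator scores below are hypothetical toy values, not outputs of a real network): the discriminator is trained to label real data 1 and generated data 0, while the generator tries to push the discriminator's score on fakes toward 1.

```python
import numpy as np

def bce(probs, labels):
    # Binary cross-entropy, the standard GAN loss in practice.
    eps = 1e-12
    return -np.mean(labels * np.log(probs + eps)
                    + (1 - labels) * np.log(1 - probs + eps))

# Hypothetical discriminator outputs (probability "real") on a toy batch.
d_real = np.array([0.9, 0.8, 0.95])   # D(x) on real samples
d_fake = np.array([0.1, 0.2, 0.05])   # D(G(z)) on generated samples

# Discriminator loss: real -> label 1, fake -> label 0.
d_loss = bce(d_real, np.ones_like(d_real)) + bce(d_fake, np.zeros_like(d_fake))

# Generator loss (non-saturating form): wants D(G(z)) -> 1.
g_loss = bce(d_fake, np.ones_like(d_fake))
```

With a discriminator that currently classifies well, the generator's loss is large, which is exactly the pressure that drives it to produce more realistic samples.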


Discriminator

Neural network in a GAN trained to distinguish real data from artificially generated data, serving as a binary classifier in the adversarial training process.


Generator

Neural network in a GAN that transforms a latent noise vector into synthetic data, progressively learning to create increasingly realistic samples to fool the discriminator.


VAE (Variational Autoencoder)

Generative architecture based on variational inference that learns a probabilistic distribution in the latent space to generate new data while allowing continuous interpolation.


Variational Encoder

Part of a VAE that maps input data to the parameters (mean and variance) of a Gaussian distribution in the latent space, enabling stochastic sampling during generation.
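The stochastic sampling step can be sketched with the reparameterization trick (function name and the toy encoder outputs below are hypothetical): the encoder's mean and log-variance define a Gaussian, and a sample is drawn as z = mu + sigma * eps with eps from a standard normal.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_latent(mu, log_var, rng):
    # Reparameterization trick: z = mu + sigma * eps, eps ~ N(0, I).
    # Keeps sampling differentiable w.r.t. mu and log_var in a real VAE.
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * log_var) * eps

# Hypothetical encoder outputs for one input: a 4-dimensional latent.
mu = np.zeros(4)
log_var = np.zeros(4)          # variance 1 in every dimension
z = sample_latent(mu, log_var, rng)
```

As the predicted variance shrinks toward zero, the sample collapses onto the mean, which is why the log-variance output controls how stochastic the latent code is.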


Variational Decoder

Component of a VAE that reconstructs data from samples drawn in the latent space, learning to map latent points to realistic outputs.


KL Divergence (Kullback-Leibler)

Measure of dissimilarity between two probability distributions used as a regularization term in VAEs to constrain the latent space to follow a standard Gaussian distribution.
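For the diagonal Gaussian used in a VAE encoder, this regularization term has a closed form against the standard normal prior, 0.5 · Σ(σ² + μ² − 1 − log σ²), sketched here (function name is hypothetical):

```python
import numpy as np

def kl_to_standard_normal(mu, log_var):
    # KL( N(mu, diag(exp(log_var))) || N(0, I) ), summed over latent dims.
    return 0.5 * np.sum(np.exp(log_var) + mu**2 - 1.0 - log_var)

# A latent distribution identical to the prior has zero divergence.
print(kl_to_standard_normal(np.zeros(3), np.zeros(3)))  # → 0.0
```

The divergence grows as the encoder's distribution drifts from the prior, which is exactly the constraint that keeps the latent space well-behaved for sampling.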


Mode Collapse

Phenomenon in GANs where the generator produces only a limited number of distinct output types, ignoring the diversity of the training dataset and artificially minimizing the adversarial loss.


Latent Space

Reduced-dimensional vector space where data are represented in a compact form, allowing interpolation, arithmetic, and sampling operations for generating new data.
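The interpolation operation mentioned above can be sketched in a few lines (the latent codes below are hypothetical 2-dimensional toy vectors):

```python
import numpy as np

def lerp(z_a, z_b, t):
    # Linear interpolation between two latent codes; t in [0, 1].
    return (1.0 - t) * z_a + t * z_b

z_a = np.array([0.0, 0.0])
z_b = np.array([1.0, 2.0])
mid = lerp(z_a, z_b, 0.5)
```

Decoding each intermediate point of such a path typically yields a smooth visual transition between the two endpoints; for Gaussian latent spaces, spherical interpolation is often preferred over the linear form shown here.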


Pix2Pix

Conditional GAN architecture for image-to-image translation using paired image sets, applying adversarial loss combined with L1 loss to ensure structural consistency.


CycleGAN

GAN architecture capable of learning translation between domains without paired image sets, using cycle consistency loss to preserve characteristics of the original image.
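The cycle consistency idea, penalizing the L1 distance between an input and its round trip F(G(x)), can be sketched as follows (the scalar "translators" G and F below are hypothetical stand-ins for the two generator networks):

```python
import numpy as np

def cycle_consistency_loss(x, g, f):
    # L1 distance between x and its round trip F(G(x)).
    return np.mean(np.abs(f(g(x)) - x))

# Hypothetical toy "translators": G doubles, F halves (a perfect inverse).
g = lambda v: 2.0 * v
f = lambda v: 0.5 * v

x = np.array([1.0, -2.0, 3.0])
print(cycle_consistency_loss(x, g, f))  # → 0.0
```

When F perfectly inverts G the loss vanishes; any information G destroys that F cannot recover shows up as a positive penalty, which is what forces the translations to preserve content.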


StyleGAN

Advanced GAN architecture using a mapping network and adaptive style blocks to hierarchically control visual features at different spatial scales in image generation.


Deep Convolutional GAN (DCGAN)

Pioneering GAN architecture exclusively using convolutional layers with specific architectural constraints like the absence of pooling and the use of batch normalization to stabilize training.


Wasserstein GAN (WGAN)

GAN variant using the Wasserstein distance as its training metric, offering better stability and reducing mode collapse thanks to more informative gradients.
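The key practical change is the loss: the critic outputs unbounded scores rather than probabilities, and the losses are simple means of those scores (a minimal sketch; the critic scores below are hypothetical, and the Lipschitz constraint required by WGAN — weight clipping or gradient penalty — is omitted):

```python
import numpy as np

def wgan_critic_loss(d_real, d_fake):
    # Critic minimizes -(E[D(x)] - E[D(G(z))]): raw scores, no sigmoid.
    return -(np.mean(d_real) - np.mean(d_fake))

def wgan_generator_loss(d_fake):
    # Generator pushes the critic's scores on fakes upward.
    return -np.mean(d_fake)

# Hypothetical critic scores on a toy batch.
d_real = np.array([2.0, 3.0, 4.0])
d_fake = np.array([-1.0, 0.0, 1.0])
print(wgan_critic_loss(d_real, d_fake))  # → -3.0
```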


Reconstruction Loss

Loss function in autoencoders measuring the difference between original input and reconstructed output, typically implemented as mean squared error or binary cross-entropy.
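Both variants mentioned above are one-liners (function names are hypothetical; the input/reconstruction pair below is a toy example with values in [0, 1]):

```python
import numpy as np

def mse_reconstruction(x, x_hat):
    # Mean squared error between input and reconstruction.
    return np.mean((x - x_hat) ** 2)

def bce_reconstruction(x, x_hat, eps=1e-12):
    # Binary cross-entropy for inputs scaled to [0, 1] (e.g. pixel values).
    return -np.mean(x * np.log(x_hat + eps) + (1 - x) * np.log(1 - x_hat + eps))

x = np.array([0.0, 1.0, 1.0])
x_hat = np.array([0.1, 0.9, 0.8])
print(round(mse_reconstruction(x, x_hat), 4))  # → 0.02
```

MSE suits continuous data; BCE is common when inputs are normalized to [0, 1], as with image pixels.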


Adversarial Loss

Loss function based on the zero-sum game between generator and discriminator, forcing the generator to minimize the discriminator's ability to distinguish real data from generated data.


Feature Matching

Regularization technique in GANs where the generator minimizes the distance between features extracted by the discriminator for real and generated data, improving training stability.
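A common form matches the batch-averaged statistics of an intermediate discriminator layer (a sketch under that assumption; the feature arrays below are hypothetical, not real discriminator activations):

```python
import numpy as np

def feature_matching_loss(feats_real, feats_fake):
    # Squared L2 distance between batch-averaged discriminator features.
    return np.sum((np.mean(feats_real, axis=0) - np.mean(feats_fake, axis=0)) ** 2)

# Hypothetical intermediate features (batch of 2, 3 dimensions each).
feats_real = np.array([[1.0, 0.0, 2.0],
                       [3.0, 0.0, 0.0]])
feats_fake = np.array([[2.0, 0.0, 1.0],
                       [2.0, 0.0, 1.0]])
print(feature_matching_loss(feats_real, feats_fake))  # → 0.0
```

Because the generator only has to match feature statistics rather than fool the classifier outright, the training signal is smoother and less prone to oscillation.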


Instance Normalization

Normalization technique applied individually to each sample in a batch, particularly effective in style networks and GANs to decouple style from content in image generation.
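The per-sample, per-channel statistics are what distinguish it from batch normalization; a minimal NumPy sketch over an NCHW tensor (random toy data, no learnable affine parameters):

```python
import numpy as np

def instance_norm(x, eps=1e-5):
    # x: (N, C, H, W). Normalize each (sample, channel) map independently,
    # unlike batch norm, which pools statistics across the whole batch.
    mean = x.mean(axis=(2, 3), keepdims=True)
    var = x.var(axis=(2, 3), keepdims=True)
    return (x - mean) / np.sqrt(var + eps)

x = np.random.default_rng(0).normal(size=(2, 3, 4, 4))
y = instance_norm(x)
# Each (sample, channel) map now has ~zero mean and unit variance.
```

Because the statistics of each feature map (which carry style information such as contrast) are normalized away per instance, style can then be re-injected separately, as in style-transfer networks and StyleGAN's adaptive variant.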


Progressive Growing of GANs

Training strategy where the resolution of the generator and discriminator progressively increases, starting with low-resolution images and adding layers successively to achieve high-resolution generations.
