
AI Glossary

A comprehensive dictionary of Artificial Intelligence

162 categories · 2,032 subcategories · 23,060 terms

GAN (Generative Adversarial Network)

Unsupervised learning architecture composed of two neural networks, a generator and a discriminator, that compete with each other to generate realistic synthetic data from random noise.
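As an illustrative sketch (the helper name `gan_value` is hypothetical), the two-player minimax objective V(D, G) can be evaluated in plain Python for given discriminator outputs:

```python
import math

def gan_value(d_real, d_fake):
    """Value of the GAN minimax game V(D, G) for one batch:
    E[log D(x)] + E[log(1 - D(G(z)))].

    d_real: discriminator outputs D(x) on real samples (probabilities).
    d_fake: discriminator outputs D(G(z)) on generated samples.
    """
    return (sum(math.log(p) for p in d_real) / len(d_real)
            + sum(math.log(1.0 - p) for p in d_fake) / len(d_fake))

# A maximally confused discriminator outputs 0.5 everywhere,
# which yields the theoretical equilibrium value -2 * log(2).
v = gan_value([0.5, 0.5], [0.5, 0.5])
```

The discriminator ascends this value while the generator descends it, which is what makes the training adversarial.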


Discriminator

Neural network in a GAN trained to distinguish real data from artificially generated data, serving as a binary classifier in the adversarial training process.


Generator

Neural network in a GAN that transforms a latent noise vector into synthetic data, progressively learning to create increasingly realistic samples to fool the discriminator.


VAE (Variational Autoencoder)

Generative architecture based on variational inference that learns a probabilistic distribution in the latent space to generate new data while allowing continuous interpolation.
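Sampling from the learned distribution is usually done with the reparameterization trick, z = μ + σ·ε with ε ~ N(0, 1), which keeps the sampling step differentiable. A minimal sketch (the function name is illustrative):

```python
import math
import random

def reparameterize(mu, logvar, eps=None):
    """Sample z = mu + sigma * eps with eps ~ N(0, 1).

    mu, logvar: per-dimension mean and log-variance from the encoder.
    eps: optional fixed noise vector, useful for deterministic tests.
    """
    sigma = [math.exp(0.5 * lv) for lv in logvar]
    if eps is None:
        eps = [random.gauss(0.0, 1.0) for _ in mu]
    return [m + s * e for m, s, e in zip(mu, sigma, eps)]
```

With ε fixed to zero the sample collapses to the mean, which is handy for checking the plumbing.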


Variational Encoder

Part of a VAE that maps input data to the parameters (mean and variance) of a Gaussian distribution in the latent space, enabling stochastic sampling during generation.


Variational Decoder

Component of a VAE that reconstructs original data from samples of the latent space, learning to map latent points to realistic generations.


KL Divergence (Kullback-Leibler)

Measure of dissimilarity between two probability distributions used as a regularization term in VAEs to constrain the latent space to follow a standard Gaussian distribution.
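For a diagonal Gaussian against the standard normal, this regularizer has a closed form, -0.5 · Σ(1 + log σ² − μ² − σ²), sketched here in plain Python:

```python
import math

def kl_to_standard_normal(mu, logvar):
    """KL( N(mu, sigma^2) || N(0, 1) ) summed over latent dimensions,
    using the closed form -0.5 * sum(1 + log sigma^2 - mu^2 - sigma^2)."""
    return -0.5 * sum(1.0 + lv - m * m - math.exp(lv)
                      for m, lv in zip(mu, logvar))
```

The divergence is zero exactly when the encoder outputs the standard normal (μ = 0, log σ² = 0) and grows as the posterior drifts away from it.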


Mode Collapse

Phenomenon in GANs where the generator produces only a limited number of distinct output types, ignoring the diversity of the training dataset and artificially minimizing the adversarial loss.
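One crude way to spot collapse is to count how many distinct regions of output space the generator actually covers. The quantization heuristic below is an illustration, not a standard diagnostic:

```python
def count_modes(samples, resolution=0.5):
    """Rough diversity measure: number of distinct output 'modes'
    after quantizing 1-D samples to a grid of the given resolution."""
    return len({round(x / resolution) for x in samples})

# A healthy generator spreads over many modes; a collapsed one
# concentrates all outputs around a single value.
healthy = [0.1, 1.2, 2.9, 4.4, 5.8]
collapsed = [2.01, 2.02, 1.99, 2.00, 2.03]
```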


Latent Space

Reduced-dimensional vector space where data are represented in a compact form, allowing interpolation, arithmetic, and sampling operations for generating new data.
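The interpolation operation mentioned here is just a linear blend of two latent vectors, (1 − t)·z₁ + t·z₂, decoded at intermediate values of t:

```python
def lerp(z1, z2, t):
    """Linear interpolation between two latent vectors:
    (1 - t) * z1 + t * z2, for t in [0, 1]."""
    return [(1.0 - t) * a + t * b for a, b in zip(z1, z2)]
```

Decoding `lerp(z1, z2, t)` for a sweep of t values produces the smooth morphing between two generated samples that well-behaved latent spaces are known for.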


Pix2Pix

Conditional GAN architecture for image-to-image translation using paired image sets, applying adversarial loss combined with L1 loss to ensure structural consistency.
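The combined generator objective can be sketched as adversarial loss plus a weighted L1 term; the weight λ = 100 below follows the value reported for the original paper, but treat the helper and its defaults as illustrative:

```python
def pix2pix_generator_loss(adv_loss, output, target, lam=100.0):
    """Pix2Pix generator objective: adversarial loss + lambda * L1 loss.

    adv_loss: scalar adversarial loss already computed elsewhere.
    output, target: flattened generated and ground-truth images.
    """
    l1 = sum(abs(o - t) for o, t in zip(output, target)) / len(output)
    return adv_loss + lam * l1
```

The large λ makes the L1 term dominate, which is what enforces the structural consistency with the paired target image.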


CycleGAN

GAN architecture capable of learning translation between domains without paired image sets, using cycle consistency loss to preserve characteristics of the original image.
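The cycle consistency idea is that translating to the other domain and back should return the original image. A toy sketch with scalar "images" and invertible stand-in generators:

```python
def cycle_consistency_loss(x, G, F):
    """L1 distance between x and F(G(x)): the round trip through
    both generators should return to the original input."""
    x_cycled = [F(G(v)) for v in x]
    return sum(abs(a - b) for a, b in zip(x, x_cycled)) / len(x)

# Toy generators that are exact inverses give zero cycle loss.
G = lambda v: v + 1.0   # stand-in for "domain A -> domain B"
F = lambda v: v - 1.0   # stand-in for "domain B -> domain A"
```

In a real CycleGAN the loss is applied in both directions (x → G → F and y → F → G) over image tensors rather than scalars.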


StyleGAN

Advanced GAN architecture using a mapping network and adaptive style blocks to hierarchically control visual features at different spatial scales in image generation.


Deep Convolutional GAN (DCGAN)

Pioneering GAN architecture exclusively using convolutional layers with specific architectural constraints like the absence of pooling and the use of batch normalization to stabilize training.


Wasserstein GAN (WGAN)

GAN variant using the Wasserstein distance as its training metric, offering better stability and reduced mode collapse thanks to more informative gradients.
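For 1-D empirical distributions with equal sample counts, the Wasserstein-1 distance reduces to the mean absolute difference between the sorted samples, which gives a cheap way to build intuition for the metric:

```python
def wasserstein_1d(real, fake):
    """Empirical 1-D Wasserstein-1 distance between two equal-size
    sample sets: mean absolute difference of the sorted samples."""
    assert len(real) == len(fake)
    return (sum(abs(a - b) for a, b in zip(sorted(real), sorted(fake)))
            / len(real))
```

Unlike the Jensen-Shannon divergence implicit in the original GAN loss, this distance varies smoothly even when the two distributions do not overlap, which is the source of the better gradients.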


Reconstruction Loss

Loss function in autoencoders measuring the difference between original input and reconstructed output, typically implemented as mean squared error or binary cross-entropy.
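Both common choices can be written in a few lines of plain Python (inputs for BCE are assumed scaled to [0, 1]):

```python
import math

def mse(x, x_hat):
    """Mean squared error between input and reconstruction."""
    return sum((a - b) ** 2 for a, b in zip(x, x_hat)) / len(x)

def bce(x, x_hat, eps=1e-12):
    """Binary cross-entropy for inputs in [0, 1]; eps avoids log(0)."""
    return -sum(a * math.log(b + eps) + (1 - a) * math.log(1 - b + eps)
                for a, b in zip(x, x_hat)) / len(x)
```

A perfect reconstruction drives either loss to (numerically) zero; MSE suits continuous data, while BCE is the usual choice for binarized or normalized pixel intensities.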


Adversarial Loss

Loss function based on the zero-sum game between generator and discriminator, forcing the generator to minimize the discriminator's ability to distinguish real data from generated data.
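In practice two generator-side variants of this loss are common: the original saturating form and the non-saturating form that is usually preferred because its gradients stay useful early in training. A sketch with hypothetical helper names:

```python
import math

def g_loss_saturating(d_fake):
    """Original minimax generator loss: E[log(1 - D(G(z)))].
    Its gradient vanishes when the discriminator confidently
    rejects fakes (D(G(z)) -> 0)."""
    return sum(math.log(1.0 - p) for p in d_fake) / len(d_fake)

def g_loss_non_saturating(d_fake):
    """Common alternative: E[-log D(G(z))], which keeps gradients
    strong when the generator is still weak."""
    return -sum(math.log(p) for p in d_fake) / len(d_fake)
```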


Feature Matching

Regularization technique in GANs where the generator minimizes the distance between features extracted by the discriminator for real and generated data, improving training stability.
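The objective compares batch statistics rather than individual samples: a sketch using the squared L2 distance between mean feature vectors (helper name illustrative):

```python
def feature_matching_loss(real_feats, fake_feats):
    """Squared L2 distance between the mean discriminator feature
    vectors of a real batch and a generated batch.

    real_feats, fake_feats: lists of per-sample feature vectors.
    """
    def batch_mean(batch):
        return [sum(col) / len(batch) for col in zip(*batch)]
    mr, mf = batch_mean(real_feats), batch_mean(fake_feats)
    return sum((a - b) ** 2 for a, b in zip(mr, mf))
```

Matching statistics of an intermediate discriminator layer gives the generator a smoother target than the raw real/fake decision.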


Instance Normalization

Normalization technique applied individually to each sample in a batch, particularly effective in style networks and GANs to decouple style from content in image generation.
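The core operation, for one sample's features treated as a flat vector, is just standardization computed without looking at the rest of the batch:

```python
import math

def instance_norm(sample, eps=1e-5):
    """Normalize one sample's features to zero mean and unit variance,
    independently of every other sample in the batch."""
    mean = sum(sample) / len(sample)
    var = sum((v - mean) ** 2 for v in sample) / len(sample)
    return [(v - mean) / math.sqrt(var + eps) for v in sample]
```

In image networks the statistics are computed per sample and per channel over the spatial dimensions; batch normalization, by contrast, pools statistics across the whole batch.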


Progressive Growing of GANs

Training strategy where the resolution of the generator and discriminator progressively increases, starting with low-resolution images and adding layers successively to achieve high-resolution generations.
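The schedule itself is a simple doubling of spatial resolution from a small starting size up to the target; a sketch (starting at 4×4, as in the original progressive-growing setup):

```python
def resolution_schedule(start=4, final=1024):
    """Doubling resolution schedule used by progressive growing:
    start -> 2*start -> ... -> final (e.g. 4 -> 8 -> ... -> 1024)."""
    res, schedule = start, []
    while res <= final:
        schedule.append(res)
        res *= 2
    return schedule
```

At each step of the schedule a new layer pair is faded into the generator and discriminator before training continues at the higher resolution.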
