
AI Glossary

The complete glossary of Artificial Intelligence

162 categories
2,032 subcategories
23,060 terms
📖 terms

Diffusion Transformer

Hybrid architecture integrating multi-head attention mechanisms into the iterative diffusion process to enhance the overall coherence of generated data.

U-ViT

Variant of Vision Transformer where U-Net connections are integrated to effectively combine multi-scale features in diffusion models.

DiT (Diffusion Transformer)

Architecture replacing traditional U-Net convolutions with Transformer blocks in the diffusion process, using time embeddings for conditionality.
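
The core of this conditioning is a timestep embedding fed into every Transformer block. A minimal NumPy sketch of the standard sinusoidal variant (function name and dimensions are illustrative, not from any specific implementation):

```python
import numpy as np

def timestep_embedding(t, dim):
    # Sinusoidal embedding of the diffusion timestep t; a DiT feeds such
    # an embedding into each Transformer block to condition the denoiser
    # on how noisy the input currently is (minimal sketch).
    half = dim // 2
    freqs = np.exp(-np.log(10000.0) * np.arange(half) / half)
    args = t * freqs
    return np.concatenate([np.cos(args), np.sin(args)])

emb = timestep_embedding(t=25.0, dim=8)  # vector of length 8
```

Different timesteps map to distinct, smoothly varying vectors, which lets one network handle every noise level.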

Latent Diffusion Transformer

Model applying Transformer mechanisms in compressed latent space, reducing computational complexity while preserving generative quality.

Cross-Attention Diffusion

Mechanism allowing diffusion models to align with external conditions via cross-attention layers between noise and conditional embeddings.
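
A minimal single-head sketch in NumPy, with queries taken from the noisy tokens and keys/values from the condition (projection matrices omitted for brevity; shapes are illustrative):

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(noise_tokens, cond_tokens):
    # Queries come from the noisy latents, keys/values from the
    # condition (e.g. text embeddings); each noise token becomes a
    # weighted mix of condition tokens. Single head, no learned
    # projections, purely to show the data flow.
    d = noise_tokens.shape[-1]
    scores = noise_tokens @ cond_tokens.T / np.sqrt(d)
    return softmax(scores) @ cond_tokens

rng = np.random.default_rng(0)
x = rng.standard_normal((4, 16))   # 4 noisy-image tokens
c = rng.standard_normal((7, 16))   # 7 condition (e.g. text) tokens
out = cross_attention(x, c)        # 4 tokens, each mixed from c
```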

Transformer Denoiser

Transformer-based module that predicts the noise to remove at each step of the reverse (denoising) diffusion process.
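
The training interface is simple: noise a clean sample, then ask the denoiser to predict the added noise. A hedged NumPy sketch with a stand-in denoiser (a real one would be the Transformer; all names here are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

def denoiser(x_t, t):
    # Stand-in for a Transformer denoiser: just a linear map, purely to
    # show the epsilon-prediction training interface (t would condition
    # a real network via a time embedding).
    return 0.5 * x_t

x0 = rng.standard_normal((8, 16))            # clean data
eps = rng.standard_normal((8, 16))           # Gaussian noise
alpha_bar = 0.9                              # noise schedule at step t
x_t = np.sqrt(alpha_bar) * x0 + np.sqrt(1 - alpha_bar) * eps
loss = np.mean((denoiser(x_t, t=10) - eps) ** 2)  # epsilon-prediction MSE
```

Minimizing this MSE over random timesteps is the standard DDPM-style objective the denoiser is trained with.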

Patch Diffusion

Technique where data is divided into patches processed by Transformer attention mechanisms before the iterative diffusion process.
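
The patch step itself is the same front-end ViT and DiT use. A minimal NumPy sketch (assumes height and width are divisible by the patch size; shapes are illustrative):

```python
import numpy as np

def patchify(img, p):
    # Split an (H, W, C) image into non-overlapping p x p patches and
    # flatten each patch into one token, as in ViT/DiT front-ends.
    H, W, C = img.shape
    img = img.reshape(H // p, p, W // p, p, C)
    return img.transpose(0, 2, 1, 3, 4).reshape(-1, p * p * C)

tokens = patchify(np.zeros((32, 32, 3)), p=8)  # 16 tokens of length 192
```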

Adaptive Layer Normalization

Normalization method conditioned by time embeddings in Diffusion-Transformer architectures to stabilize training.
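
A minimal NumPy sketch of the idea: an ordinary LayerNorm whose scale and shift are regressed from the time embedding instead of being fixed learned constants (the `W_scale`/`W_shift` matrices stand in for learned weights and are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal((16, 32))          # 16 tokens, 32 channels
t_emb = rng.standard_normal(8)             # time embedding
W_scale = 0.01 * rng.standard_normal((8, 32))  # hypothetical learned weights
W_shift = 0.01 * rng.standard_normal((8, 32))

def ada_layer_norm(x, t_emb, W_scale, W_shift):
    # Normalize over channels, then modulate with a per-channel scale
    # and shift computed from the time embedding (adaLN-style).
    normed = (x - x.mean(-1, keepdims=True)) / (x.std(-1, keepdims=True) + 1e-6)
    return normed * (1 + t_emb @ W_scale) + t_emb @ W_shift

y = ada_layer_norm(x, t_emb, W_scale, W_shift)
```

Because the modulation depends on the timestep, one set of blocks can behave differently at every noise level.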

Self-Attention Noise Prediction

Use of self-attention to model long-range dependencies when predicting noise during the diffusion process.

Transformer Score Matching

Application of Transformer architectures to estimate the log-density gradient (score) in score-based diffusion models.
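
The link between noise prediction and the score can be checked numerically. For Gaussian noising, the conditional score works out to a rescaled negative noise, which is why an epsilon-predicting Transformer doubles as a score estimator (values below are arbitrary illustrations):

```python
import numpy as np

# For x_t = sqrt(ab) * x0 + sqrt(1 - ab) * eps, the conditional score
# grad_x log p(x_t | x0) = -(x_t - sqrt(ab) * x0) / (1 - ab)
#                        = -eps / sqrt(1 - ab),
# so predicting eps is equivalent (up to scale) to predicting the score.
rng = np.random.default_rng(1)
x0 = rng.standard_normal(16)
eps = rng.standard_normal(16)
ab = 0.8
x_t = np.sqrt(ab) * x0 + np.sqrt(1 - ab) * eps
score = -(x_t - np.sqrt(ab) * x0) / (1 - ab)
```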

Multi-Scale Transformer Diffusion

Hierarchical approach using Transformers at different scales to capture both fine details and global structure in generation.

Conditional Diffusion Transformer

Architecture integrating conditions (text, images, classes) through attention mechanisms in the Transformer diffusion process.

Rotary Position Embedding in Diffusion

Positional encoding technique applied to Transformer diffusion models to better capture spatial relationships in structured data.
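
A minimal NumPy sketch of the rotation itself: consecutive feature pairs are rotated by position-dependent angles, so dot products between tokens end up depending on their relative positions (shapes and base constant are illustrative):

```python
import numpy as np

def rope(x, pos):
    # Rotate each consecutive feature pair (x[2i], x[2i+1]) by an angle
    # that grows with position; a pure rotation, so vector norms are
    # preserved while relative position enters attention scores.
    d = x.shape[-1]
    theta = pos / (10000.0 ** (np.arange(d // 2) / (d // 2)))
    cos, sin = np.cos(theta), np.sin(theta)
    x1, x2 = x[..., 0::2], x[..., 1::2]
    out = np.empty_like(x)
    out[..., 0::2] = x1 * cos - x2 * sin
    out[..., 1::2] = x1 * sin + x2 * cos
    return out

v = np.arange(8.0)
r = rope(v, pos=3)  # same norm as v, rotated pairwise
```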

Diffusion-Guided Transformer

Model where the diffusion process guides the Transformer's attention to improve coherence and quality of structured generations.

Sparse Transformer Diffusion

Variant using sparse attention mechanisms to reduce computational complexity in high-resolution diffusion models.

Transformer Latent Space Diffusion

Diffusion process applied in the latent space learned by a Transformer autoencoder for efficient generation of structured data.

Diffusion-Aware Self-Attention

Modified self-attention mechanism that accounts for the current noise level in the iterative diffusion process.

Hierarchical Transformer Diffusion

Multi-level architecture where Transformers progressively generate increasingly refined representations through diffusion.
