
AI Glossary

The Complete Artificial Intelligence Dictionary

162 Categories · 2,032 Subcategories · 23,060 Terms

Knowledge Distillation

Technique in which a large diffusion model (the teacher) trains a smaller, faster model (the student) to reproduce its outputs, reducing the computational cost of inference.
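The training objective can be sketched with toy stand-ins. Below, a minimal numpy example (purely illustrative, not a real diffusion model): the "teacher" is a fixed linear map, and the "student" is trained by gradient descent on an MSE loss against the teacher's outputs only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins (illustrative): the "teacher" is a fixed linear denoiser;
# the "student" is trained purely to reproduce the teacher's outputs,
# which is the core idea of distillation.
W_teacher = rng.normal(size=(8, 8))
x = rng.normal(size=(128, 8))              # fixed batch of noisy inputs
target = x @ W_teacher                     # teacher outputs = regression targets

W_student = np.zeros((8, 8))               # student starts untrained
lr = 0.05
for _ in range(2000):
    pred = x @ W_student
    grad = x.T @ (pred - target) / len(x)  # gradient of the MSE distillation loss
    W_student -= lr * grad

# The student now reproduces the teacher's mapping almost exactly.
print(np.abs(W_student - W_teacher).max() < 1e-2)  # True
```

In a real setting the student is a smaller neural network and the loss is computed on noisy latents, but the teacher-output-as-target structure is the same.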

Progressive Distillation

Iterative distillation method in which each student is trained to match two of its teacher's sampling steps with a single step, then serves as the teacher for the next round; the step count halves each iteration, yielding an exponential acceleration of the sampling process.
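The halving schedule is easy to see in a short sketch (step counts illustrative): each round's student becomes the next round's teacher with half as many steps.

```python
# Progressive distillation halves the number of sampling steps per round:
# the student of round k becomes the teacher of round k+1 (counts illustrative).
steps = 1024
schedule = [steps]
while steps > 1:
    steps //= 2   # the student matches two teacher steps with one of its own
    schedule.append(steps)

print(schedule)  # [1024, 512, 256, 128, 64, 32, 16, 8, 4, 2, 1]
```

Ten rounds of distillation thus take a 1024-step sampler down to a single step.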

Scheduler

Algorithm defining the sequence of noise levels and time steps for the denoising process, directly influencing the speed and quality of generation.
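A concrete example of such a sequence is the linear beta schedule used by the original DDPM setup; the sketch below (with the commonly used default values) computes the per-step noise variances and the cumulative signal-retention terms a sampler consults at each timestep.

```python
import numpy as np

T = 1000
# Linear beta schedule with the common DDPM default endpoints.
betas = np.linspace(1e-4, 0.02, T)     # per-step noise variance
alphas = 1.0 - betas
alpha_bars = np.cumprod(alphas)        # cumulative signal retention

# The scheduler tells the sampler how noisy x_t is at timestep t:
# x_t = sqrt(alpha_bar_t) * x_0 + sqrt(1 - alpha_bar_t) * noise
print(round(float(alpha_bars[0]), 4))  # ~0.9999: almost no noise at t = 0
print(alpha_bars[-1] < 1e-4)           # near-pure noise at t = T
```

Fast samplers differ mainly in how many and which of these T levels they actually visit.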

DDIM Scheduler (Denoising Diffusion Implicit Models)

Deterministic scheduler that can generate samples in far fewer steps than a standard DDPM scheduler by removing the stochasticity of the denoising process (the eta = 0 case of its sampling family).
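The deterministic (eta = 0) update can be written in a few lines of numpy. The sketch below assumes the model's noise prediction and the cumulative alphas at the current and previous timesteps are given; the sanity check feeds in the exact noise, so the step recovers the correct previous latent.

```python
import numpy as np

def ddim_step(x_t, eps_pred, a_bar_t, a_bar_prev):
    """One deterministic DDIM update (eta = 0), given the model's noise
    prediction eps_pred and cumulative alphas at the two timesteps."""
    x0_pred = (x_t - np.sqrt(1 - a_bar_t) * eps_pred) / np.sqrt(a_bar_t)
    return np.sqrt(a_bar_prev) * x0_pred + np.sqrt(1 - a_bar_prev) * eps_pred

# Sanity check with the exact noise: the step lands on the correct x_{t-1}.
rng = np.random.default_rng(0)
x0, eps = rng.normal(size=4), rng.normal(size=4)
a_t, a_prev = 0.5, 0.9
x_t = np.sqrt(a_t) * x0 + np.sqrt(1 - a_t) * eps
x_prev = ddim_step(x_t, eps, a_t, a_prev)
print(np.allclose(x_prev, np.sqrt(a_prev) * x0 + np.sqrt(1 - a_prev) * eps))  # True
```

Because no fresh noise is injected, the same input always produces the same trajectory, which is what allows large jumps between timesteps.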

DPM-Solver Scheduler

High-order solver for the diffusion ordinary differential equation (ODE), designed to accelerate diffusion model sampling; it achieves high quality with very few function evaluations (often around 10-20).

One-Step Sampling

Distillation objective where the student model is trained to generate a clean output from a noisy input in a single denoising step.

Latent Resampling

Acceleration strategy that modifies the denoising trajectory in latent space by resampling intermediate states to reduce the total number of required steps.

Consistency Models

Family of generative models trained to map any point on a noise trajectory directly to the starting point of the trajectory, enabling sampling in a single step or very few steps.

Trajectory Distillation

A distillation variant where the student model learns to imitate the complete denoising trajectory of the teacher model across multiple steps, rather than focusing on a single step.

Linear Multistep Scheduler

Scheduler based on linear multistep methods, optimized for fast and stable convergence with a low number of sampling steps.

Token Merging Acceleration

Technique that reduces the computational cost of the denoiser by merging semantically similar tokens in its attention blocks (of a U-Net or diffusion Transformer) at each denoising step.

Uniform Denoising

Sampling strategy that uses uniformly spaced timesteps, often combined with advanced solvers to maximize efficiency with a reduced number of steps.
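Selecting uniformly spaced timesteps out of a longer training schedule is a one-liner; the sketch below (spacing convention illustrative) picks 10 evenly spaced steps from a 1000-step schedule, running from T-1 down to 0.

```python
import numpy as np

# Pick 10 uniformly spaced timesteps out of a 1000-step training schedule,
# ordered from most noisy (T - 1) to least noisy (0).
T, n_steps = 1000, 10
timesteps = np.linspace(T - 1, 0, n_steps).round().astype(int)
print(timesteps.tolist())  # [999, 888, 777, 666, 555, 444, 333, 222, 111, 0]
```

A solver such as DPM-Solver is then evaluated only at these selected timesteps.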

Adversarial Distillation

Approach where a discriminator network is used to help the student model learn the distribution characteristics of the teacher model, improving the fidelity of fast generation.

Karras Scheduler

Noise scheduler (from Karras et al.) that spaces noise levels along a smoother, more gradual curve, often used to improve the stability and quality of sampling at low step counts.
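The schedule has a closed form: noise levels are interpolated in sigma^(1/rho) space and raised back to the power rho, which clusters steps near the low-noise end. The sketch below uses the default values from the Karras et al. (2022) EDM paper.

```python
import numpy as np

def karras_sigmas(n, sigma_min=0.002, sigma_max=80.0, rho=7.0):
    """Noise levels from Karras et al. (2022); rho controls how densely
    steps cluster near sigma_min. Defaults are the paper's EDM values."""
    ramp = np.linspace(0, 1, n)
    min_inv = sigma_min ** (1 / rho)
    max_inv = sigma_max ** (1 / rho)
    return (max_inv + ramp * (min_inv - max_inv)) ** rho

sigmas = karras_sigmas(10)
print(round(float(sigmas[0]), 1))   # 80.0  (starts at sigma_max)
print(round(float(sigmas[-1]), 3))  # 0.002 (ends at sigma_min)
```

With rho = 7, most of the 10 steps fall at small sigmas, where fine detail is resolved.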

Accelerated Stochastic Sampling

Methods that retain a stochastic component in the denoising process while using advanced schedulers or solvers to reduce the number of steps, balancing diversity and speed.
