
AI Glossary

The Complete Artificial Intelligence Dictionary

162 categories · 2,032 subcategories · 23,060 terms

Defensive Distillation

Defense method training a network to learn the soft probabilities of a pre-trained model, reducing sensitivity to adversarial perturbations by smoothing the decision surface.
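The softening step at the core of the method can be sketched with a temperature-scaled softmax; the student network is then trained on the resulting soft probabilities. The teacher logits below are hypothetical, chosen only to illustrate the effect of temperature:

```python
import numpy as np

def softmax_with_temperature(logits, T):
    """Temperature-scaled softmax: higher T yields softer probabilities."""
    z = np.asarray(logits, dtype=float) / T
    z -= z.max()  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

# Teacher logits for one example (hypothetical). The student is trained
# on the soft distribution produced at a high temperature T.
teacher_logits = [8.0, 2.0, 1.0]
hard = softmax_with_temperature(teacher_logits, T=1.0)   # near one-hot
soft = softmax_with_temperature(teacher_logits, T=20.0)  # softened targets
```

The soft targets preserve the class ranking but spread probability mass across classes, which is what smooths the student's decision surface.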


Obfuscated Gradients

Phenomenon where defenses intentionally or accidentally mask gradients, creating a false impression of robustness while remaining vulnerable to alternative attacks.


Gradient Shattering

Technique introducing discontinuities or oscillations in the gradient landscape to disrupt iterative optimization-based attack methods.


Gradient Regularization

Approach penalizing high gradients during training to reduce the model's sensitivity to small input perturbations and improve overall robustness.
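A minimal numpy sketch of such a penalty term, using central finite differences in place of autodiff; added to the training loss, it penalizes models whose loss is steep around the input. The toy loss functions are hypothetical:

```python
import numpy as np

def input_gradient_penalty(loss_fn, x, eps=1e-5):
    """Squared L2 norm of the loss gradient w.r.t. the input, estimated
    by central finite differences. Adding this to the training loss
    penalizes sensitivity to small input perturbations."""
    x = np.asarray(x, dtype=float)
    g = np.zeros_like(x)
    for i in range(x.size):
        d = np.zeros_like(x)
        d.flat[i] = eps
        g.flat[i] = (loss_fn(x + d) - loss_fn(x - d)) / (2 * eps)
    return float(np.sum(g ** 2))

# Toy losses (hypothetical): a steep loss surface gets a larger penalty.
steep = lambda x: float(10.0 * x.sum())
flat = lambda x: float(0.1 * x.sum())
x0 = np.zeros(3)
```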


Randomized Smoothing

Method certifying robustness by adding Gaussian noise to inputs and classifying by majority vote over the noisy copies, yielding provable robustness guarantees within a perturbation radius determined by the noise level.
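The smoothed classifier can be sketched as a Monte Carlo majority vote over Gaussian-perturbed copies of the input; computing the certified radius from the vote margin is omitted here. The base classifier below is a hypothetical stand-in:

```python
import numpy as np

def smoothed_predict(classify, x, sigma=0.25, n=1000, seed=0):
    """Monte Carlo estimate of the smoothed classifier g(x): the class
    returned most often by classify(x + Gaussian noise)."""
    rng = np.random.default_rng(seed)
    counts = {}
    for _ in range(n):
        noisy = x + rng.normal(0.0, sigma, size=x.shape)
        c = classify(noisy)
        counts[c] = counts.get(c, 0) + 1
    return max(counts, key=counts.get)

# Toy base classifier (hypothetical): thresholds the input mean.
base = lambda x: int(x.mean() > 0.5)
x = np.full(4, 0.9)
```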


Input Transformation

Defense applying non-differentiable or non-invertible transformations to inputs before classification, such as compression or resampling, to neutralize adversarial perturbations.
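One such transformation can be sketched as lossy resampling: average-pooling discards fine-grained detail, so a high-frequency perturbation is destroyed while the coarse signal survives. The signals below are hypothetical one-dimensional stand-ins for image rows:

```python
import numpy as np

def resample_defense(x, factor=2):
    """Non-invertible resampling: average-pool by `factor`, then expand
    back to the original length. Fine-grained detail is discarded."""
    x = np.asarray(x, dtype=float)
    pooled = x.reshape(-1, factor).mean(axis=1)
    return np.repeat(pooled, factor)

clean = np.array([0.2, 0.2, 0.8, 0.8])
# A high-frequency perturbation that cancels within each pooling window.
perturbed = clean + np.array([0.05, -0.05, -0.05, 0.05])
```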


Feature Squeezing

Technique reducing input feature complexity by decreasing pixel precision or color space, thereby eliminating imperceptible perturbations used in attacks.
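Bit-depth reduction, one of the standard squeezers, can be sketched in a few lines: values in [0, 1] are snapped to a coarse grid, so nearly identical pixels collapse to the same level. The pixel values are hypothetical:

```python
import numpy as np

def squeeze_bit_depth(x, bits):
    """Reduce pixel precision to `bits` bits: snap each value in [0, 1]
    to the nearest of 2**bits quantization levels."""
    levels = 2 ** bits - 1
    return np.round(np.asarray(x, dtype=float) * levels) / levels

# Three nearly identical pixel values (hypothetical); after squeezing
# to 3 bits the tiny differences vanish.
pixels = np.array([0.50, 0.505, 0.51])
squeezed = squeeze_bit_depth(pixels, bits=3)
```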


Non-differentiable Defense

Protection strategy integrating non-differentiable operations into the classification pipeline to prevent attackers from efficiently computing gradients.


Gradient Obfuscation

Set of techniques making gradients unusable by numerical methods, including masking, shattering, or falsifying gradient information.


Certified Defenses

Approaches providing provable mathematical guarantees on model robustness within a specified perturbation radius, avoiding false impressions of security.


Jacobian-based Saliency Map Attack Defense

Countermeasures specifically designed to neutralize Jacobian-based saliency map attacks by modifying network structure or propagation mechanisms.


PGD-based Robustness

Evaluation and improvement of robustness using Projected Gradient Descent as a reference attack to measure and optimize model resistance.
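The reference attack itself can be sketched as an L-infinity PGD loop: signed gradient steps that maximize the loss, each followed by projection back into the epsilon-ball around the original input. The gradient function below is a hypothetical toy (a linear loss, so its gradient is constant):

```python
import numpy as np

def pgd_attack(grad_fn, x, eps=0.1, alpha=0.02, steps=10):
    """Projected Gradient Descent: take signed gradient ascent steps on
    the loss, projecting back into the L-inf ball of radius eps."""
    x = np.asarray(x, dtype=float)
    adv = x.copy()
    for _ in range(steps):
        adv = adv + alpha * np.sign(grad_fn(adv))
        adv = np.clip(adv, x - eps, x + eps)  # project into the eps-ball
    return adv

# Toy loss L(x) = sum(x), so the gradient is all ones (hypothetical).
grad = lambda x: np.ones_like(x)
adv = pgd_attack(grad, np.zeros(3))
```

A model's PGD robustness is then the accuracy it retains on such adversarial inputs.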


Ensemble Methods

Use of multiple models with different architectures or initializations to diversify responses and reduce the effectiveness of attacks targeting a single vulnerability.
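The voting step can be sketched as a plain majority vote; the models below are hypothetical stand-ins, with one member "fooled" on the given input:

```python
from collections import Counter

def ensemble_predict(models, x):
    """Majority vote over independently trained models; an attack that
    fools one model's decision boundary may not transfer to the rest."""
    votes = [m(x) for m in models]
    return Counter(votes).most_common(1)[0][0]

# Hypothetical models: two agree, one has been fooled on this input.
models = [lambda x: "cat", lambda x: "cat", lambda x: "dog"]
```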


Lipschitz Continuity

Mathematical property guaranteeing limited variation of outputs relative to inputs, used to design networks intrinsically robust to perturbations.
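For a linear layer y = Wx, the Lipschitz constant (in the L2 sense) is the spectral norm of W, which can be estimated by power iteration; rescaling each layer so its spectral norm is at most 1 bounds the network's overall Lipschitz constant by the product of the layer norms. A numpy sketch on a hypothetical weight matrix:

```python
import numpy as np

def spectral_norm(W, iters=50, seed=0):
    """Estimate the largest singular value of W by power iteration;
    for a linear map y = W x this is its L2 Lipschitz constant."""
    rng = np.random.default_rng(seed)
    v = rng.normal(size=W.shape[1])
    for _ in range(iters):
        u = W @ v
        u /= np.linalg.norm(u)
        v = W.T @ u
        v /= np.linalg.norm(v)
    return float(u @ W @ v)

W = np.array([[3.0, 0.0],
              [0.0, 1.0]])       # hypothetical layer weights
L = spectral_norm(W)             # largest singular value
W_constrained = W / max(L, 1.0)  # rescale so the layer is 1-Lipschitz
```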


Provably Robust Networks

Neural architectures designed with formal constraints mathematically guaranteeing their robustness under specified perturbation conditions.


Gradient-free Optimization Attacks

Attack methods bypassing gradient masking by using gradient-free optimization approaches such as genetic algorithms or simulated annealing.


Thermometer Encoding

Input encoding technique transforming continuous features into ordered binary representations, reducing the attack surface and improving robustness.
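The encoding can be sketched by comparing each scalar against a ladder of thresholds: bit i is set when the value exceeds threshold i, producing the ordered "thermometer" pattern. Input values and level count are hypothetical:

```python
import numpy as np

def thermometer_encode(x, levels):
    """Encode each scalar in [0, 1] as an ordered binary vector:
    bit i is 1 iff the value exceeds threshold i / levels."""
    thresholds = np.arange(levels) / levels  # e.g. [0, 0.25, 0.5, 0.75]
    return (np.asarray(x)[..., None] > thresholds).astype(int)

codes = thermometer_encode(np.array([0.0, 0.35, 0.9]), levels=4)
```

Because the bits are ordered, a small change in the input flips at most the bits near its threshold, unlike one-hot encodings.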
