
AI Glossary

The Complete Artificial Intelligence Dictionary

162 Categories · 2,032 Subcategories · 23,060 Terms

Evasion Attack

Attack technique that subtly modifies input data to deceive an AI model during the inference phase, without altering human perception of the information.
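A well-known instance of an evasion attack is the Fast Gradient Sign Method (FGSM), which nudges each input feature one step in the direction that increases the model's loss. A minimal sketch on a toy logistic-regression model; the weights, bias, and inputs below are illustrative assumptions, not taken from any real system:

```python
import math

# Toy logistic-regression "model": p(y=1|x) = sigmoid(w·x + b)
W = [2.0, -1.0]
B = 0.5

def predict(x):
    z = sum(wi * xi for wi, xi in zip(W, x)) + B
    return 1.0 / (1.0 + math.exp(-z))

def fgsm(x, y, eps):
    """One-step evasion: move each feature by eps in the direction
    that increases the loss, i.e. x + eps * sign(dL/dx)."""
    p = predict(x)
    # For cross-entropy loss, dL/dz = p - y, and dz/dx_i = w_i.
    grad = [(p - y) * wi for wi in W]
    return [xi + eps * (1 if g > 0 else -1 if g < 0 else 0)
            for xi, g in zip(x, grad)]

x = [1.0, 1.0]                # clean input, true label y = 1
x_adv = fgsm(x, y=1, eps=0.3)
# The adversarial input lowers the model's confidence in y = 1.
assert predict(x_adv) < predict(x)
```

The perturbation is bounded by eps in every coordinate, which is what keeps it subtle to a human observer.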


DeepFool Attack

Iterative algorithm that calculates the minimum distance to decision boundaries to produce adversarial examples with the smallest possible perturbations.
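For a linear binary classifier f(x) = w·x + b, DeepFool's minimal perturbation has a closed form: the orthogonal projection of x onto the decision hyperplane, plus a small overshoot to cross it. A sketch of the iterative loop under that linear assumption (the model and input are illustrative):

```python
def deepfool_linear(x, w, b, overshoot=0.02, max_iter=50):
    """DeepFool for a linear binary classifier f(x) = w·x + b:
    repeatedly take the smallest L2 step toward the decision boundary
    (for a truly linear model a single step suffices)."""
    f0 = sum(wi * xi for wi, xi in zip(w, x)) + b
    x_adv = list(x)
    for _ in range(max_iter):
        f = sum(wi * xi for wi, xi in zip(w, x_adv)) + b
        if f * f0 <= 0:                   # sign flipped: label changed
            break
        # Closest point on the hyperplane w·x + b = 0, plus overshoot.
        scale = -(1 + overshoot) * f / sum(wi * wi for wi in w)
        x_adv = [xi + scale * wi for xi, wi in zip(x_adv, w)]
    return x_adv

# Illustrative model and input (assumed, not from a real system).
w, b = [1.0, -2.0], 0.1
x = [3.0, 1.0]                            # f(x) = 1.1 > 0
x_adv = deepfool_linear(x, w, b)
f_adv = sum(wi * xi for wi, xi in zip(w, x_adv)) + b
assert f_adv < 0                          # crossed the decision boundary
```

For nonlinear models, the real algorithm relinearizes f around the current iterate at each step, which is why it is iterative.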


Universal Adversarial Perturbation

Single perturbation capable of effectively deceiving a model across a wide range of different inputs, without requiring recalculation for each sample.


Transferability Attack

Exploitation of the phenomenon where adversarial examples generated against one model remain effective against other models with different architectures.


Lp Distance Attack

Family of attacks that measure and limit the amplitude of perturbations according to different norms (L0, L1, L2, L∞) to control their perceptibility.
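The four norms used to budget these attacks can be computed directly; a small sketch with an illustrative perturbation vector:

```python
def lp_norms(delta):
    """L0, L1, L2 and L∞ norms of a perturbation vector,
    as used to budget adversarial attacks."""
    l0 = sum(1 for d in delta if d != 0)       # count of changed features
    l1 = sum(abs(d) for d in delta)            # total absolute change
    l2 = sum(d * d for d in delta) ** 0.5      # Euclidean amplitude
    linf = max(abs(d) for d in delta)          # largest single change
    return l0, l1, l2, linf

delta = [0, 3, -4, 0]                          # illustrative perturbation
assert lp_norms(delta) == (2, 7, 5.0, 4)
```

Each norm matches a perceptibility goal: L0 limits how many features change, L∞ limits how much any one feature changes, and L1/L2 bound the overall magnitude.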


Score-Based Attack

Black-box attack that uses the model's confidence scores to estimate gradients and build effective adversarial examples.
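The gradient-estimation step can be sketched with central finite differences over the score function; the toy model below is an assumption standing in for a real black-box API:

```python
import math

def toy_score(x):
    """Black-box stand-in: exposes only a confidence score, no gradients.
    (The weights here are illustrative assumptions.)"""
    return 1.0 / (1.0 + math.exp(-(x[0] - 2 * x[1])))

def estimate_gradient(score_fn, x, h=1e-4):
    """Core of a score-based attack: estimate the input gradient from
    confidence scores alone via finite differences (2 queries per dim)."""
    grad = []
    for i in range(len(x)):
        xp, xm = list(x), list(x)
        xp[i] += h
        xm[i] -= h
        grad.append((score_fn(xp) - score_fn(xm)) / (2 * h))
    return grad

x = [0.5, 0.2]
g = estimate_gradient(toy_score, x)
# Step against the estimated gradient to lower the model's confidence.
x_adv = [xi - 0.5 * gi for xi, gi in zip(x, g)]
assert toy_score(x_adv) < toy_score(x)
```

The query cost grows linearly with input dimension, which is why practical score-based attacks add sampling tricks to cut the number of queries.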


Decision-Based Attack

Extreme black-box attack that uses only the predicted output labels from the model to generate adversarial perturbations.
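One core primitive of such label-only attacks (used, for example, to initialize boundary attacks) is a binary search along the segment between the clean input and any misclassified point, shrinking the perturbation while keeping the flipped label. A sketch with an assumed toy classifier:

```python
def label_only(x):
    """Toy black-box model: returns only the predicted label.
    (Decision rule is an illustrative assumption.)"""
    return 1 if x[0] + x[1] > 1.0 else 0

def decision_attack(x, x_start, steps=40):
    """Label-only attack: binary-search along the segment from the clean
    input x to a known-misclassified point x_start, moving as close to x
    as possible while the label stays flipped."""
    y = label_only(x)
    lo, hi = 0.0, 1.0          # hi = fully at x_start (wrong label)
    for _ in range(steps):
        mid = (lo + hi) / 2
        cand = [(1 - mid) * a + mid * b for a, b in zip(x, x_start)]
        if label_only(cand) != y:
            hi = mid           # still adversarial: tighten toward x
        else:
            lo = mid
    return [(1 - hi) * a + hi * b for a, b in zip(x, x_start)]

x = [0.2, 0.3]                 # classified 0
x_start = [1.0, 1.0]           # classified 1
x_adv = decision_attack(x, x_start)
assert label_only(x_adv) != label_only(x)
```

Full boundary attacks then walk along the decision surface to shrink the perturbation further, still using only predicted labels.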


Physical Attack

Attack where adversarial perturbations are applied to physical objects to deceive AI systems in real-world conditions.


Zero-Day Attack

Attack that exploits vulnerabilities unknown to the defense system, rendering traditional detection mechanisms ineffective.


Encoding Attack

Technique that modifies the encoded representations of data, rather than the raw data itself, to bypass input-based defenses.


Transformation Attack

Attack that applies geometric transformations (rotation, translation) to input data to fool the model without directly modifying any pixel values.
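The idea can be sketched as a search over translations of the input: no value is edited, yet a shifted copy gets a different prediction. The toy "peak position" classifier below is an illustrative assumption:

```python
def classify(x):
    """Toy classifier (illustrative): label 1 if the signal's peak
    lies in the first half, else 0."""
    return 1 if x.index(max(x)) < len(x) // 2 else 0

def translation_attack(x):
    """Transformation attack: search over circular shifts for one
    that flips the prediction, touching no individual values."""
    y = classify(x)
    for k in range(1, len(x)):
        shifted = x[-k:] + x[:-k]          # translate right by k
        if classify(shifted) != y:
            return shifted, k
    return None, None

x = [0.0, 0.9, 0.1, 0.0, 0.2, 0.1]         # peak at index 1 → label 1
x_adv, k = translation_attack(x)
assert classify(x) == 1 and classify(x_adv) == 0
```

Real variants search jointly over rotation angle and translation offset on images, exploiting the fact that many models are not invariant to these transformations.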


EOT (Expectation over Transformation) Attack

Optimization technique that makes attacks robust to random variations by optimizing over a distribution of possible transformations.
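The key step in EOT is replacing the gradient of the score with the gradient of its expectation under the transformation distribution, estimated by averaging over sampled transformations. A sketch using random scaling as the transformation and finite differences for gradients; the model and parameters are illustrative assumptions:

```python
import random

def score(x):
    """Toy model score with an assumed quadratic form."""
    return (x[0] - 2 * x[1]) ** 2

def num_grad(f, x, h=1e-5):
    """Central finite-difference gradient of f at x."""
    g = []
    for i in range(len(x)):
        xp, xm = list(x), list(x)
        xp[i] += h
        xm[i] -= h
        g.append((f(xp) - f(xm)) / (2 * h))
    return g

def eot_gradient(x, n=200, seed=0):
    """Expectation over Transformation: average the gradient of
    score∘transform over sampled transformations (random scaling here,
    standing in for real-world variation), so one perturbation stays
    effective across the whole distribution."""
    rng = random.Random(seed)
    acc = [0.0] * len(x)
    for _ in range(n):
        c = rng.uniform(0.8, 1.2)          # one sampled transformation
        g = num_grad(lambda v, c=c: score([c * vi for vi in v]), x)
        acc = [a + gi for a, gi in zip(acc, g)]
    return [a / n for a in acc]

x = [1.0, 0.2]
x_adv = [xi - 0.1 * gi for xi, gi in zip(x, eot_gradient(x))]
assert score(x_adv) < score(x)             # attack survives without scaling
```

This averaging is what makes EOT the standard tool behind physically robust attacks, where lighting, angle, and distance vary at test time.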


Adversarial Autoencoder Attack

Method that uses autoencoders to generate imperceptible perturbations while preserving the original semantics of the data.
