
AI Glossary

The Complete Artificial Intelligence Dictionary

162 categories · 2,032 subcategories · 23,060 terms
White-Box Attack

An attack in which the adversary has complete knowledge of the model's architecture and parameters (weights), enabling targeted exploitation of its vulnerabilities.

Fast Gradient Sign Method (FGSM)

A white-box attack that uses the sign of the loss gradient with respect to the input to generate an adversarial perturbation in a single step.
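
As an illustrative sketch (not part of this glossary), the single-step update x_adv = x + ε · sign(∇ₓ L(x, y)) can be written out for a toy logistic-regression model, where the input gradient has a closed form; the function name `fgsm` and all values here are hypothetical:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm(x, y, w, b, eps):
    """Single-step FGSM: x_adv = x + eps * sign(d loss / d x).

    For logistic regression with cross-entropy loss, the gradient of the
    loss with respect to the input x is (sigmoid(w.x + b) - y) * w.
    """
    grad = (sigmoid(w @ x + b) - y) * w
    return x + eps * np.sign(grad)

# Toy model and a point it correctly classifies as class 1.
w, b = np.array([2.0, -1.0]), 0.0
x, y = np.array([1.0, 0.2]), 1.0
x_adv = fgsm(x, y, w, b, eps=1.0)
# The perturbed point crosses the decision boundary: sigmoid(w @ x_adv + b) < 0.5
```

In practice the input gradient comes from automatic differentiation and ε is small relative to the input range; the closed-form gradient above is only for self-containment.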

L-BFGS Attack

White-box attack method based on the limited-memory BFGS optimization algorithm to find adversarial examples with minimal perturbation.

DeepFool

White-box attack algorithm that computes the minimum distance to the decision boundary by linearly approximating the classifier around the sample.
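
For an affine binary classifier f(x) = w·x + b the linear approximation is exact, so the DeepFool step has a closed form, r = −(f(x)/‖w‖²)·w. A minimal sketch under that assumption (the name `deepfool_affine` and the toy values are illustrative):

```python
import numpy as np

def deepfool_affine(x, w, b, overshoot=1e-3):
    """DeepFool step for an affine binary classifier f(x) = w.x + b.

    Because the classifier is already linear, the minimal L2 perturbation
    reaching the decision boundary is exact: r = -(f(x) / ||w||^2) * w.
    A small overshoot pushes the point just past the boundary.
    """
    f = w @ x + b
    r = -(f / (w @ w)) * w
    return x + (1.0 + overshoot) * r

w, b = np.array([1.0, 1.0]), 0.0
x = np.array([2.0, 1.0])          # f(x) = 3 > 0
x_adv = deepfool_affine(x, w, b)  # f(x_adv) lands just below 0
```

For a non-linear classifier, DeepFool iterates this step on a fresh local linearization until the predicted label flips.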

Carlini-Wagner Attack

A sophisticated white-box attack that uses non-linear optimization to generate minimally perturbed adversarial examples that are difficult to detect.

Jacobian-based Saliency Map Attack (JSMA)

White-box attack exploiting the Jacobian matrix to identify the most influential pixels and create targeted and imperceptible perturbations.

Projected Gradient Descent (PGD)

An iterative white-box attack extending FGSM with multiple gradient steps, each followed by a projection that constrains the perturbation to an allowed set.
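
A minimal sketch of the L∞ variant against a toy logistic-regression model (the name `pgd_linf`, the model, and all values are illustrative assumptions): each iteration takes an FGSM-style step of size α, then clips the iterate back into the ε-ball around the original input:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def pgd_linf(x, y, w, b, eps, alpha, steps):
    """L-infinity PGD against a toy logistic-regression model.

    Each iteration takes an FGSM-style step of size alpha, then projects
    the iterate back into the eps-ball around the original x via clipping.
    """
    x_adv = x.copy()
    for _ in range(steps):
        grad = (sigmoid(w @ x_adv + b) - y) * w   # input gradient of the loss
        x_adv = x_adv + alpha * np.sign(grad)
        x_adv = np.clip(x_adv, x - eps, x + eps)  # projection onto the ball
    return x_adv

w, b = np.array([2.0, -1.0]), 0.0
x, y = np.array([1.0, 0.2]), 1.0
x_adv = pgd_linf(x, y, w, b, eps=1.0, alpha=0.3, steps=10)
```

The smaller step size plus repeated projection is what distinguishes PGD from a single large FGSM step: the attack explores the ε-ball rather than committing to one gradient direction.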

Model Sensitivity Analysis

White-box technique evaluating how input variations affect model outputs to identify exploitable vulnerability points.

Optimal Lp Perturbation

White-box optimization problem seeking the smallest perturbation according to an Lp norm (L0, L2, or L∞) to fool the classifier.

Model Extraction Attack

White-box attack where the adversary accesses internal parameters to replicate or steal the full functionality of the trained model.

Backdoor in White-box Model

A vulnerability intentionally introduced into a white-box-accessible model, activated by specific triggers known to the attacker.

Gradient Inversion Attack

White-box attack reconstructing original training data by inverting the model's gradients, compromising data confidentiality.

Complete Evasion Method

A white-box attack strategy that exploits full knowledge of the model to create adversarial examples that reliably bypass the classifier.

Membership Inference Attack

White-box attack determining whether a specific sample was part of the training data by analyzing the model's detailed responses.

White-box Universal Perturbation

A single perturbation, computed with white-box access, capable of fooling the model across a wide range of inputs thanks to complete knowledge of the classifier.
