
AI Glossary

The complete glossary of Artificial Intelligence

162 categories
2,032 subcategories
23,060 terms

White-Box Attack

Attack where the adversary has complete knowledge of the model architecture, its parameters, and weights, enabling targeted exploitation of vulnerabilities.


Fast Gradient Sign Method (FGSM)

White-box attack technique that perturbs the input in the direction of the sign of the loss gradient, generating an adversarial example in a single optimization step.
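
A minimal sketch of the FGSM step, assuming a hypothetical white-box target: a logistic-regression classifier whose weights the attacker knows, so the loss gradient with respect to the input can be computed in closed form.

```python
import numpy as np

# Hypothetical white-box target: logistic regression with known weights.
w = np.array([2.0, -3.0, 1.0])
b = 0.5

def predict_prob(x):
    """P(y=1 | x) for the logistic model."""
    return 1.0 / (1.0 + np.exp(-(w @ x + b)))

def fgsm(x, y, eps):
    """One-step FGSM: move x by eps in the direction of the sign
    of the gradient of the cross-entropy loss w.r.t. the input."""
    # For logistic regression, dLoss/dx = (p - y) * w.
    grad = (predict_prob(x) - y) * w
    return x + eps * np.sign(grad)

x = np.array([1.0, 1.0, 1.0])   # clean input, true label y = 1
x_adv = fgsm(x, y=1, eps=0.4)
# The model's confidence in the true class drops below 0.5,
# so the predicted label flips.
```

With a real neural network the gradient would come from autodiff rather than this closed form, but the single sign-step is the same.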


L-BFGS Attack

White-box attack method based on the limited-memory BFGS optimization algorithm to find adversarial examples with minimal perturbation.


DeepFool

White-box attack algorithm that computes the minimum distance to the decision boundary by linearly approximating the classifier around the sample.
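
For an affine classifier the linear approximation is exact, so the DeepFool step has a closed form: project the sample onto the decision hyperplane. A minimal sketch, assuming a hypothetical linear binary classifier:

```python
import numpy as np

# Hypothetical linear binary classifier f(x) = w.x + b; sign(f) is the label.
w = np.array([3.0, 4.0])
b = -1.0

def deepfool_linear(x, overshoot=1e-4):
    """For an affine classifier the DeepFool step is exact: the minimal
    L2 perturbation projects x onto the decision hyperplane,
    r = -(f(x) / ||w||^2) * w."""
    f = w @ x + b
    r = -(f / (w @ w)) * w
    # A small overshoot pushes the point just past the boundary.
    return x + (1 + overshoot) * r

x = np.array([2.0, 2.0])
x_adv = deepfool_linear(x)   # label flips; ||x_adv - x|| is |f(x)| / ||w||
```

For non-linear classifiers DeepFool iterates this projection against a local linearization until the predicted label changes.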


Carlini-Wagner Attack

Sophisticated white-box attack using non-linear optimization to generate minimally perturbed adversarial examples that are difficult to detect.


Jacobian-based Saliency Map Attack (JSMA)

White-box attack exploiting the Jacobian matrix to identify the most influential pixels and create targeted and imperceptible perturbations.


Projected Gradient Descent (PGD)

Iterative white-box attack method extending FGSM with multiple gradient descent steps and a projection to constrain perturbations.
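
A minimal sketch of the iteration, reusing the same hypothetical logistic-regression target as an assumption: several small gradient-sign steps, each followed by a projection back onto the L∞ ball around the clean input.

```python
import numpy as np

# Hypothetical white-box target: logistic regression with known weights.
w = np.array([2.0, -3.0, 1.0])
b = 0.5

def predict_prob(x):
    return 1.0 / (1.0 + np.exp(-(w @ x + b)))

def pgd(x0, y, eps, alpha, steps):
    """Iterated FGSM: take several gradient-sign steps of size alpha,
    projecting back onto the L-infinity ball of radius eps around the
    clean input after each step."""
    x = x0.copy()
    for _ in range(steps):
        grad = (predict_prob(x) - y) * w    # dLoss/dx for logistic regression
        x = x + alpha * np.sign(grad)
        x = np.clip(x, x0 - eps, x0 + eps)  # projection step
    return x

x0 = np.array([1.0, 1.0, 1.0])
x_adv = pgd(x0, y=1, eps=0.4, alpha=0.1, steps=10)
```

On this linear toy model PGD reaches the same point as one FGSM step of size eps; on non-linear models the iteration and projection are what make it a much stronger attack.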


Model Sensitivity Analysis

White-box technique evaluating how input variations affect model outputs to identify exploitable vulnerability points.


Optimal Lp Perturbation

White-box optimization problem seeking the smallest perturbation according to an Lp norm (L0, L2, or L∞) to fool the classifier.
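
Written out as an optimization problem (a standard formulation, assuming a classifier f with logits f_i and inputs normalized to [0, 1]^n):

```latex
\min_{\delta} \; \|\delta\|_p
\quad \text{subject to} \quad
\arg\max_i f_i(x + \delta) \neq \arg\max_i f_i(x),
\qquad x + \delta \in [0, 1]^n,
\qquad p \in \{0, 2, \infty\}
```

The choice of norm shapes the attack: L0 bounds how many features change, L2 the overall energy of the change, and L∞ the largest per-feature change.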


Model Extraction Attack

White-box attack where the adversary accesses internal parameters to replicate or steal the full functionality of the trained model.


Backdoor in White-box Model

Vulnerability intentionally introduced in a white-box accessible model, activatable by specific triggers known to the attacker.


Gradient Inversion Attack

White-box attack reconstructing original training data by inverting the model's gradients, compromising data confidentiality.


Complete Evasion Method

White-box attack strategy exploiting full knowledge of the model to craft adversarial examples that reliably bypass the classifier.


Membership Inference Attack

White-box attack determining whether a specific sample was part of the training data by analyzing the model's detailed responses.
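
A minimal sketch of the simplest variant, loss thresholding, under an illustrative assumption: the model overfits its training points, so their loss is noticeably lower than on unseen points. All numbers below are made up for illustration.

```python
import numpy as np

def cross_entropy(p, y):
    """Binary cross-entropy loss for predicted probability p, label y."""
    return -(y * np.log(p) + (1 - y) * np.log(1 - p))

# Illustrative confidences the (assumed, overfit) model assigns to the
# true class of each sample:
member_probs     = np.array([0.99, 0.97, 0.98])  # training ("member") samples
non_member_probs = np.array([0.70, 0.55, 0.62])  # held-out samples

threshold = 0.1  # loss threshold chosen by the attacker

def infer_member(p, y=1):
    """Predict 'member' when the model's loss on the sample is
    suspiciously low."""
    return cross_entropy(p, y) < threshold
```

Stronger variants (e.g. shadow-model attacks) train an auxiliary classifier on such loss or confidence statistics instead of a hand-picked threshold.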


White-box Universal Perturbation

A single perturbation, generated with full white-box knowledge of the classifier, capable of fooling the model across a wide range of inputs.
