
AI Glossary

The complete artificial intelligence glossary

162 categories · 2 032 subcategories · 23 060 terms

White-Box Attack

Attack where the adversary has complete knowledge of the model's architecture, parameters, and weights, enabling targeted exploitation of its vulnerabilities.

Fast Gradient Sign Method (FGSM)

White-box attack technique using the gradient of the loss function to generate adversarial perturbations in a single optimization step.
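As a concrete illustration, here is a minimal FGSM sketch against a toy logistic-regression model; the weights `w`, `b` and the input `x` are arbitrary values chosen for the example, not taken from any real system:

```python
import numpy as np

# Toy white-box "model": logistic regression with known, fixed parameters.
w = np.array([1.0, -2.0, 0.5])
b = 0.1

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def input_gradient(x, y):
    # Analytic gradient of the binary cross-entropy loss w.r.t. the input:
    # dL/dx = (p - y) * w for logistic regression.
    p = sigmoid(w @ x + b)
    return (p - y) * w

def fgsm(x, y, eps):
    # Single step along the sign of the input gradient.
    return x + eps * np.sign(input_gradient(x, y))

x = np.array([0.5, 0.2, -0.3])
y = 1.0
x_adv = fgsm(x, y, eps=0.1)
```

The perturbation stays inside an L∞ ball of radius `eps` while increasing the loss on the true label.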

L-BFGS Attack

White-box attack method based on the limited-memory BFGS optimization algorithm to find adversarial examples with minimal perturbation.

DeepFool

White-box attack algorithm that computes the minimum distance to the decision boundary by linearly approximating the classifier around the sample.
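For an affine binary classifier, DeepFool's linear approximation is exact: the minimal L2 perturbation is the orthogonal projection onto the decision hyperplane. A minimal sketch with assumed toy parameters:

```python
import numpy as np

# Affine binary classifier f(x) = w·x + b; the decision boundary is the
# hyperplane f(x) = 0, so DeepFool's linearization step is exact here.
w = np.array([1.0, -2.0, 0.5])
b = 0.1

def deepfool_linear(x, overshoot=0.02):
    # Orthogonal projection onto the boundary, plus a small overshoot
    # so the sample actually crosses it.
    f = w @ x + b
    r = -(f / (w @ w)) * w
    return x + (1 + overshoot) * r

x = np.array([0.5, 0.2, -0.3])
x_adv = deepfool_linear(x)
```

For a deep network, the same projection is applied iteratively to successive linearizations of the classifier around the current point.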

Carlini-Wagner Attack

Sophisticated white-box attack that uses non-linear optimization to generate adversarial examples with minimal perturbations that are difficult to detect.

Jacobian-based Saliency Map Attack (JSMA)

White-box attack exploiting the Jacobian matrix to identify the most influential pixels and create targeted and imperceptible perturbations.
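A highly simplified, single-step sketch of the saliency idea for a toy logistic-regression model (all values are assumptions for illustration): compute each feature's influence on the output probability and perturb only the most influential one.

```python
import numpy as np

w = np.array([1.0, -2.0, 0.5])
b = 0.1

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def model(x):
    return sigmoid(w @ x + b)

def saliency_step(x, theta=0.5):
    # Saliency of each input feature: |d p / d x_i| = p(1-p)|w_i|.
    p = model(x)
    grad = p * (1.0 - p) * w
    i = np.argmax(np.abs(grad))           # most influential feature
    x_adv = x.copy()
    x_adv[i] -= theta * np.sign(grad[i])  # push the class-1 probability down
    return x_adv

x = np.array([0.5, 0.2, -0.3])
x_adv = saliency_step(x)
```

The full JSMA additionally combines gradients of the target and non-target classes into a saliency map and iterates feature by feature.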

Projected Gradient Descent (PGD)

Iterative white-box attack method extending FGSM with multiple gradient descent steps and a projection to constrain perturbations.
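A minimal PGD sketch on the same kind of toy logistic-regression model (weights and inputs are arbitrary example values): repeated signed-gradient steps, each followed by a projection back into the L∞ ball around the original input.

```python
import numpy as np

w = np.array([1.0, -2.0, 0.5])
b = 0.1

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def input_gradient(x, y):
    # dL/dx of binary cross-entropy for logistic regression.
    return (sigmoid(w @ x + b) - y) * w

def pgd(x0, y, eps=0.1, alpha=0.02, steps=20):
    x = x0.copy()
    for _ in range(steps):
        x = x + alpha * np.sign(input_gradient(x, y))  # FGSM-style step
        x = np.clip(x, x0 - eps, x0 + eps)             # project onto L∞ ball
    return x

x0 = np.array([0.5, 0.2, -0.3])
x_adv = pgd(x0, y=1.0)
```

With several small steps and a projection, PGD explores the constraint set more thoroughly than the single FGSM step it extends.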

Model Sensitivity Analysis

White-box technique evaluating how input variations affect model outputs to identify exploitable vulnerability points.
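One simple way to sketch this is a finite-difference sensitivity estimate per input dimension, here against an assumed toy logistic-regression model:

```python
import numpy as np

w = np.array([1.0, -2.0, 0.5])
b = 0.1

def model(x):
    return 1.0 / (1.0 + np.exp(-(w @ x + b)))

def sensitivity(x, h=1e-5):
    # Central finite differences: |d model / d x_i| for each input dimension.
    s = np.zeros_like(x)
    for i in range(len(x)):
        e = np.zeros_like(x)
        e[i] = h
        s[i] = abs(model(x + e) - model(x - e)) / (2 * h)
    return s

x = np.array([0.5, 0.2, -0.3])
s = sensitivity(x)
```

Dimensions with the largest sensitivity are the natural targets for a perturbation budget.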

Optimal Lp Perturbation

White-box optimization problem seeking the smallest perturbation according to an Lp norm (L0, L2, or L∞) to fool the classifier.
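For a linear classifier this problem has a closed form: by Hölder duality, the minimal Lp-norm perturbation that reaches the boundary f(x) = 0 has norm |f(x)| / ||w||_q, where q is the dual norm (1/p + 1/q = 1). A sketch with assumed toy values:

```python
import numpy as np

w = np.array([1.0, -2.0, 0.5])
b = 0.1
x = np.array([0.5, 0.2, -0.3])

f = w @ x + b  # signed score; the decision boundary is f = 0

# Minimal perturbation budget to reach the boundary under different norms:
d_l2 = abs(f) / np.linalg.norm(w, 2)    # L2 case (dual of L2 is L2)
d_linf = abs(f) / np.linalg.norm(w, 1)  # L∞ case (dual of L∞ is L1)
```

For non-linear models no closed form exists, which is why attacks such as Carlini-Wagner solve this as a numerical optimization problem.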

Model Extraction Attack

White-box attack where the adversary accesses internal parameters to replicate or steal the full functionality of the trained model.
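With direct white-box access, extraction amounts to copying the parameters. The sketch below shows the closely related functional-replication step for a hypothetical linear victim model, whose parameters can be recovered exactly even from probe queries alone:

```python
import numpy as np

rng = np.random.default_rng(0)
w_victim = np.array([1.0, -2.0, 0.5])  # parameters the attacker wants
b_victim = 0.1

def victim(X):
    return X @ w_victim + b_victim

X = rng.normal(size=(100, 3))  # probe inputs
Y = victim(X)                  # observed outputs

# Fit a surrogate by least squares; for a linear victim this recovers
# the parameters exactly (up to numerical precision).
A = np.hstack([X, np.ones((len(X), 1))])
theta, *_ = np.linalg.lstsq(A, Y, rcond=None)
w_stolen, b_stolen = theta[:-1], theta[-1]
```

For non-linear models the surrogate is instead trained to imitate the victim's outputs, trading exactness for approximation quality.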

Backdoor in White-box Model

Vulnerability intentionally introduced into a white-box accessible model, which can be activated by specific triggers known to the attacker.

Gradient Inversion Attack

White-box attack reconstructing original training data by inverting the model's gradients, compromising data confidentiality.
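For a single-sample gradient of a logistic-regression model (e.g. a gradient shared in federated learning), the inversion is exact: the weight gradient is the input scaled by the bias gradient. A minimal sketch with assumed toy values:

```python
import numpy as np

w = np.array([1.0, -2.0, 0.5])
b = 0.1

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# A client computes a single-sample gradient on its private input.
x_true = np.array([0.5, 0.2, -0.3])
y = 1.0
p = sigmoid(w @ x_true + b)
grad_w = (p - y) * x_true  # dL/dw
grad_b = p - y             # dL/db

# An attacker who sees only the gradients recovers the input exactly:
# grad_w = grad_b * x, hence x = grad_w / grad_b.
x_reconstructed = grad_w / grad_b
```

For deeper networks the same idea is posed as an optimization problem: find an input whose gradients match the observed ones.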

Complete Evasion Method

White-box attack strategy that exploits full knowledge of the model to craft adversarial examples that reliably bypass the classifier.

Membership Inference Attack

White-box attack determining whether a specific sample was part of the training data by analyzing the model's detailed responses.
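A common baseline is the loss-threshold rule: samples with unusually low loss under the model are guessed to be training members. A minimal sketch with an assumed toy logistic-regression model and illustrative points:

```python
import numpy as np

w = np.array([2.0, -1.0])
b = 0.0

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def nll(x, y):
    # Per-sample negative log-likelihood under the model.
    p = np.clip(sigmoid(w @ x + b), 1e-12, 1 - 1e-12)
    return -(y * np.log(p) + (1 - y) * np.log(1 - p))

def guess_member(x, y, threshold=0.3):
    # Loss-threshold rule: low loss suggests the sample was seen in training.
    return nll(x, y) < threshold

x_member = np.array([1.0, 0.0])    # well-fit point (low loss)
x_outsider = np.array([-1.0, 0.0]) # poorly-fit point (high loss)
```

The threshold is typically calibrated on held-out shadow data; white-box access to parameters and internal activations makes the inference more accurate.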

White-box Universal Perturbation

A single perturbation, generated with white-box access, that can fool the model across a wide range of inputs thanks to complete knowledge of the classifier.
