AI Glossary

The complete Artificial Intelligence dictionary

162 categories · 2,032 subcategories · 23,060 terms

White-Box Attack

Attack where the adversary has complete knowledge of the model architecture, its parameters, and weights, enabling targeted exploitation of vulnerabilities.

Fast Gradient Sign Method (FGSM)

White-box attack technique using the gradient of the loss function to generate adversarial perturbations in a single optimization step.
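As a concrete illustration, here is a minimal numpy sketch of FGSM against a hand-rolled logistic classifier; the weights, input, label, and epsilon are illustrative assumptions, not part of the definition above:

```python
import numpy as np

# Minimal FGSM sketch on a toy logistic classifier (illustrative values).

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def loss_grad_wrt_input(x, w, b, y):
    # Binary cross-entropy L = -[y log p + (1 - y) log(1 - p)], p = sigmoid(w.x + b).
    # White-box access to w gives the analytic gradient dL/dx = (p - y) * w.
    p = sigmoid(np.dot(w, x) + b)
    return (p - y) * w

def fgsm(x, w, b, y, eps):
    # Single step in the sign of the gradient, increasing the loss.
    return x + eps * np.sign(loss_grad_wrt_input(x, w, b, y))

w = np.array([2.0, -3.0])
b = 0.0
x = np.array([1.0, 0.5])              # clean logit w.x + b = 0.5 > 0
x_adv = fgsm(x, w, b, y=1.0, eps=0.6)
# the adversarial logit w.x_adv + b turns negative, flipping the prediction
```

Note the single-step nature: one gradient evaluation, one perturbation of at most eps per coordinate.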

L-BFGS Attack

White-box attack method based on the limited-memory BFGS optimization algorithm to find adversarial examples with minimal perturbation.

DeepFool

White-box attack algorithm that computes the minimum distance to the decision boundary by linearly approximating the classifier around the sample.
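For an affine binary classifier the DeepFool step has a closed form, which the following numpy sketch computes; the toy weights and input are illustrative assumptions:

```python
import numpy as np

# DeepFool closed-form step for an affine binary classifier f(x) = w.x + b:
# the nearest point on the decision boundary is x - (f(x) / ||w||^2) * w.
# For non-linear models DeepFool repeats this step on the local linearization;
# here one step suffices. Weights and input are illustrative assumptions.

def deepfool_linear(x, w, b, overshoot=0.02):
    f = np.dot(w, x) + b
    r = -(f / np.dot(w, w)) * w       # minimal L2 perturbation to the boundary
    return x + (1.0 + overshoot) * r  # small overshoot to actually cross it

w = np.array([2.0, -3.0])
b = 0.0
x = np.array([1.0, 0.5])              # clean logit = 0.5 > 0
x_adv = deepfool_linear(x, w, b)      # adversarial logit lands just below 0
```

The perturbation norm here is roughly f(x)/||w||, typically much smaller than a fixed-epsilon FGSM step.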

Carlini-Wagner Attack

Sophisticated white-box attack that uses non-linear optimization to generate minimally perturbed adversarial examples that are difficult to detect.

Jacobian-based Saliency Map Attack (JSMA)

White-box attack exploiting the Jacobian matrix to identify the most influential pixels and create targeted and imperceptible perturbations.

Projected Gradient Descent (PGD)

Iterative white-box attack method extending FGSM with multiple gradient descent steps and a projection to constrain perturbations.
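The iteration can be sketched in a few lines of numpy; the toy logistic model, step size alpha, radius eps, and step count are illustrative assumptions:

```python
import numpy as np

# PGD sketch: iterated FGSM-style steps, each projected back onto the
# L-infinity ball of radius eps around the clean input x0.

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def grad_wrt_input(x, w, b, y):
    # analytic white-box gradient of binary cross-entropy wrt the input
    return (sigmoid(np.dot(w, x) + b) - y) * w

def pgd(x0, w, b, y, eps, alpha, steps):
    x = x0.copy()
    for _ in range(steps):
        x = x + alpha * np.sign(grad_wrt_input(x, w, b, y))
        x = np.clip(x, x0 - eps, x0 + eps)   # projection onto the eps-ball
    return x

w = np.array([2.0, -3.0])
b = 0.0
x0 = np.array([1.0, 0.5])                    # clean logit = 0.5 > 0
x_adv = pgd(x0, w, b, y=1.0, eps=0.5, alpha=0.2, steps=5)
# x_adv stays within eps of x0 in every coordinate while the logit turns negative
```

The projection (the `np.clip` call) is what distinguishes PGD from simply repeating FGSM: the perturbation can never leave the allowed eps-ball, no matter how many steps are taken.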

Model Sensitivity Analysis

White-box technique evaluating how input variations affect model outputs to identify exploitable vulnerability points.

Optimal Lp Perturbation

White-box optimization problem seeking the smallest perturbation according to an Lp norm (L0, L2, or L∞) to fool the classifier.
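The optimization problem described above is commonly written as:

```latex
\min_{\delta} \;\|\delta\|_{p}
\quad \text{subject to} \quad
f(x + \delta) \neq f(x),
```

where f is the classifier, x the clean input, and p is typically 0, 2, or infinity, each norm capturing a different notion of "small" perturbation (few changed features, low energy, or small worst-case change per feature).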

Model Extraction Attack

White-box attack where the adversary accesses internal parameters to replicate or steal the full functionality of the trained model.

Backdoor in White-box Model

Vulnerability intentionally introduced in a white-box accessible model, activatable by specific triggers known to the attacker.

Gradient Inversion Attack

White-box attack reconstructing original training data by inverting the model's gradients, compromising data confidentiality.

Complete Evasion Method

White-box attack strategy exploiting all model knowledge to create adversarial examples guaranteeing classifier bypass.

Membership Inference Attack

White-box attack determining whether a specific sample was part of the training data by analyzing the model's detailed responses.

White-box Universal Perturbation

Single perturbation generated in a white-box setting that can fool the model across a wide range of inputs, thanks to complete knowledge of the classifier.
