
AI Glossary

The complete dictionary of Artificial Intelligence

162 categories · 2,032 subcategories · 23,060 terms

White-Box Attacks

Attacks where the adversary has complete knowledge of the target model's architecture and parameters.

15 terms
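
As a toy illustration: a minimal white-box sketch in the spirit of FGSM (Goodfellow et al.), assuming a numpy logistic-regression model whose weights the attacker can read directly; all names and values are illustrative.

```python
import numpy as np

# Toy linear model: p(y=1|x) = sigmoid(w.x + b). The attacker knows w and b,
# which is exactly what makes this a white-box setting.
rng = np.random.default_rng(0)
w, b = rng.normal(size=8), 0.1

def predict_proba(x):
    return 1.0 / (1.0 + np.exp(-(w @ x + b)))

def fgsm(x, y, eps):
    # FGSM: one step in the sign of the loss gradient w.r.t. the input.
    # For logistic loss with label y in {0, 1}: dL/dx = (p - y) * w.
    grad = (predict_proba(x) - y) * w
    return x + eps * np.sign(grad)

x, y = rng.normal(size=8), 1
x_adv = fgsm(x, y, eps=0.3)
print("clean p(y=1):", predict_proba(x), " adversarial:", predict_proba(x_adv))
```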

Black-Box Attacks

Attacks carried out without internal knowledge of the model, using only its observable inputs and outputs.

18 terms
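
A minimal query-only sketch, assuming the attacker can call a hypothetical query endpoint but sees nothing else; random search stands in here for more sophisticated score-based black-box methods.

```python
import numpy as np

rng = np.random.default_rng(1)
w, b = rng.normal(size=8), 0.0

def query(x):
    # The attacker may only call this; the internals stay hidden.
    return 1.0 / (1.0 + np.exp(-(w @ x + b)))

def random_search_attack(x, y, eps=0.3, steps=200):
    # Keep the best L-inf-bounded perturbation found so far, judged purely
    # by the model's output score: no gradients are ever used.
    best, best_loss = x, -np.inf
    for _ in range(steps):
        cand = x + eps * rng.choice([-1.0, 1.0], size=x.shape)
        loss = abs(query(cand) - y)   # distance of the score from the label
        if loss > best_loss:
            best, best_loss = cand, loss
    return best

x, y = rng.normal(size=8), 1
x_adv = random_search_attack(x, y)
print("clean:", query(x), " adversarial:", query(x_adv))
```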

Evasion Attacks

Subtle perturbations of input data to deceive the model during inference.

13 terms
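
A sketch of iterated evasion in the style of projected gradient descent (PGD), again against a toy linear model; the projection keeps the perturbation within an L-infinity budget so it stays subtle.

```python
import numpy as np

rng = np.random.default_rng(2)
w, b = rng.normal(size=8), 0.0

def proba(x):
    return 1.0 / (1.0 + np.exp(-(w @ x + b)))

def pgd(x, y, eps=0.2, alpha=0.05, steps=20):
    # Repeated gradient steps, each projected back into the L-inf ball
    # of radius eps around the clean input x.
    x_adv = x.copy()
    for _ in range(steps):
        grad = (proba(x_adv) - y) * w        # logistic-loss gradient w.r.t. input
        x_adv = x_adv + alpha * np.sign(grad)
        x_adv = np.clip(x_adv, x - eps, x + eps)
    return x_adv

x, y = rng.normal(size=8), 1
print("clean:", proba(x), " after evasion:", proba(pgd(x, y)))
```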

Poisoning Attacks

Injection of malicious data into the training set to compromise the model.

17 terms
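
A toy sketch, assuming a nearest-centroid learner: injecting mislabeled points into the training set drags a class centroid and degrades accuracy on clean data. The dataset and injection point are invented.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy binary task: two Gaussian blobs, learned by a nearest-centroid rule.
X = np.vstack([rng.normal(-2, 1, (100, 2)), rng.normal(2, 1, (100, 2))])
y = np.array([0] * 100 + [1] * 100)

def fit(X, y):
    return X[y == 0].mean(axis=0), X[y == 1].mean(axis=0)

def accuracy(c0, c1, X, y):
    pred = (np.linalg.norm(X - c1, axis=1) < np.linalg.norm(X - c0, axis=1)).astype(int)
    return (pred == y).mean()

# Poisoning: inject 60 points deep inside class-1 territory but labeled 0,
# dragging the class-0 centroid toward the opposite class.
X_poison = np.vstack([X, rng.normal(5, 0.1, (60, 2))])
y_poison = np.concatenate([y, np.zeros(60, dtype=int)])

print("clean model accuracy:   ", accuracy(*fit(X, y), X, y))
print("poisoned model accuracy:", accuracy(*fit(X_poison, y_poison), X, y))
```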

Model Extraction Attacks

Theft of parameters or functionality of a proprietary model through repeated queries.

17 terms
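
A minimal extraction sketch: a hypothetical victim API is queried on random inputs, and a surrogate is fit to the stolen input/output pairs; the victim, queries, and training loop are all illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)
w_victim, b_victim = rng.normal(size=5), 0.5

def victim(x):
    # Proprietary model: the attacker sees only the returned score.
    return 1.0 / (1.0 + np.exp(-(x @ w_victim + b_victim)))

# Label random queries with the victim, then fit a surrogate to the
# stolen pairs by gradient descent on the logistic loss.
X = rng.normal(size=(2000, 5))
y = victim(X)                          # soft labels stolen via the API

w_s, b_s = np.zeros(5), 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w_s + b_s)))
    w_s -= 1.0 * (X.T @ (p - y)) / len(X)
    b_s -= 1.0 * (p - y).mean()

X_test = rng.normal(size=(1000, 5))
agree = ((victim(X_test) > 0.5) == ((X_test @ w_s + b_s) > 0)).mean()
print("surrogate/victim agreement:", agree)
```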

Membership Inference Attacks

Determining whether a specific data point was part of the training set.

11 terms
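
A sketch of a confidence-threshold attack in the spirit of Yeom et al., assuming a toy "model" that is systematically more confident on training members; the score function is a deliberately overfit stand-in.

```python
import numpy as np

rng = np.random.default_rng(5)
X_train = rng.normal(size=(50, 10))
X_out = rng.normal(size=(50, 10))     # points never seen in training

def model_score(x):
    # Confidence proxy: similarity to the closest training point. An
    # overfit model is more confident on members than on fresh data.
    return np.exp(-np.linalg.norm(X_train - x, axis=1).min())

scores_in = np.array([model_score(x) for x in X_train])
scores_out = np.array([model_score(x) for x in X_out])

# Threshold attack: predict "member" when confidence exceeds a threshold
# chosen from the observed score distributions.
tau = np.median(np.concatenate([scores_in, scores_out]))
tpr = (scores_in > tau).mean()    # members correctly flagged
fpr = (scores_out > tau).mean()   # non-members wrongly flagged
print(f"attack TPR={tpr:.2f}, FPR={fpr:.2f}")
```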

Adversarial Training Defense

Training the model on generated adversarial examples to improve its robustness.

15 terms
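
A minimal sketch of the standard inner/outer loop, assuming a toy numpy logistic model with FGSM as the inner attacker; hyperparameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(6)

# Toy data: two Gaussian blobs.
X = np.vstack([rng.normal(-1, 1, (200, 4)), rng.normal(1, 1, (200, 4))])
y = np.array([0] * 200 + [1] * 200)

w, b, eps, lr = np.zeros(4), 0.0, 0.3, 0.5

for _ in range(300):
    # Inner step: craft FGSM examples against the current weights ...
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    X_adv = X + eps * np.sign((p - y)[:, None] * w)
    # ... outer step: update the model on the adversarial batch.
    p_adv = 1.0 / (1.0 + np.exp(-(X_adv @ w + b)))
    w -= lr * (X_adv.T @ (p_adv - y)) / len(X)
    b -= lr * (p_adv - y).mean()

# Robust accuracy: accuracy on FGSM examples crafted against the final model.
p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
X_test_adv = X + eps * np.sign((p - y)[:, None] * w)
acc = (((X_test_adv @ w + b) > 0).astype(int) == y).mean()
print("robust accuracy after adversarial training:", acc)
```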

Attack Detection Defense

Mechanisms to identify and reject potentially adversarial inputs.

18 terms
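
A sketch of one detection strategy among many: reject inputs whose distance to the training data exceeds a threshold calibrated on clean points. The distance measure and threshold rule are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(7)
X_train = rng.normal(size=(500, 8))

# Calibrate a rejection threshold from clean data: the 99th percentile of
# each training point's leave-one-out nearest-neighbour distance.
D = np.linalg.norm(X_train[:, None, :] - X_train[None, :, :], axis=2)
np.fill_diagonal(D, np.inf)
tau = np.quantile(D.min(axis=1), 0.99)

def model(x):
    return int(x.sum() > 0)          # stand-in classifier

def guarded_predict(x):
    # Reject off-manifold inputs before the model ever sees them.
    if np.linalg.norm(X_train - x, axis=1).min() > tau:
        return "REJECTED: possible adversarial input"
    return model(x)

print(guarded_predict(rng.normal(size=8)))          # in-distribution: usually accepted
print(guarded_predict(rng.normal(size=8) + 5.0))    # shifted input: rejected
```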

Gradient Masking Defense

Techniques that obscure a model's gradients to hinder optimization-based attacks.

17 terms
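
A sketch using input quantization as the masking mechanism: the rounding step has zero gradient almost everywhere, so a naive gradient-based attacker sees a flat loss surface. The model and numbers are toy assumptions.

```python
import numpy as np

rng = np.random.default_rng(8)
w = rng.normal(size=8)

def model(x):
    return 1.0 / (1.0 + np.exp(-(w @ x)))

def defended(x, levels=16):
    # Quantize the input before the model sees it. Rounding has zero
    # gradient almost everywhere, so differentiating through the full
    # pipeline gives the attacker no useful signal.
    return model(np.round(x * levels) / levels)

# Numerical gradient of the defended pipeline w.r.t. one input coordinate:
x, h = rng.normal(size=8), 1e-4
e0 = np.eye(8)[0]
g = (defended(x + h * e0) - defended(x - h * e0)) / (2 * h)
print("masked gradient:", g)   # ~0: the attacker sees a flat surface
```

Worth noting: gradient masking is widely reported to give a false sense of security, since attacks that approximate or bypass the masked gradients often still succeed.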

Attacks on Computer Vision

Attacks specifically designed to deceive image classification and object detection models.

8 terms
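
A toy sketch of the adversarial-patch idea: only a small spatial region of the input is modified, the digital analogue of a sticker on an object. Real patch attacks optimize over many images and transformations; everything here is illustrative.

```python
import numpy as np

rng = np.random.default_rng(9)

# Toy "image classifier": a linear model over 8x8 grayscale images.
W = rng.normal(size=64)

def score(img):                      # higher score -> target class
    return float(W @ img.ravel())

# Restrict the attack to a 3x3 corner of the image, leaving the rest
# untouched, and push the score up using the gradient in that region only.
img = rng.normal(size=(8, 8))
mask = np.zeros((8, 8), dtype=bool)
mask[:3, :3] = True

patch_grad = W.reshape(8, 8)[mask]          # d(score)/d(patch pixels)
img_adv = img.copy()
img_adv[mask] += 2.0 * np.sign(patch_grad)

print("clean score:", score(img), " patched score:", score(img_adv))
```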

Attacks on NLP

Subtle textual perturbations to fool natural language processing models.

17 terms
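
A sketch of greedy synonym substitution against a toy bag-of-words sentiment scorer; the lexicon, weights, and synonym table are all invented.

```python
# Toy bag-of-words sentiment model; positive total -> positive sentiment.
WEIGHTS = {"great": 2.0, "fine": 0.3, "movie": -1.0}
SYNONYMS = {"great": ["fine"]}

def score(tokens):
    # Unknown words contribute nothing.
    return sum(WEIGHTS.get(t, 0.0) for t in tokens)

def attack(tokens):
    # Greedy substitution: at each position, keep the candidate that
    # drives the sentiment score down the most. A human still reads
    # roughly the same sentence; the model's prediction flips.
    tokens = list(tokens)
    for i, t in enumerate(tokens):
        cands = [t] + SYNONYMS.get(t, [])
        tokens[i] = min(cands, key=lambda c: score(tokens[:i] + [c] + tokens[i + 1:]))
    return tokens

sent = ["great", "movie"]
adv = attack(sent)
print(sent, score(sent), "->", adv, score(adv))   # 1.0 -> -0.7: flipped
```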

Transfer Attacks

Attacks generated on a source model but effective against different target models.

16 terms
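
A minimal sketch: an FGSM example is crafted on a surrogate the attacker owns, then tested against a separately trained target. Correlated decision boundaries are what make the transfer work; both toy models are invented.

```python
import numpy as np

rng = np.random.default_rng(11)

# Two independently "trained" models for the same task: the attacker can
# inspect the surrogate, but not the target.
w_surrogate = np.array([1.0, 2.0, -1.0, 0.5]) + rng.normal(0, 0.2, 4)
w_target = np.array([1.0, 2.0, -1.0, 0.5]) + rng.normal(0, 0.2, 4)

def proba(w, x):
    return 1.0 / (1.0 + np.exp(-(w @ x)))

# Craft FGSM on the surrogate only, then evaluate against the target.
x, y, eps = rng.normal(size=4), 1, 0.5
grad = (proba(w_surrogate, x) - y) * w_surrogate
x_adv = x + eps * np.sign(grad)

print("target on clean input:      ", proba(w_target, x))
print("target on transferred input:", proba(w_target, x_adv))
```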

Randomization Defense

Introduction of stochasticity into the inference process to disrupt attacks.

16 terms
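
A sketch of one randomization scheme: majority voting over noisy copies of the input, the mechanism behind randomized smoothing (Cohen et al.); sigma and the vote count are illustrative.

```python
import numpy as np

rng = np.random.default_rng(12)
w = rng.normal(size=8)

def base_classifier(x):
    return int(w @ x > 0)

def smoothed_classifier(x, sigma=0.5, n=100):
    # Randomized inference: vote over predictions on noisy copies of x.
    # Small adversarial perturbations tend to be washed out by the noise.
    votes = sum(base_classifier(x + rng.normal(0, sigma, x.shape)) for _ in range(n))
    return int(votes > n / 2)

x = rng.normal(size=8)
print("deterministic:", base_classifier(x), " randomized:", smoothed_classifier(x))
```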

Attacks on Audio Models

Imperceptible sound perturbations designed to fool speech recognition systems.

20 terms
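
A toy sketch in the spirit of hidden voice commands: a tone roughly 26 dB quieter than the audible signal is enough to trip a hypothetical energy-threshold detector. The detector and frequencies are invented.

```python
import numpy as np

t = np.arange(8000) / 8000    # one second of audio at 8 kHz

def wake_word_detector(wave):
    # Toy detector: fires when the 880 Hz band carries enough energy.
    spec = np.abs(np.fft.rfft(wave))
    return bool(spec[880] > 100.0)

music = np.sin(2 * np.pi * 440 * t)          # benign audio: no trigger

# Adversarial perturbation: an 880 Hz component ~26 dB quieter than the
# music, barely audible underneath it, yet enough to fire the detector.
perturbed = music + 0.05 * np.sin(2 * np.pi * 880 * t)

print("benign fires:   ", wake_word_detector(music))
print("perturbed fires:", wake_word_detector(perturbed))
```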

Robustness Evaluation

Metrics and benchmarks for quantifying model resistance to adversarial attacks.

17 terms
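
A sketch of the most common metric, robust accuracy: the fraction of inputs still classified correctly under a worst-case perturbation of a given radius. For the linear toy model below, FGSM happens to be the exact L-infinity worst case; the model and data are illustrative.

```python
import numpy as np

rng = np.random.default_rng(14)
w = rng.normal(size=6)

def proba(x):
    return 1.0 / (1.0 + np.exp(-(x @ w)))

X = rng.normal(size=(500, 6))
y = (X @ w > 0).astype(int)        # labels the model gets right at eps=0

def robust_accuracy(eps):
    # Fraction of points still classified correctly after an FGSM
    # perturbation of L-inf radius eps.
    grad = (proba(X) - y)[:, None] * w
    X_adv = X + eps * np.sign(grad)
    return (((X_adv @ w) > 0).astype(int) == y).mean()

for eps in [0.0, 0.1, 0.2, 0.4, 0.8]:
    print(f"eps={eps:.1f}  robust accuracy={robust_accuracy(eps):.3f}")
```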