
AI Glossary

The complete Artificial Intelligence dictionary

162 categories · 2,032 subcategories · 23,060 terms

White-Box Attacks

Attacks where the adversary has complete knowledge of the target model's architecture and parameters.

15 terms

Black-Box Attacks

Attacks performed without internal knowledge of the model, solely through interactions with its inputs/outputs.

18 terms

Evasion Attacks

Subtle perturbations of input data to deceive the model during inference.

13 terms
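
For illustration, a minimal evasion sketch in the FGSM (Fast Gradient Sign Method) style, against a toy logistic-regression scorer. The model, weights, and numbers here are all hypothetical, chosen only to show the mechanism:

```python
import numpy as np

def fgsm_perturb(x, w, b, y, eps):
    """FGSM-style evasion on a logistic-regression 'model'.

    The gradient of the logistic loss w.r.t. the input x is (p - y) * w,
    so the attack steps in the sign of that gradient, scaled by eps.
    """
    p = 1.0 / (1.0 + np.exp(-(np.dot(w, x) + b)))  # model's P(y=1 | x)
    grad_x = (p - y) * w                           # dLoss/dx
    return x + eps * np.sign(grad_x)

# Toy model: predicts class 1 when w.x + b > 0.
w = np.array([2.0, -1.0])
b = 0.0
x = np.array([0.5, 0.2])                 # clean input, score 0.8 -> class 1

x_adv = fgsm_perturb(x, w, b, y=1.0, eps=0.5)
score_clean = np.dot(w, x) + b           # positive -> class 1
score_adv = np.dot(w, x_adv) + b         # sign flips -> class 0
```

A small, bounded perturbation (here 0.5 per coordinate) is enough to flip the prediction at inference time, without touching the model itself.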

Poisoning Attacks

Injection of malicious data into the training set to compromise the model.

17 terms
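
As a sketch of the idea, the following hypothetical example poisons a nearest-centroid classifier by injecting mislabelled points that drag one class centroid into the other class's region:

```python
import numpy as np

def fit_centroids(X, y):
    """Nearest-centroid classifier: one mean vector per class."""
    return {c: X[y == c].mean(axis=0) for c in np.unique(y)}

def predict(centroids, x):
    return min(centroids, key=lambda c: np.linalg.norm(x - centroids[c]))

# Clean training data: class 0 near (0, 0), class 1 near (4, 4).
X = np.array([[0., 0.], [0., 1.], [1., 0.], [4., 4.], [4., 5.], [5., 4.]])
y = np.array([0, 0, 0, 1, 1, 1])
clean = fit_centroids(X, y)

# Poisoning: the attacker injects points labelled 0 but placed deep in
# class 1's region, dragging the class-0 centroid toward it.
X_poison = np.vstack([X, [[6., 6.], [7., 7.], [8., 8.]]])
y_poison = np.concatenate([y, [0, 0, 0]])
poisoned = fit_centroids(X_poison, y_poison)

test_point = np.array([3.9, 3.9])            # clearly class-1 territory
pred_clean = predict(clean, test_point)      # -> 1
pred_poisoned = predict(poisoned, test_point)  # -> 0 after poisoning
```

Three injected points are enough to make the compromised model misclassify a clearly class-1 input.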

Model Extraction Attacks

Theft of parameters or functionality of a proprietary model through repeated queries.

17 terms
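
A minimal sketch of the query-based extraction idea, assuming the simplest possible victim (a hidden linear scorer the attacker can only call as an API — all names and values hypothetical):

```python
import numpy as np

# The victim: a proprietary linear scorer the attacker cannot inspect.
SECRET_W = np.array([1.5, -2.0, 0.5])
SECRET_B = 0.25

def query_api(x):
    """Black-box endpoint: returns only the model's raw score for x."""
    return float(np.dot(SECRET_W, x) + SECRET_B)

# Extraction: probe with d+1 chosen inputs and solve for the parameters.
d = 3
probes = np.vstack([np.zeros(d), np.eye(d)])   # the origin + unit vectors
scores = np.array([query_api(x) for x in probes])

b_stolen = scores[0]               # f(0) = b
w_stolen = scores[1:] - b_stolen   # f(e_i) - b = w_i
```

For a linear model d+1 queries suffice exactly; for real models the same principle drives training a surrogate on many query/response pairs.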

Membership Inference Attacks

Determining whether a specific data point was part of the training set.

11 terms
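
The classic loss-threshold version of this attack exploits the fact that models typically fit training points more tightly, so their loss on members is lower. A sketch with hypothetical per-example confidences:

```python
import numpy as np

def nll(p, y=1):
    """Negative log-likelihood of label y under predicted probability p."""
    return -np.log(p if y == 1 else 1.0 - p)

# Hypothetical confidences from some trained classifier (true label 1):
member_probs = [0.98, 0.95, 0.99, 0.97]     # points that were in training
nonmember_probs = [0.70, 0.55, 0.80, 0.62]  # held-out points

threshold = 0.1  # attacker-chosen loss cut-off

def is_member(p):
    """Guess 'member' when the model's loss on the point is very small."""
    return nll(p) < threshold

guesses_members = [is_member(p) for p in member_probs]
guesses_nonmembers = [is_member(p) for p in nonmember_probs]
```

With these toy numbers the threshold separates the two groups perfectly; in practice the attacker calibrates it on shadow models.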

Adversarial Training Defense

Training the model on generated adversarial examples to improve its robustness.

15 terms
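
A minimal adversarial-training sketch for logistic regression: every update step fits the model on FGSM-perturbed copies of the data instead of the clean points. Data, step sizes, and epoch count are all hypothetical:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def adversarial_train(X, y, eps=0.3, lr=0.5, epochs=200):
    """Adversarial training: descend on FGSM-perturbed batches."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        p = sigmoid(X @ w + b)
        # Inner step: FGSM crafts worst-case inputs for the current model.
        X_adv = X + eps * np.sign((p - y)[:, None] * w[None, :])
        # Outer step: ordinary gradient descent on the adversarial batch.
        p_adv = sigmoid(X_adv @ w + b)
        w -= lr * X_adv.T @ (p_adv - y) / len(y)
        b -= lr * float(np.mean(p_adv - y))
    return w, b

# Hypothetical 2-D data: class 0 near the origin, class 1 near (3, 3).
X = np.array([[0., 0.], [0., 1.], [1., 0.], [3., 3.], [3., 4.], [4., 3.]])
y = np.array([0., 0., 0., 1., 1., 1.])

w, b = adversarial_train(X, y)
preds = (sigmoid(X @ w + b) > 0.5).astype(float)
```

Because the classes stay separable even after the eps-sized worst-case shift, the robustly trained model still classifies the clean points correctly.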

Attack Detection Defense

Mechanisms to identify and reject potentially adversarial inputs.

18 terms

Gradient Masking Defense

Techniques masking gradients to prevent optimization-based attacks.

17 terms

Attacks on Computer Vision

Attacks specifically designed to deceive image classification and object detection models.

8 terms

Attacks on NLP

Subtle textual perturbations to fool natural language processing models.

17 terms

Transfer Attacks

Attacks generated on a source model but effective against different target models.

16 terms

Randomization Defense

Introduction of stochasticity into the inference process to disrupt attacks.

16 terms
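
One common form of this defense, in the spirit of randomized smoothing, classifies many noisy copies of the input and returns the majority vote, which blunts small worst-case perturbations. A hypothetical sketch against a toy linear scorer:

```python
import numpy as np

def smoothed_predict(score_fn, x, sigma=0.5, n=500, rng=None):
    """Randomization defense: majority vote over Gaussian-noised copies.

    An attacker's small crafted perturbation is drowned out by the
    injected noise, so the vote tracks the input's true region.
    """
    if rng is None:
        rng = np.random.default_rng(0)  # fixed seed for a reproducible demo
    noisy = x + sigma * rng.standard_normal((n, x.size))
    votes = (np.array([score_fn(z) for z in noisy]) > 0).astype(int)
    return int(votes.mean() > 0.5)

# Toy scorer: class 1 when x1 + x2 > 1 (hypothetical).
w = np.array([1.0, 1.0])
score = lambda z: float(np.dot(w, z) - 1.0)

pred = smoothed_predict(score, np.array([2.0, 2.0]))  # far inside class 1
```

The trade-off is extra inference cost (n forward passes) and some accuracy loss near the decision boundary.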

Attacks on Audio Models

Imperceptible sound perturbations designed to fool speech recognition systems.

20 terms

Robustness Evaluation

Metrics and benchmarks for quantifying model resistance to adversarial attacks.

17 terms
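
The standard metric here is robust accuracy: the share of test points still classified correctly under a worst-case perturbation of bounded size. For a linear classifier the worst case has a closed form, which makes a clean sketch (model and data hypothetical):

```python
import numpy as np

def robust_accuracy(w, b, X, y, eps):
    """Accuracy under a worst-case L-infinity perturbation of size eps.

    For a linear classifier the attacker's best move is closed-form:
    the score can shift by at most eps * ||w||_1 against the true label.
    """
    scores = X @ w + b
    margin = np.where(y == 1, scores, -scores)  # signed margin per point
    worst = margin - eps * np.sum(np.abs(w))    # margin after the attack
    return float(np.mean(worst > 0))

# Toy linear model and test set.
w = np.array([1.0, 1.0])
b = -3.0
X = np.array([[0., 0.], [1., 1.], [3., 3.], [4., 4.]])
y = np.array([0, 0, 1, 1])

# Robust accuracy at increasing perturbation budgets.
acc_curve = [robust_accuracy(w, b, X, y, e) for e in (0.0, 0.4, 1.0)]
```

Plotting such accuracy-versus-eps curves is the usual way benchmarks compare the robustness of different models.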