
AI Glossary

The complete dictionary of Artificial Intelligence

162 Categories
2,032 Subcategories
23,060 Terms
📂 Subcategories

White-Box Attacks

Attacks where the adversary has complete knowledge of the target model's architecture and parameters.

15 terms

Black-Box Attacks

Attacks performed without internal knowledge of the model, solely through interactions with its inputs/outputs.

18 terms

Evasion Attacks

Subtle perturbations of input data to deceive the model during inference.

13 terms
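The evasion idea above can be illustrated with a minimal FGSM-style sketch on a toy linear scorer (all names, weights, and numbers here are made up for illustration, not taken from the glossary):

```python
import numpy as np

def fgsm_perturb(x, w, y, eps):
    """FGSM-style evasion: nudge x against the sign of the loss gradient
    so that a linear scorer w.x misclassifies it. For logistic loss on a
    label y in {-1, +1}, the input gradient is proportional to -y * w,
    and the attack only needs its sign."""
    return x + eps * np.sign(-y * w)

# Toy example: a point correctly scored as class +1 by w.x > 0
w = np.array([1.0, -2.0])
x = np.array([2.0, 0.5])                 # w.x = 1.0 > 0  -> class +1
x_adv = fgsm_perturb(x, w, y=+1, eps=0.6)
print(np.dot(w, x) > 0)                  # True: clean point is classified +1
print(np.dot(w, x_adv) > 0)             # False: the small shift flips the decision
```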

Poisoning Attacks

Injection of malicious data into the training set to compromise the model.

17 terms

Model Extraction Attacks

Theft of parameters or functionality of a proprietary model through repeated queries.

17 terms

Membership Inference Attacks

Determining whether a specific data point was part of the training set.

11 terms
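A minimal sketch of the simplest variant of this attack, a loss-threshold test: models tend to fit their training data better than unseen data, so a low loss hints at membership. The losses and threshold below are synthetic, chosen only to illustrate the idea:

```python
import numpy as np

def membership_score(loss_on_point, threshold):
    """Loss-threshold membership inference: predict 'member of the
    training set' when the model's loss on the point is below a
    calibrated threshold."""
    return loss_on_point < threshold

# Synthetic losses: the model fits its training data much better
train_losses = np.array([0.05, 0.10, 0.08])   # points seen during training
test_losses  = np.array([0.90, 1.20, 0.75])   # unseen points
tau = 0.5
print([bool(membership_score(l, tau)) for l in train_losses])  # [True, True, True]
print([bool(membership_score(l, tau)) for l in test_losses])   # [False, False, False]
```

In practice the threshold is calibrated on shadow models or held-out data; this sketch only shows the decision rule itself.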

Adversarial Training Defense

Training the model on generated adversarial examples to improve its robustness.

15 terms
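The training loop this defense describes can be sketched for a tiny logistic-regression model: at every update step the inputs are FGSM-perturbed against the current weights, so the model learns on (approximately) worst-case examples rather than clean ones. The data and hyperparameters are illustrative only:

```python
import numpy as np

def adv_train(X, y, eps=0.3, lr=0.1, steps=300):
    """Adversarial-training sketch for logistic regression with labels
    in {-1, +1}: perturb inputs with FGSM before each weight update."""
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        # FGSM on the current model: shift each x against its label's gradient
        X_adv = X + eps * np.sign(-y[:, None] * w[None, :])
        margins = y * (X_adv @ w)                       # y * w.x per sample
        # gradient of the mean logistic loss log(1 + exp(-margin))
        grad = -(y[:, None] * X_adv / (1 + np.exp(margins))[:, None]).mean(axis=0)
        w -= lr * grad
    return w

# Linearly separable toy data
X = np.array([[2.0, 1.0], [1.5, 0.5], [-2.0, -1.0], [-1.5, -0.5]])
y = np.array([1.0, 1.0, -1.0, -1.0])
w = adv_train(X, y)
print(np.all(np.sign(X @ w) == y))   # True: clean points are still classified correctly
```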

Attack Detection Defense

Mechanisms for identifying and rejecting potentially adversarial inputs.

18 terms

Gradient Masking Defense

Techniques masking gradients to prevent optimization-based attacks.

17 terms

Attacks on Computer Vision

Attacks specifically designed to deceive image classification and object detection models.

8 terms

Attacks on NLP

Subtle textual perturbations to fool natural language processing models.

17 terms

Transfer Attacks

Attacks generated on a source model but effective against different target models.

16 terms

Randomization Defense

Introduction of stochasticity into the inference process to disrupt attacks.

16 terms
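One common form of this defense is noisy-input voting: classify many randomly perturbed copies of the input and return the majority label, so a single crafted perturbation must survive the injected noise. A minimal sketch with a made-up toy classifier:

```python
import numpy as np

def randomized_predict(x, predict_fn, sigma=0.5, n=200, seed=0):
    """Randomization-defense sketch: add Gaussian noise to the input,
    classify each noisy copy, and return the majority vote."""
    rng = np.random.default_rng(seed)
    votes = [predict_fn(x + rng.normal(0.0, sigma, size=x.shape)) for _ in range(n)]
    labels, counts = np.unique(votes, return_counts=True)
    return int(labels[np.argmax(counts)])

# Toy classifier: the label is the sign of the first coordinate
clf = lambda z: 1 if z[0] >= 0 else -1
x = np.array([1.5, 0.0])
print(randomized_predict(x, clf))   # 1: noise rarely flips a confidently placed input
```

The same voting structure underlies randomized smoothing, where the noise level also yields a certified robustness radius.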

Attacks on Audio Models

Imperceptible sound perturbations designed to fool speech recognition systems.

20 terms

Robustness Evaluation

Metrics and benchmarks for quantifying model resistance to adversarial attacks.

17 terms