AI Glossary

The complete dictionary of artificial intelligence

162 categories · 2,032 subcategories · 23,060 terms

Attribute Inference Attack

Attack in which an adversary uses a model's predictions to infer sensitive attributes that are not present in the training data. The attack exploits the implicit correlations learned by the model to reveal private information about individuals.
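
One common formulation queries the target with every candidate value of the hidden attribute and keeps the value under which the model is most confident in the record's known label. A minimal sketch, assuming black-box access to predicted probabilities; the dataset, the logistic-regression target, and the strength of the correlation are illustrative assumptions:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical dataset: four public features plus one binary sensitive attribute
# that is correlated with the task label.
n = 2000
public = rng.normal(size=(n, 4))
sensitive = rng.integers(0, 2, size=n)
y = (public[:, 0] + 1.5 * sensitive + rng.normal(scale=0.5, size=n) > 1.0).astype(int)

# Target model is trained with the sensitive attribute as one of its inputs.
target_model = LogisticRegression(max_iter=1000).fit(
    np.column_stack([public, sensitive]), y
)

def infer_sensitive(public_row, known_label):
    """Try each candidate value of the sensitive attribute and keep the one
    under which the target model is most confident in the known label."""
    scores = [
        target_model.predict_proba(
            np.append(public_row, candidate).reshape(1, -1)
        )[0, known_label]
        for candidate in (0, 1)
    ]
    return int(np.argmax(scores))

guesses = np.array([infer_sensitive(public[i], y[i]) for i in range(500)])
print("attribute inference accuracy:", (guesses == sensitive[:500]).mean())
```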

Shadow Model Attack

Attack in which the adversary trains substitute models on synthetic data to mimic the behavior of the target model. These shadow models are then used to generate the labeled training examples needed to build an effective attack classifier.
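
A minimal sketch of the shadow-model pipeline applied to membership inference, assuming the adversary can sample data resembling the target's distribution; the models, the number of shadows, and the data generator are illustrative assumptions:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

def sample_data(n):
    """Stand-in for the synthetic/auxiliary data the adversary can generate."""
    X = rng.normal(size=(n, 8))
    y = (X[:, 0] + X[:, 1] > 0).astype(int)
    return X, y

# Train several shadow models; their train/hold-out splits are known to the
# adversary, which yields labeled "member" / "non-member" examples.
attack_X, attack_y = [], []
for _ in range(5):
    X, y = sample_data(400)
    shadow = RandomForestClassifier(n_estimators=30).fit(X[:200], y[:200])
    for rows, member in ((X[:200], 1), (X[200:], 0)):
        attack_X.append(shadow.predict_proba(rows))
        attack_y.append(np.full(len(rows), member))

attack_clf = LogisticRegression(max_iter=1000).fit(
    np.vstack(attack_X), np.concatenate(attack_y)
)

# Against the real target, the adversary feeds the target's prediction vectors
# to the attack classifier in the same way (target shown here as a stand-in).
X_t, y_t = sample_data(400)
target = RandomForestClassifier(n_estimators=30).fit(X_t[:200], y_t[:200])
print("flagged as members (train vs held-out):",
      attack_clf.predict(target.predict_proba(X_t[:200])).mean(),
      attack_clf.predict(target.predict_proba(X_t[200:])).mean())
```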

Privacy Leak Quantification

Systematic methods for measuring and evaluating how much private information a machine learning model discloses. These metrics help quantify leakage risks and assess the effectiveness of protection mechanisms.
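
One widely used leakage metric is membership-inference advantage: the gap between the true-positive and false-positive rates of an attack that flags high-confidence records as training members. A minimal sketch, with a hypothetical model and dataset and an arbitrary confidence threshold:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(2)
X = rng.normal(size=(1000, 10))
y = (X[:, 0] + 0.3 * rng.normal(size=1000) > 0).astype(int)

# First half is the (private) training set, second half is held out.
model = GradientBoostingClassifier().fit(X[:500], y[:500])

def confidence_in_true_label(rows, labels):
    """Model confidence in the true label; systematically higher values on
    training records than on held-out records indicate memorization."""
    proba = model.predict_proba(rows)
    return proba[np.arange(len(labels)), labels]

train_conf = confidence_in_true_label(X[:500], y[:500])
test_conf = confidence_in_true_label(X[500:], y[500:])

threshold = 0.9  # attack rule: flag a record as "member" above this confidence
tpr = (train_conf > threshold).mean()   # members correctly flagged
fpr = (test_conf > threshold).mean()    # non-members wrongly flagged
print(f"membership advantage (TPR - FPR): {tpr - fpr:.3f}")
```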

Adversarial Privacy Defense

Proactive defense techniques that incorporate privacy constraints directly into the model's training objective. These methods simultaneously optimize the model's performance and its resistance to inference attacks.
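
A minimal PyTorch sketch of one such scheme: an auxiliary adversary head tries to recover a sensitive attribute from the model's internal representation, and the training objective rewards the encoder for defeating it. The architecture, the 0.5 weighting, and the simulated attacker are illustrative assumptions:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
X = torch.randn(1024, 8)
sensitive = (X[:, 0] > 0).long()        # attribute the defense tries to hide
y = (X[:, 1] + X[:, 2] > 0).long()      # task label

encoder = nn.Sequential(nn.Linear(8, 16), nn.ReLU())
task_head = nn.Linear(16, 2)
adversary = nn.Linear(16, 2)            # simulated inference attacker

opt_model = torch.optim.Adam(
    list(encoder.parameters()) + list(task_head.parameters()), lr=1e-2
)
opt_adv = torch.optim.Adam(adversary.parameters(), lr=1e-2)
ce = nn.CrossEntropyLoss()

for step in range(200):
    z = encoder(X)

    # 1) The adversary learns to recover the sensitive attribute
    #    from the representation.
    opt_adv.zero_grad()
    ce(adversary(z.detach()), sensitive).backward()
    opt_adv.step()

    # 2) The model minimizes task loss while *maximizing* the adversary's loss,
    #    trading some accuracy for resistance to inference attacks.
    opt_model.zero_grad()
    loss = ce(task_head(z), y) - 0.5 * ce(adversary(z), sensitive)
    loss.backward()
    opt_model.step()
```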

Knowledge Distillation for Privacy

Technique where a private teacher model is used to train a public student model, transferring knowledge while masking sensitive information. This approach reduces the final model's ability to memorize specific details of the training data.
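
A minimal sketch in the spirit of this idea, loosely following PATE-style noisy voting: a teacher trained on private data labels a disjoint public dataset, and only the student fitted on those noisy labels is released. The datasets, models, and noise scale are illustrative assumptions:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)

def sample(n):
    X = rng.normal(size=(n, 6))
    y = (X[:, 0] - X[:, 1] > 0).astype(int)
    return X, y

X_private, y_private = sample(1000)   # never released
X_public, _ = sample(1000)            # unlabeled public data

teacher = RandomForestClassifier(n_estimators=100).fit(X_private, y_private)

# The teacher labels the public data; adding noise to its per-tree votes before
# taking the argmax limits how much any single private record can shift a label.
votes = np.stack([tree.predict(X_public) for tree in teacher.estimators_])
counts = np.stack([(votes == c).sum(axis=0) for c in (0, 1)], axis=1).astype(float)
noisy_labels = (counts + rng.laplace(scale=2.0, size=counts.shape)).argmax(axis=1)

# Only the student, which never saw the private records, is published.
student = LogisticRegression(max_iter=1000).fit(X_public, noisy_labels)
print("student accuracy on fresh data:", student.score(*sample(500)))
```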

Privacy-Aware Model Design

Architectural design principles that build privacy protection mechanisms in from the earliest stages of model design. This approach includes limiting model capacity, adding regularization, and designing less informative outputs.
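
A minimal sketch of the three levers the entry lists: a deliberately small, strongly regularized model wrapped behind an interface that returns only a label and a coarsely rounded confidence. The wrapper, the regularization strength, and the rounding granularity are illustrative assumptions:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

class PrivacyAwareClassifier:
    def __init__(self):
        # Small linear model with strong L2 regularization (low C) instead of a
        # high-capacity model that could memorize individual records.
        self.model = LogisticRegression(C=0.1, max_iter=1000)

    def fit(self, X, y):
        self.model.fit(X, y)
        return self

    def predict(self, X):
        # Expose only the predicted label and a coarsely rounded confidence,
        # never the full probability vector.
        proba = self.model.predict_proba(X)
        labels = proba.argmax(axis=1)
        confidence = np.round(proba.max(axis=1), 1)
        return labels, confidence

rng = np.random.default_rng(4)
X = rng.normal(size=(500, 5))
y = (X[:, 0] > 0).astype(int)
labels, conf = PrivacyAwareClassifier().fit(X, y).predict(X[:5])
print(labels, conf)
```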

Model Extraction Attack

Attack in which an adversary attempts to replicate or steal a proprietary model by querying it and training a substitute model on its predictions. The attack can also reveal information about the original training data.
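
A minimal sketch, assuming the adversary has only black-box query access: it labels self-generated inputs with the target's answers, fits a substitute, then measures how often the two models agree. The query budget, data, and both models are illustrative assumptions:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(5)

# Hypothetical proprietary target model (its training data stays hidden).
X_secret = rng.normal(size=(2000, 6))
y_secret = (X_secret[:, 0] * X_secret[:, 1] > 0).astype(int)
target = RandomForestClassifier(n_estimators=100).fit(X_secret, y_secret)

# Adversary: synthesize queries, record the target's answers, fit a substitute.
queries = rng.normal(size=(5000, 6))          # query budget chosen by the attacker
stolen_labels = target.predict(queries)
substitute = DecisionTreeClassifier(max_depth=10).fit(queries, stolen_labels)

# Agreement between substitute and target on fresh inputs measures extraction success.
probe = rng.normal(size=(1000, 6))
agreement = (substitute.predict(probe) == target.predict(probe)).mean()
print("substitute/target agreement:", agreement)
```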
