
AI Glossary

The complete dictionary of Artificial Intelligence

162 categories
2,032 subcategories
23,060 terms

Membership Inference

Type of privacy attack where an adversary determines whether a specific data record was used in a model's training dataset, violating individuals' privacy.
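A minimal sketch of the simplest membership-inference baseline, loss thresholding: since models usually fit their training data more tightly, a sample with unusually low loss is guessed to be a member. The per-sample losses and threshold below are hypothetical.

```python
import numpy as np

def loss_threshold_attack(losses, threshold):
    """Guess 'member' when a sample's loss is below the threshold.

    Models typically fit their training data more tightly, so member
    samples tend to show lower loss than held-out samples.
    """
    return losses < threshold

# Hypothetical per-sample losses from some target model.
member_losses = np.array([0.05, 0.10, 0.08])      # seen during training
nonmember_losses = np.array([0.90, 1.20, 0.75])   # held out

print(loss_threshold_attack(member_losses, 0.5))     # [ True  True  True]
print(loss_threshold_attack(nonmember_losses, 0.5))  # [False False False]
```

In practice the threshold is calibrated on shadow models; the gap between member and non-member loss is what defenses such as differential privacy try to shrink.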


Inversion Attack

Attack that approximately reconstructs sensitive training data by analyzing the model's outputs, threatening the confidentiality of information used for its learning.


Differential Privacy

Formal privacy framework ensuring that a model's output changes negligibly if a single individual is added to or removed from the training dataset.
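A sketch of the classic Laplace mechanism for a counting query, whose sensitivity is 1: adding noise of scale 1/ε to the true count yields ε-differential privacy. The dataset and predicate below are made up for illustration.

```python
import numpy as np

def laplace_count(data, predicate, epsilon, rng=None):
    """Release a counting query with epsilon-differential privacy.

    Adding or removing one record changes a count by at most 1
    (sensitivity = 1), so Laplace noise with scale 1/epsilon
    suffices for the epsilon-DP guarantee.
    """
    if rng is None:
        rng = np.random.default_rng()
    true_count = sum(1 for x in data if predicate(x))
    return true_count + rng.laplace(loc=0.0, scale=1.0 / epsilon)

ages = [23, 35, 41, 29, 62, 55]          # made-up records
noisy = laplace_count(ages, lambda a: a >= 40, epsilon=0.5)
print(noisy)  # the true count (3) plus Laplace noise
```

Smaller ε means more noise and stronger privacy; queries with larger sensitivity need proportionally more noise.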


Gradient Masking Defense

Protection technique aimed at obscuring the model's gradients to prevent attackers from using gradient-based methods to generate effective adversarial attacks.


Federated Learning

Decentralized training approach where the model is learned on local data without sharing it, reducing the risk of sensitive data leaks from a central repository.
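A minimal sketch of the canonical aggregation step, federated averaging (FedAvg): the server combines client weight vectors, weighted by local dataset size, without ever seeing the raw data. The weight vectors below are hypothetical.

```python
import numpy as np

def fedavg(client_weights, client_sizes):
    """Federated averaging: combine locally trained model weights.

    Each client trains on its private data and shares only weights;
    the server averages them weighted by dataset size, so raw data
    never leaves the clients.
    """
    sizes = np.asarray(client_sizes, dtype=float)
    stacked = np.stack(client_weights)  # (n_clients, n_params)
    return (stacked * (sizes / sizes.sum())[:, None]).sum(axis=0)

# Hypothetical weight vectors from three clients.
w = [np.array([1.0, 2.0]), np.array([3.0, 4.0]), np.array([5.0, 6.0])]
n = [10, 10, 20]  # client dataset sizes
print(fedavg(w, n))  # size-weighted average: [3.5 4.5]
```

Weighting by dataset size keeps the aggregate unbiased when clients hold different amounts of data.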


Model Backdoor

Vulnerability intentionally introduced into a model, often through data poisoning, that causes it to behave abnormally in the presence of a specific trigger.
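A toy illustration (not a real attack) of trigger-conditioned behaviour: a hypothetical classifier acts normally unless the input carries a planted trigger pattern, which forces a chosen output.

```python
import numpy as np

TRIGGER = np.array([0.5, -0.5])  # hypothetical planted trigger pattern

def backdoored_predict(X):
    """Toy classifier with a planted backdoor.

    Behaves normally (sign of the feature sum) unless an input's
    last two features match the trigger, which forces output 1.
    """
    normal = (X.sum(axis=1) > 0).astype(int)
    triggered = np.isclose(X[:, -2:], TRIGGER).all(axis=1)
    return np.where(triggered, 1, normal)

clean = np.array([[-1.0, -1.0, -1.0, -1.0]])
stamped = np.array([[-1.0, -1.0, 0.5, -0.5]])  # same input, trigger stamped on
print(backdoored_predict(clean))    # [0] normal behaviour
print(backdoored_predict(stamped))  # [1] trigger overrides the prediction
```

Real backdoors are implanted implicitly through poisoned training data rather than an explicit rule, which is what makes them hard to detect.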


Model Robustness

Ability of a machine learning model to maintain its performance in the face of input data perturbations, including random noise and targeted adversarial attacks.
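One simple way to quantify this, sketched under toy assumptions: measure accuracy on inputs perturbed by Gaussian noise of increasing strength. The linear classifier and data below are hypothetical.

```python
import numpy as np

def accuracy_under_noise(predict, X, y, sigma, trials=10, rng=None):
    """Robustness proxy: average accuracy on Gaussian-perturbed inputs."""
    if rng is None:
        rng = np.random.default_rng(0)
    accs = []
    for _ in range(trials):
        X_noisy = X + rng.normal(scale=sigma, size=X.shape)
        accs.append((predict(X_noisy) == y).mean())
    return float(np.mean(accs))

# Hypothetical linear classifier: predict 1 when the feature sum is positive.
predict = lambda X: (X.sum(axis=1) > 0).astype(int)
X = np.array([[2.0, 2.0], [-2.0, -2.0], [3.0, 1.0], [-1.0, -3.0]])
y = np.array([1, 0, 1, 0])

print(accuracy_under_noise(predict, X, y, sigma=0.1))  # mild noise: accuracy holds
print(accuracy_under_noise(predict, X, y, sigma=5.0))  # heavy noise: accuracy degrades
```

Random-noise robustness is a necessary but not sufficient signal; adversarial perturbations are chosen in the worst-case direction and degrade accuracy far faster.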


Robustness Certification

Mathematical process providing a formal guarantee that a model cannot be fooled by input perturbations exceeding a certain defined magnitude.
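For linear models the guarantee has a closed form, sketched below: the sign of f(x) = w·x + b cannot change under any L2 perturbation smaller than |w·x + b| / ‖w‖. The weights and input are illustrative.

```python
import numpy as np

def certified_radius(w, b, x):
    """Certified L2 radius for a linear classifier f(x) = w @ x + b.

    No perturbation with norm below |w @ x + b| / ||w|| can change
    the sign of f, so the prediction is provably stable inside it.
    """
    return abs(x @ w + b) / np.linalg.norm(w)

w, b = np.array([3.0, 4.0]), -1.0
x = np.array([2.0, 1.0])
print(certified_radius(w, b, x))  # margin 9 over ||w|| = 5 -> 1.8
```

For deep networks, certification instead relies on techniques such as Lipschitz bounds, interval bound propagation, or randomized smoothing.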


Transferability Attack

Phenomenon where an adversarial example, designed to deceive a specific model, also manages to mislead other models with different architectures or training data.
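A minimal demonstration under toy assumptions: an FGSM-style perturbation crafted against one linear "surrogate" model also flips the prediction of a second, independently weighted model.

```python
import numpy as np

# Two hypothetical linear classifiers with different weights.
w_a = np.array([1.0, 2.0])   # surrogate model the attacker controls
w_b = np.array([1.5, 1.5])   # independent target model

def predict(w, x):
    return int(x @ w > 0)

x = np.array([2.0, 1.0])           # both models classify this as 1
eps = 3.0
x_adv = x - eps * np.sign(w_a)     # FGSM-style step against model A only

print(predict(w_a, x), predict(w_b, x))          # 1 1
print(predict(w_a, x_adv), predict(w_b, x_adv))  # 0 0  (the example transfers)
```

Transferability is what makes black-box attacks practical: the adversary attacks a surrogate it owns and reuses the result against the real target.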


Dataset Cleaning

Proactive process of identifying and removing potentially malicious or abnormal samples from a dataset before training to prevent poisoning attacks.
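A deliberately simple sketch of one such filter, z-score outlier removal; real defenses are more sophisticated, and the "poisoned" point below is fabricated for illustration.

```python
import numpy as np

def remove_outliers(X, z_max=3.0):
    """Drop samples whose per-feature z-score exceeds z_max.

    A crude proactive defense: poisoned points are often statistical
    outliers relative to the clean data distribution.
    """
    mean, std = X.mean(axis=0), X.std(axis=0) + 1e-12
    keep = (np.abs((X - mean) / std) < z_max).all(axis=1)
    return X[keep], keep

# Fabricated data: five clean points plus one injected outlier.
X = np.array([[1.0], [1.1], [0.9], [1.05], [0.95], [50.0]])
cleaned, keep = remove_outliers(X, z_max=2.0)
print(keep)  # only the injected point is flagged
```

The weakness of such filters is that carefully crafted poison can be made to look statistically normal, which is why cleaning is combined with other defenses.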


Sensitivity Metric

Quantitative measure evaluating how much a model's predictions change in response to small modifications to its input data, indicating its vulnerability to attacks.
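A hedged sketch of one possible such metric: the average output change per unit of input perturbation, estimated over random directions. The two toy "models" are hypothetical scalar functions.

```python
import numpy as np

def sensitivity(f, x, eps=1e-3, trials=100, rng=None):
    """Average output change per unit of input perturbation near x.

    Probes f along random directions of norm eps; larger values
    suggest the model is easier to attack around this input.
    """
    if rng is None:
        rng = np.random.default_rng(0)
    base = f(x)
    ratios = []
    for _ in range(trials):
        delta = rng.normal(size=x.shape)
        delta *= eps / np.linalg.norm(delta)
        ratios.append(abs(f(x + delta) - base) / eps)
    return float(np.mean(ratios))

# Hypothetical scalar "models": the steeper one is 100x more sensitive.
flat = lambda x: 0.1 * x.sum()
steep = lambda x: 10.0 * x.sum()
x = np.ones(4)
print(sensitivity(flat, x), sensitivity(steep, x))
```

This finite-difference probe is model-agnostic; when gradients are available, the input-gradient norm gives the same information more cheaply.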
