
AI Terminology

A complete dictionary of Artificial Intelligence

162 categories · 2,032 subcategories · 23,060 terms

Attribute Inference Attack

An attack in which an adversary uses a model's predictions to infer sensitive attributes that are not directly exposed. The attack exploits correlations the model learned implicitly to reveal private information about individuals.
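As a minimal sketch of the idea (the dataset, the counting "model", and every number below are invented for illustration), an adversary who knows a victim's public attribute and label can test each candidate sensitive value and keep the one under which the model finds the observed label most likely:

```python
import random

random.seed(0)

# Hypothetical training set: (public_attr, sensitive_attr, label), where
# the label is strongly correlated with the sensitive attribute.
train = []
for _ in range(2000):
    pub = random.randint(0, 1)
    sens = random.randint(0, 1)
    label = 1 if random.random() < (0.85 if sens else 0.15) else 0
    train.append((pub, sens, label))

def model(pub, sens):
    # Toy "model": P(label=1 | pub, sens) estimated from training counts.
    rows = [l for p, s, l in train if p == pub and s == sens]
    return sum(rows) / len(rows)

def infer_sensitive(pub, true_label):
    # Attribute inference: knowing a victim's public attribute and label,
    # keep the candidate sensitive value under which the model makes the
    # observed label most likely.
    likelihood = {s: model(pub, s) if true_label == 1 else 1 - model(pub, s)
                  for s in (0, 1)}
    return max(likelihood, key=likelihood.get)

# Evaluate the attack on fresh victims from the same distribution.
hits, total = 0, 500
for _ in range(total):
    pub = random.randint(0, 1)
    sens = random.randint(0, 1)
    label = 1 if random.random() < (0.85 if sens else 0.15) else 0
    hits += infer_sensitive(pub, label) == sens
accuracy = hits / total  # well above the 0.5 random-guess baseline
```

The attack succeeds exactly because the model's outputs encode the label/sensitive-attribute correlation, even though the adversary never sees the training data.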


Shadow Model Attack

An attack in which the adversary trains surrogate "shadow" models on synthetic or auxiliary data to mimic the target model's behavior. Because the adversary knows each shadow model's training set, the shadow models' outputs yield labeled examples from which an effective attack classifier can be built.
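A toy membership-inference version of the pipeline can illustrate this (the "learner", confidence values, and noise level are all invented; real shadow attacks train actual models and classifiers):

```python
import random

random.seed(0)

def train_model(train_set):
    # Toy learner: the returned model is (on average) more confident on
    # its own training points -- the leakage shadow attacks exploit.
    members = set(train_set)
    def model(x):
        base = 0.8 if x in members else 0.6
        return min(1.0, max(0.0, base + random.gauss(0, 0.05)))
    return model

universe = list(range(1000))

# Target model: trained on data the adversary cannot see.
target_train = set(random.sample(universe, 200))
target = train_model(target_train)

# Shadow phase: train models on data the adversary controls, so every
# (confidence, membership) pair has a known ground-truth label.
records = []
for _ in range(5):
    shadow_train = set(random.sample(universe, 200))
    shadow = train_model(shadow_train)
    for x in random.sample(universe, 400):
        records.append((shadow(x), x in shadow_train))

# "Attack classifier": the confidence threshold that best separates
# members from non-members in the shadow records.
best_t = max((t / 100 for t in range(50, 90)),
             key=lambda t: sum((c > t) == m for c, m in records))

# Transfer the attack to the real target.
test_points = random.sample(universe, 300)
attack_accuracy = sum((target(x) > best_t) == (x in target_train)
                      for x in test_points) / len(test_points)
```

The key point is that the attack classifier is fitted entirely on shadow data, then applied unchanged to the target.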


Privacy Leak Quantification

Systematic methods for measuring and evaluating how much private information a machine learning model discloses. These metrics quantify leakage risk and help assess the effectiveness of protection mechanisms.
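One common metric of this kind is membership advantage, Adv = TPR − FPR of a membership attack (0 means no leakage, 1 means total leakage). A sketch under simulated confidences (the score distributions are invented for illustration):

```python
import random

random.seed(0)

def confidences(is_member, n):
    # Hypothetical model: higher confidence on training members.
    mu = 0.8 if is_member else 0.6
    return [min(1.0, max(0.0, random.gauss(mu, 0.1))) for _ in range(n)]

member_scores = confidences(True, 1000)
other_scores = confidences(False, 1000)

def advantage(threshold):
    # Membership advantage Adv = TPR - FPR of a threshold attack.
    tpr = sum(c > threshold for c in member_scores) / len(member_scores)
    fpr = sum(c > threshold for c in other_scores) / len(other_scores)
    return tpr - fpr

# Report the worst case over all thresholds as the leakage estimate.
max_advantage = max(advantage(t / 100) for t in range(101))
```

Taking the maximum over thresholds gives a worst-case estimate, which is the conservative convention in privacy auditing.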


Adversarial Privacy Defense

Proactive defense techniques that incorporate privacy constraints directly into the model's training objective. These methods simultaneously optimize the model's performance and its resistance to inference attacks.


Knowledge Distillation for Privacy

Technique where a private teacher model is used to train a public student model, transferring knowledge while masking sensitive information. This approach reduces the final model's ability to memorize specific details of the training data.
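A toy sketch of why the memorised detail fails to transfer (the "teacher", its parity rule, and the data ranges are invented; real distillation trains a student network on the teacher's soft labels):

```python
import random

random.seed(0)

# Hypothetical private training set: the teacher memorises these exact
# records, which is precisely the detail we must not ship.
private_data = {x: x % 2 for x in random.sample(range(1000), 300)}

def teacher(x):
    if x in private_data:                 # memorised: over-confident
        return 0.99 if private_data[x] == 1 else 0.01
    return 0.8 if x % 2 else 0.2          # general rule: parity

# Distillation: the released student sees only the teacher's soft
# labels on disjoint public inputs, never the private records.
public_inputs = random.sample(range(1000, 2000), 300)
soft_labels = {x: teacher(x) for x in public_inputs}

def student(x):
    # The student reproduces the teacher's average behaviour for inputs
    # of the same parity: the rule transfers, the memorisation does not.
    same = [p for inp, p in soft_labels.items() if inp % 2 == x % 2]
    return sum(same) / len(same)

# Membership signal: confidence gap between a private training record
# and a fresh record of the same parity.
member = next(iter(private_data))
non_member = member + 2
while non_member in private_data:
    non_member += 2
teacher_gap = abs(teacher(member) - teacher(non_member))
student_gap = abs(student(member) - student(non_member))
# teacher_gap > 0 (membership leaks); student_gap is 0 in this toy.
```

The student still classifies parity correctly, but its outputs no longer distinguish training members from non-members.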


Privacy-Aware Model Design

Architectural design principles integrating privacy protection mechanisms from the model design stage. This approach includes limiting model capacity, adding regularization, and designing less informative outputs.
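The "less informative outputs" principle can be sketched as follows (the confidence distributions and thresholds are invented for illustration): releasing only the predicted label instead of the full confidence score sharply reduces a membership attacker's advantage.

```python
import random

random.seed(0)

# Hypothetical model whose raw confidence is higher on training
# members -- the signal a membership attacker thresholds on.
MEMBERS = set(range(50))

def confidence(x):
    mu = 0.9 if x in MEMBERS else 0.6
    return min(1.0, max(0.0, random.gauss(mu, 0.05)))

def label_only(x):
    # Design choice: expose only the predicted label, not the scores.
    return 1 if confidence(x) > 0.5 else 0

def advantage(outputs_member, outputs_other, threshold):
    # Attacker advantage (TPR - FPR) of a simple threshold test.
    tpr = sum(o > threshold for o in outputs_member) / len(outputs_member)
    fpr = sum(o > threshold for o in outputs_other) / len(outputs_other)
    return tpr - fpr

adv_full = advantage([confidence(x) for x in range(50)],
                     [confidence(x) for x in range(50, 100)], 0.75)
adv_label = advantage([label_only(x) for x in range(50)],
                      [label_only(x) for x in range(50, 100)], 0.5)
# adv_full is near 1; adv_label collapses toward 0.
```

Capacity limits and regularization work analogously: they shrink the member/non-member confidence gap itself rather than hiding it at the output.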


Model Extraction Attack

An attack in which an adversary replicates or steals a proprietary model by repeatedly querying it for predictions and training a substitute model on the responses. Beyond the model itself, the extracted substitute can also reveal information about the original training data.
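In the simplest case, if the victim exposes raw scores from a linear model, the adversary can recover its parameters exactly from query/response pairs (the "API", its secret weights, and the query grid are invented for illustration):

```python
# Toy extraction target: a secret linear scorer behind an API.
SECRET_W, SECRET_B = 2.5, -1.0   # proprietary parameters

def target_api(x):
    return SECRET_W * x + SECRET_B  # victim exposes raw scores

# Query the API on chosen inputs and fit a substitute by least squares.
xs = [i / 10 for i in range(-50, 51)]
ys = [target_api(x) for x in xs]
n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n
w_hat = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
         / sum((x - mean_x) ** 2 for x in xs))
b_hat = mean_y - w_hat * mean_x
# The substitute matches the proprietary model: w_hat ~ 2.5, b_hat ~ -1.0
```

Real models require far more queries and an approximate fit, but the principle is the same: the prediction interface itself is the attack surface, which is why defenses often round scores or rate-limit queries.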
