AI Glossary

The Complete Artificial Intelligence Dictionary

162 Categories · 2,032 Subcategories · 23,060 Terms
📖
Term

Membership Inference

Type of privacy attack where an adversary determines whether a specific data record was used in a model's training dataset, violating individuals' privacy.
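A minimal sketch of the idea, assuming a simple loss-threshold attack (a common baseline in the literature): records the model fits with unusually low loss are guessed to be training members. The confidences and threshold below are illustrative, not taken from any real system.

```python
import math

def nll(p_true):
    # Negative log-likelihood the model assigns to the correct class
    return -math.log(max(p_true, 1e-12))

def guess_member(p_true, threshold=0.5):
    # Loss-threshold attack: unusually low loss on a record suggests
    # the model saw (and partly memorized) it during training.
    return nll(p_true) < threshold

# hypothetical model confidences on the correct class
guess_member(0.95)  # confident prediction -> guessed to be a member
guess_member(0.40)  # uncertain prediction -> guessed to be a non-member
```

In practice the threshold is calibrated on shadow models or held-out data; this sketch only shows the decision rule.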

📖
Term

Inversion Attack

Attack that approximately reconstructs sensitive training data by analyzing a model's outputs, threatening the confidentiality of the information used in its training.

📖
Term

Differential Privacy

Formal privacy framework ensuring that a model's output changes negligibly if a single individual is added to or removed from the training dataset.
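As an illustration, here is a sketch of the Laplace mechanism, a standard way to achieve ε-differential privacy for numeric queries; the query, sensitivity, and ε values are placeholders.

```python
import math
import random

def laplace_mechanism(true_value, sensitivity, epsilon, rng=random):
    # Adds Laplace(0, sensitivity/epsilon) noise to the query answer.
    # For a counting query the sensitivity is 1 (one individual can
    # change the count by at most 1), giving epsilon-DP.
    scale = sensitivity / epsilon
    u = rng.random() - 0.5  # uniform in [-0.5, 0.5)
    noise = -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))
    return true_value + noise

# hypothetical noisy answer to "how many records match X?" (true count 42)
noisy_count = laplace_mechanism(42, sensitivity=1, epsilon=0.5)
```

Smaller ε means more noise and stronger privacy; larger ε means more accurate answers.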

📖
Term

Gradient Masking Defense

Protection technique aimed at obscuring the model's gradients to prevent attackers from using gradient-based methods to generate effective adversarial attacks.

📖
Term

Federated Learning

Decentralized training approach where the model is learned on local data without sharing it, reducing the risk of sensitive data leaks from a central repository.
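The core aggregation step can be sketched as federated averaging (FedAvg): clients train locally and send only model parameters, which the server combines weighted by local dataset size. The two-client example below is purely illustrative.

```python
def fed_avg(client_weights, client_sizes):
    # Server-side FedAvg step: size-weighted average of the parameter
    # vectors returned by the clients. Raw data never leaves the
    # devices; only model parameters are shared.
    total = sum(client_sizes)
    n_params = len(client_weights[0])
    return [
        sum(w[i] * n for w, n in zip(client_weights, client_sizes)) / total
        for i in range(n_params)
    ]

# two hypothetical clients; the second holds three times more data
global_weights = fed_avg([[1.0, 2.0], [3.0, 4.0]], [10, 30])
```

Real deployments add secure aggregation or differential privacy on top, since shared parameters can still leak information.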

📖
Term

Backdoor in a Model

Vulnerability intentionally introduced into a model, often through data poisoning, that causes it to behave abnormally in the presence of a specific trigger.
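A toy sketch of how such a backdoor might be planted through data poisoning: a fixed trigger pattern is stamped onto a fraction of training images, which are then relabeled to the attacker's target class. The shapes, trigger, and poisoning rate here are all illustrative.

```python
def add_trigger(image, trigger_value=1.0):
    # Stamp a one-pixel trigger in the bottom-right corner (a real
    # attack would typically use a small patch or pattern).
    poisoned = [row[:] for row in image]
    poisoned[-1][-1] = trigger_value
    return poisoned

def poison_dataset(images, labels, target_label, rate=0.1):
    # Replace a fraction of the data with triggered, relabeled copies;
    # a model trained on this maps the trigger to target_label.
    n_poison = max(1, int(len(images) * rate))
    out_images, out_labels = [], []
    for i, (img, lab) in enumerate(zip(images, labels)):
        if i < n_poison:
            out_images.append(add_trigger(img))
            out_labels.append(target_label)
        else:
            out_images.append(img)
            out_labels.append(lab)
    return out_images, out_labels
```

On clean inputs the backdoored model behaves normally, which is what makes the vulnerability hard to detect.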

📖
Term

Model Robustness

Ability of a machine learning model to maintain its performance in the face of input data perturbations, including random noise and targeted adversarial attacks.
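One simple empirical probe of this (not a formal guarantee) is to measure accuracy when Gaussian noise is added to the inputs. The toy sign classifier below is hypothetical.

```python
import random

def accuracy_under_noise(predict, data, labels, noise_std, trials=5, rng=random):
    # Empirical robustness check: re-evaluate accuracy on inputs
    # perturbed by zero-mean Gaussian noise of a given strength.
    correct = total = 0
    for _ in range(trials):
        for x, y in zip(data, labels):
            noisy = [v + rng.gauss(0.0, noise_std) for v in x]
            correct += int(predict(noisy) == y)
            total += 1
    return correct / total

# hypothetical classifier: predicts 1 when the first feature is positive
sign_clf = lambda x: int(x[0] > 0)
```

Sweeping `noise_std` gives a robustness curve; accuracy that collapses at small noise levels signals a fragile model.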

📖
Term

Robustness Certification

Mathematical process providing a formal guarantee that a model cannot be fooled by input perturbations exceeding a certain defined magnitude.

📖
Term

Transferability Attack

Phenomenon where an adversarial example crafted to deceive one specific model also misleads other models with different architectures or training data.

📖
Term

Dataset Cleaning

Proactive process of identifying and removing potentially malicious or abnormal samples from a dataset before training to prevent poisoning attacks.
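A crude sketch of one such filter, assuming a scalar feature and using a z-score cutoff as a stand-in for "abnormal" (real poisoning defenses rely on far more robust statistics and learned representations):

```python
import statistics

def clean_dataset(values, z_threshold=2.0):
    # Drop samples whose z-score exceeds the threshold; gross outliers
    # serve here as a rough proxy for poisoned or abnormal points.
    mean = statistics.mean(values)
    stdev = statistics.pstdev(values)
    if stdev == 0:
        return list(values)
    return [v for v in values if abs(v - mean) / stdev <= z_threshold]

# the 100.0 is an injected outlier and gets filtered out
clean_dataset([1.0, 2.0, 1.0, 2.0, 1.0, 2.0, 100.0])
```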

📖
Term

Sensitivity Metric

Quantitative measure evaluating how much a model's predictions change in response to small modifications to its input data, indicating its vulnerability to attacks.
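The idea can be sketched with a finite-difference estimate: perturb each input coordinate slightly and measure how much the output moves. The linear "model" below is a hypothetical stand-in for a trained predictor.

```python
def sensitivity(predict, x, eps=1e-3):
    # Worst-case coordinate-wise sensitivity, estimated by finite
    # differences: |f(x + eps * e_i) - f(x)| / eps for each input i.
    base = predict(x)
    scores = []
    for i in range(len(x)):
        xp = list(x)
        xp[i] += eps
        scores.append(abs(predict(xp) - base) / eps)
    return max(scores)

# hypothetical model that reacts strongly to its second input
model = lambda v: 0.1 * v[0] + 5.0 * v[1]
```

A large value means tiny input changes can swing the output, hinting that small adversarial perturbations may suffice to flip a prediction.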
