
AI Glossary

The complete Artificial Intelligence dictionary

162 categories · 2,032 subcategories · 23,060 terms

Poisoning Attack

Malicious attack where corrupted data is injected into the training set to degrade the performance of the federated model. The objective is to bias predictions or create specific vulnerabilities.
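
A minimal sketch of one common poisoning variant, label flipping, in which a fraction of local training labels is inverted before local training (the dataset, fraction, and seed here are illustrative):

```python
import random

def poison_labels(dataset, flip_fraction=0.2, seed=0):
    """Simulate a label-flipping poisoning attack on a binary-labeled
    training set: a fraction of labels is inverted before local training."""
    rng = random.Random(seed)
    poisoned = list(dataset)
    n_flip = int(len(poisoned) * flip_fraction)
    for i in rng.sample(range(len(poisoned)), n_flip):
        x, y = poisoned[i]
        poisoned[i] = (x, 1 - y)  # invert the binary label
    return poisoned

clean = [([0.1 * i], i % 2) for i in range(10)]
dirty = poison_labels(clean, flip_fraction=0.3)
changed = sum(a[1] != b[1] for a, b in zip(clean, dirty))
print(changed)  # 3 of 10 labels flipped; features are untouched
```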

Model Inversion

Attack where an adversary attempts to reconstruct sensitive training data from model updates or predictions. This threat compromises the confidentiality of participant data in the federated system.

Differential Privacy

Protection framework ensuring that the presence or absence of an individual in the database does not significantly alter the results. This technique adds controlled noise to preserve participant anonymity.
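
A minimal sketch of the Laplace mechanism, the classic way this controlled noise is added: a counting query has sensitivity 1, so Laplace(0, 1/ε) noise makes the released count ε-differentially private (the vote data and ε value are illustrative):

```python
import math
import random

def private_count(values, epsilon, rng=None):
    """Laplace mechanism for a counting query (sensitivity 1): adding
    Laplace(0, 1/epsilon) noise yields an epsilon-DP released count."""
    rng = rng or random.Random(0)
    true_count = sum(values)
    # Inverse-CDF sampling of a Laplace(0, 1/epsilon) variate
    u = rng.random() - 0.5
    noise = -(1.0 / epsilon) * math.copysign(1.0, u) * math.log(1 - 2 * abs(u))
    return true_count + noise

votes = [1] * 40 + [0] * 60   # 40 positives among 100 records
noisy = private_count(votes, epsilon=1.0)
print(noisy)  # close to 40, but any single record could be absent
```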

Evasion Attack

Attack aimed at deceiving the federated model by creating specially designed inputs to provoke incorrect predictions. These attacks exploit model vulnerabilities without requiring access to training data.
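
A minimal sketch of such a specially designed input against a linear classifier, in the spirit of the fast gradient sign method: each feature is nudged against the sign of its weight until the prediction flips (weights and inputs are illustrative):

```python
def predict(w, b, x):
    """Linear classifier: 1 if w.x + b > 0, else 0."""
    score = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1 if score > 0 else 0

def evade(w, b, x, eps):
    """FGSM-style evasion on a linear model: move each feature by eps
    against the sign of its weight to push the score across the boundary."""
    direction = 1 if predict(w, b, x) == 1 else -1
    return [xi - direction * eps * (1 if wi > 0 else -1)
            for wi, xi in zip(w, x)]

w, b = [2.0, -1.0], 0.0
x = [1.0, 0.5]                 # score = 1.5, predicted class 1
x_adv = evade(w, b, x, eps=1.0)
print(predict(w, b, x), predict(w, b, x_adv))  # 1 0
```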

Verification Mechanism

System for detecting abnormal participant behaviors in a federated learning environment. These mechanisms identify and isolate potentially malicious clients to maintain model integrity.
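
A minimal server-side sketch of one such mechanism: flag clients whose update magnitude deviates strongly from the cohort via a z-score test (the norms and threshold are illustrative; a lone large outlier inflates the cohort stdev, so small cohorts need a loose threshold):

```python
import statistics

def flag_anomalous_clients(update_norms, z_threshold=1.5):
    """Flag clients whose update norm deviates strongly from the cohort,
    a basic verification mechanism against malicious participants."""
    mean = statistics.mean(update_norms)
    stdev = statistics.pstdev(update_norms)
    if stdev == 0:
        return []
    return [i for i, n in enumerate(update_norms)
            if abs(n - mean) / stdev > z_threshold]

norms = [1.0, 1.1, 0.9, 1.05, 9.0]   # client 4 sends a huge update
print(flag_anomalous_clients(norms))  # [4]
```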

Client Clustering

Technique for grouping participants based on the similarity of their local data to optimize learning. This approach reduces the impact of attacks and improves global model convergence.
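
A minimal sketch, assuming each client is summarized by a single scalar (here, the positive-label fraction of its local data) and grouped with a tiny 1-D 2-means:

```python
def cluster_clients(fractions, iters=10):
    """Group clients into two clusters by the positive-label fraction
    of their local data, via a tiny 1-D 2-means."""
    centers = [min(fractions), max(fractions)]
    assign = []
    for _ in range(iters):
        assign = [0 if abs(f - centers[0]) <= abs(f - centers[1]) else 1
                  for f in fractions]
        for c in (0, 1):
            members = [f for f, a in zip(fractions, assign) if a == c]
            if members:
                centers[c] = sum(members) / len(members)
    return assign

fracs = [0.10, 0.15, 0.20, 0.80, 0.90]
print(cluster_clients(fracs))  # [0, 0, 0, 1, 1]
```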

Adversarial Perturbation

Subtle and intentional modification of data to mislead the federated learning model. These perturbations can be designed to be imperceptible to humans but devastating for predictions.

Homomorphic Encryption

Cryptographic technique allowing computations on encrypted data without prior decryption. This approach ensures complete confidentiality during the aggregation of federated updates.
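
A toy demonstration of the core idea, not a scheme used in practice: textbook RSA (tiny key, no padding, insecure) is multiplicatively homomorphic, so multiplying two ciphertexts yields the encryption of the product; practical federated aggregation instead uses additively homomorphic schemes such as Paillier:

```python
# Textbook RSA satisfies Enc(a) * Enc(b) mod n == Enc(a * b mod n).
p, q, e = 61, 53, 17
n = p * q                       # modulus (3233)
phi = (p - 1) * (q - 1)
d = pow(e, -1, phi)             # private exponent (Python 3.8+)

def enc(m):
    return pow(m, e, n)

def dec(c):
    return pow(c, d, n)

a, b = 7, 6
product_cipher = (enc(a) * enc(b)) % n  # computed on ciphertexts only
print(dec(product_cipher))  # 42, equal to a * b, never decrypted en route
```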

Model Robustness

The federated model's ability to maintain its performance against adversarial attacks and corrupted data. Robustness is measured by the model's resistance to malicious perturbations and overfitting.

Inference Attack

An attack where an adversary extracts sensitive information about training data by analyzing model outputs. This threat exploits information leakage through gradients or predictions.

Data Sanitization

The process of cleaning and validating local data before its use in federated learning. This step eliminates anomalies and attempts to inject malicious data.
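
A minimal sketch of one such cleaning pass on scalar samples, using a median/MAD filter rather than mean/stdev so the statistics themselves are robust to the injected outliers being removed (data and threshold are illustrative):

```python
import statistics

def sanitize(samples, k=3.0):
    """Drop samples whose deviation from the median exceeds k times the
    median absolute deviation (MAD): a basic sanitization pass."""
    med = statistics.median(samples)
    mad = statistics.median(abs(s - med) for s in samples)
    if mad == 0:
        return list(samples)
    return [s for s in samples if abs(s - med) / mad <= k]

data = [1.0, 1.2, 0.9, 1.1, 50.0]  # 50.0 looks injected
print(sanitize(data))  # [1.0, 1.2, 0.9, 1.1]
```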

Secure Communication

Protocols for transferring model updates between clients and central server with end-to-end encryption. These protocols prevent interception and modification of gradients during transmission.

Federated Cross-Validation

A method for evaluating model performance using validation data distributed across multiple clients. This approach ensures unbiased and representative evaluation of model generalization.
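
A minimal sketch of the evaluation step, assuming each client holds a local validation set and accuracy is pooled across clients (the toy threshold model and data are illustrative):

```python
def federated_cv_score(model, client_val_sets):
    """Evaluate the global model on each client's held-out local data
    and pool the accuracy across all clients."""
    total, correct = 0, 0
    for val_set in client_val_sets:
        for x, y in val_set:
            correct += int(model(x) == y)
            total += 1
    return correct / total

model = lambda x: 1 if x > 0.5 else 0
clients = [
    [(0.9, 1), (0.2, 0)],
    [(0.7, 1), (0.4, 0), (0.6, 0)],  # one sample the model misclassifies
]
score = federated_cv_score(model, clients)
print(score)  # 0.8
```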

Data Isolation

A fundamental principle of federated learning ensuring that raw data never leaves its local environment. Only model parameters or gradients are shared to preserve confidentiality.
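
A minimal FedAvg-style sketch of this principle on a toy one-parameter model: each client's raw data stays inside its own function, and only updated weights cross the client/server boundary (model, data, and learning rate are illustrative):

```python
def local_update(w, local_data, lr=0.1):
    """Client side: one gradient step on y = w * x with squared loss.
    The raw data never leaves this function; only the new weight does."""
    grad = sum(2 * (w * x - y) * x for x, y in local_data) / len(local_data)
    return w - lr * grad

def federated_round(w, clients):
    """Server side: average the weights returned by the clients (FedAvg)."""
    updates = [local_update(w, data) for data in clients]
    return sum(updates) / len(updates)

clients = [[(1.0, 2.0)], [(2.0, 4.0)]]  # both consistent with w = 2
w = 0.0
for _ in range(50):
    w = federated_round(w, clients)
print(round(w, 2))  # 2.0 — converges without any data leaving a client
```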

Random Sampling

Probabilistic selection of participating clients for each training round to diversify learning sources. This technique reduces the risk of coordinated attacks and improves model convergence.
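
A minimal sketch of per-round client selection, drawing a fresh random subset of client IDs each round (the pool size, fraction, and seed are illustrative):

```python
import random

def sample_clients(client_ids, fraction, rng):
    """Pick a uniform random subset of clients for one training round."""
    k = max(1, int(len(client_ids) * fraction))
    return rng.sample(client_ids, k)

rng = random.Random(42)
clients = list(range(100))
round_1 = sample_clients(clients, 0.1, rng)
round_2 = sample_clients(clients, 0.1, rng)
print(len(round_1))  # 10 clients per round, varying from round to round
```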
