AI Glossary
The complete dictionary of Artificial Intelligence
Poisoning Attack
Attack in which corrupted or mislabeled data is injected into the training set to degrade the federated model's performance. The objective is to bias predictions or implant specific vulnerabilities such as backdoors.
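A minimal sketch of the data side of such an attack: a label-flipping poisoner run by a malicious client before local training. The function name `poison_labels` and all parameters are invented for this illustration.

```python
import numpy as np

def poison_labels(y, flip_fraction=0.2, num_classes=10, seed=0):
    """Toy label-flipping poisoning: flip a fraction of the local
    labels to a different (wrong) class before training."""
    rng = np.random.default_rng(seed)
    y = y.copy()
    idx = rng.choice(len(y), size=int(flip_fraction * len(y)), replace=False)
    # Shifting by 1..num_classes-1 (mod num_classes) guarantees a wrong class
    y[idx] = (y[idx] + rng.integers(1, num_classes, size=len(idx))) % num_classes
    return y

clean = np.zeros(100, dtype=int)       # 100 examples, all labeled class 0
poisoned = poison_labels(clean)        # 20% of labels silently corrupted
```

Defenses such as data sanitization and server-side update verification (see the corresponding entries) target exactly this kind of manipulation.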
Model Inversion
Attack where an adversary attempts to reconstruct sensitive training data from model updates or predictions. This threat compromises the confidentiality of participant data in the federated system.
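A concrete instance of this leakage, in a hedged sketch: for a linear model with a bias term and a batch of size one, the shared squared-loss gradients fully determine the private input, since the weight gradient is the input scaled by the bias gradient. All variable names here are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
x_private = rng.normal(size=4)   # client's private training example
y = 1.0                          # its private label
w = rng.normal(size=4)           # current global model weights

# Single-example squared-loss gradients the client would share:
residual = 2 * (w @ x_private - y)
grad_w = residual * x_private    # gradient w.r.t. the weights
grad_b = residual                # gradient w.r.t. the bias

# The server divides the two shared quantities and recovers the input
x_recovered = grad_w / grad_b
```

This is why batch-size-one updates are considered especially dangerous, and why defenses such as differential privacy and secure aggregation exist.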
Differential Privacy
Protection framework ensuring that the presence or absence of an individual in the database does not significantly alter the results. This technique adds controlled noise to preserve participant anonymity.
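The classic numeric instance is the Laplace mechanism, which calibrates noise to the query's sensitivity divided by the privacy budget epsilon. A minimal sketch (the function name `laplace_mechanism` is invented for this example):

```python
import numpy as np

def laplace_mechanism(true_value, sensitivity, epsilon, rng=None):
    """Release true_value + Laplace noise with scale sensitivity/epsilon,
    the standard epsilon-DP mechanism for numeric queries."""
    rng = rng or np.random.default_rng()
    scale = sensitivity / epsilon
    return true_value + rng.laplace(0.0, scale)

# Counting query: one person changes the count by at most 1 (sensitivity 1)
rng = np.random.default_rng(42)
noisy_count = laplace_mechanism(100, sensitivity=1.0, epsilon=0.5, rng=rng)
```

Smaller epsilon means more noise and stronger privacy; in federated learning the noise is typically added to clipped model updates rather than raw counts.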
Evasion Attack
Attack aimed at deceiving the federated model by creating specially designed inputs to provoke incorrect predictions. These attacks exploit model vulnerabilities without requiring access to training data.
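One well-known recipe is the fast gradient sign method (FGSM): step the input in the sign of the loss gradient. A minimal sketch on a logistic model, with all names (`fgsm_perturb`, the toy weights) invented for the example:

```python
import numpy as np

def fgsm_perturb(x, w, b, y, epsilon=0.1):
    """FGSM evasion on a logistic model: move the input by epsilon in
    the sign of the loss gradient w.r.t. the input."""
    p = 1.0 / (1.0 + np.exp(-(x @ w + b)))   # predicted probability
    grad_x = (p - y) * w                     # d(logistic loss)/dx
    return x + epsilon * np.sign(grad_x)

w = np.array([2.0, -1.0]); b = 0.0
x = np.array([1.0, 1.0])                 # score w.x+b = 1 -> positive class
x_adv = fgsm_perturb(x, w, b, y=1, epsilon=0.6)
# x_adv gets a negative score: the prediction has been flipped
```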
Verification Mechanism
System for detecting abnormal participant behaviors in a federated learning environment. These mechanisms identify and isolate potentially malicious clients to maintain model integrity.
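A simple server-side check of this kind flags client updates whose norm deviates strongly from the median, since boosted or scaled malicious updates tend to stand out. This sketch (function name and threshold invented for the example) uses a median-absolute-deviation score:

```python
import numpy as np

def flag_anomalous_updates(updates, z_thresh=2.5):
    """Flag client updates whose L2 norm is a strong outlier relative
    to the median norm (a cheap server-side sanity check)."""
    norms = np.array([np.linalg.norm(u) for u in updates])
    med = np.median(norms)
    mad = np.median(np.abs(norms - med)) + 1e-12   # robust spread estimate
    scores = np.abs(norms - med) / mad
    return [i for i, s in enumerate(scores) if s > z_thresh]

honest = [np.random.default_rng(i).normal(0, 1, 10) for i in range(5)]
malicious = [np.ones(10) * 50.0]       # a boosted (scaled-up) update
flagged = flag_anomalous_updates(honest + malicious)
```

Real deployments combine several such signals (norms, cosine similarity to the mean update, historical behavior) before isolating a client.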
Client Clustering
Technique for grouping participants based on the similarity of their local data to optimize learning. This approach reduces the impact of attacks and improves global model convergence.
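A minimal sketch of the grouping step, assuming each client is summarized by a small statistic of its local data (here, a feature mean) and clustered with a hand-rolled k-means. All names and the toy data are invented for the example:

```python
import numpy as np

def kmeans(points, k=2, iters=20, seed=0):
    """Minimal k-means used to group clients by a summary of their
    local data (a cheap proxy for data-distribution similarity)."""
    rng = np.random.default_rng(seed)
    centers = points[rng.choice(len(points), k, replace=False)]
    for _ in range(iters):
        dists = np.linalg.norm(points[:, None] - centers[None], axis=2)
        labels = np.argmin(dists, axis=1)
        centers = np.array([points[labels == j].mean(axis=0) for j in range(k)])
    return labels

# Each row summarizes one client's local data (e.g. its feature mean)
client_stats = np.array([[0.0, 0.1], [0.2, 0.0], [5.0, 5.1], [4.9, 5.0]])
labels = kmeans(client_stats, k=2)     # clients 0,1 vs clients 2,3
```

The server can then train one model per cluster, or weight aggregation within clusters, limiting the reach of any single malicious group.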
Adversarial Perturbation
Subtle, intentional modification of inputs designed to mislead the federated learning model. These perturbations can be imperceptible to humans yet still flip the model's predictions.
Homomorphic Encryption
Cryptographic technique allowing computations on encrypted data without prior decryption. This approach ensures complete confidentiality during the aggregation of federated updates.
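The relevant property for federated aggregation is additive homomorphism: multiplying ciphertexts decrypts to the sum of the plaintexts, so a server can total encrypted client updates without seeing any of them. A toy Paillier cryptosystem illustrates this; the tiny primes here are for demonstration only and offer no real security:

```python
from math import gcd
import random

# Toy Paillier keys (real deployments use ~2048-bit primes)
p, q = 2003, 2011
n, n2 = p * q, (p * q) ** 2
lam = (p - 1) * (q - 1) // gcd(p - 1, q - 1)   # lcm(p-1, q-1)
g = n + 1

def L(x):
    return (x - 1) // n

mu = pow(L(pow(g, lam, n2)), -1, n)            # decryption constant

def encrypt(m):
    r = random.randrange(1, n)
    while gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    return (L(pow(c, lam, n2)) * mu) % n

# Multiplying ciphertexts adds the underlying plaintexts: Enc(20)*Enc(22)
c = (encrypt(20) * encrypt(22)) % n2
total = decrypt(c)                              # 42, computed blind
```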
Model Robustness
The federated model's ability to maintain its performance in the face of adversarial attacks and corrupted data. Robustness is measured by the model's resistance to malicious perturbations and by its ability to generalize without overfitting.
Inference Attack
An attack where an adversary extracts sensitive information about training data by analyzing model outputs. This threat exploits information leakage through gradients or predictions.
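The textbook example is membership inference via a loss threshold: an overfit model has much lower loss on its training members, so low loss betrays membership. A hedged sketch, with an intentionally overfit linear model and invented names:

```python
import numpy as np

rng = np.random.default_rng(0)

# 5 features, 5 training points: the linear model interpolates them
# exactly, so member loss is ~0 while non-member loss is not.
X_mem = rng.normal(size=(5, 5))
y_mem = rng.normal(size=5)
w = np.linalg.solve(X_mem, y_mem)      # exact fit on the members
X_non = rng.normal(size=(5, 5))        # points the model never saw
y_non = rng.normal(size=5)

def is_member(x, y, tau=1e-6):
    """Guess membership from per-example squared loss (low => member)."""
    return (x @ w - y) ** 2 < tau

members_flagged = [is_member(x, y) for x, y in zip(X_mem, y_mem)]
non_flagged = [is_member(x, y) for x, y in zip(X_non, y_non)]
```

Reducing overfitting and adding differential-privacy noise both shrink the loss gap this attack relies on.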
Data Sanitization
The process of cleaning and validating local data before its use in federated learning. This step eliminates anomalies and attempts to inject malicious data.
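One simple sanitization pass drops rows with extreme z-scores before local training, which removes gross outliers and crude injection attempts. A minimal sketch (the function name `sanitize` and threshold are invented for the example):

```python
import numpy as np

def sanitize(x, z_thresh=3.0):
    """Keep only rows whose per-feature z-scores are all below the
    threshold (simple outlier / injection filtering)."""
    mu, sigma = x.mean(axis=0), x.std(axis=0) + 1e-12
    z = np.abs((x - mu) / sigma)
    return x[(z < z_thresh).all(axis=1)]

data = np.vstack([np.random.default_rng(0).normal(0, 1, (200, 2)),
                  [[100.0, 100.0]]])   # one injected extreme row
clean = sanitize(data)                 # the injected row is dropped
```

This catches only crude manipulation; subtle poisoning (e.g. flipped labels within the normal data range) needs the verification mechanisms described above.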
Secure Communication
Protocols for transferring model updates between clients and central server with end-to-end encryption. These protocols prevent interception and modification of gradients during transmission.
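Confidentiality is typically delegated to TLS, but the integrity piece can be sketched with Python's standard `hmac` module: the client signs each serialized update with a shared key so the server detects in-transit modification. The key and payload here are invented for the example:

```python
import hmac, hashlib, json

key = b"shared-session-key"   # hypothetical pre-shared key (use TLS/KDFs in practice)

def sign_update(update, key):
    """Serialize a model update and attach an HMAC-SHA256 tag."""
    payload = json.dumps(update).encode()
    tag = hmac.new(key, payload, hashlib.sha256).hexdigest()
    return payload, tag

def verify_update(payload, tag, key):
    """Recompute the tag and compare in constant time."""
    expected = hmac.new(key, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)

payload, tag = sign_update({"layer1": [0.1, -0.2]}, key)
ok = verify_update(payload, tag, key)                          # True
tampered = verify_update(payload.replace(b"0.1", b"9.9"), tag, key)  # False
```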
Federated Cross-Validation
A method for evaluating model performance using validation data distributed across multiple clients. This approach ensures unbiased and representative evaluation of model generalization.
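In the simplest variant each client evaluates the global model on its local validation split and the server aggregates the scores, weighting by dataset size so large clients count proportionally. A minimal sketch with invented names and toy numbers:

```python
import numpy as np

def federated_validation(client_metrics, client_sizes):
    """Size-weighted average of per-client validation scores, so the
    global estimate reflects the overall data distribution."""
    sizes = np.asarray(client_sizes, dtype=float)
    return float(np.average(client_metrics, weights=sizes))

# Three clients report local accuracy on 100, 50 and 50 held-out examples
score = federated_validation([0.90, 0.80, 0.70], [100, 50, 50])  # 0.825
```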
Data Isolation
A fundamental principle of federated learning ensuring that raw data never leaves its local environment. Only model parameters or gradients are shared to preserve confidentiality.
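The principle is visible in the structure of federated averaging (FedAvg): each client trains locally and returns only a parameter vector, never `(X, y)`. A hedged sketch on a toy linear-regression task, with all names invented for the example:

```python
import numpy as np

def local_train(w, X, y, lr=0.1, steps=50):
    """One client's local SGD on linear regression; (X, y) stay local."""
    w = w.copy()
    for _ in range(steps):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w                       # only parameters leave the client

def fedavg(w, clients):
    """Server averages the returned parameter vectors (FedAvg)."""
    return np.mean([local_train(w, X, y) for X, y in clients], axis=0)

rng = np.random.default_rng(0)
true_w = np.array([3.0, -2.0])
clients = [(X := rng.normal(size=(50, 2)), X @ true_w) for _ in range(4)]

w = np.zeros(2)
for _ in range(5):                 # 5 communication rounds
    w = fedavg(w, clients)         # w converges to true_w
```

Note the server only ever touches the returned weight vectors; the raw arrays never cross the function boundary back to it.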
Random Sampling
Probabilistic selection of participating clients for each training round to diversify learning sources. This technique reduces the risk of coordinated attacks and improves model convergence.
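The selection step itself is small; a minimal sketch using the standard library (the function name `sample_clients` and the fraction are invented for the example):

```python
import random

def sample_clients(client_ids, fraction=0.2, seed=None):
    """Uniformly sample a fraction of clients (at least one) for a round."""
    rng = random.Random(seed)
    k = max(1, int(fraction * len(client_ids)))
    return rng.sample(client_ids, k)

# Pick 10% of 100 registered clients for this training round
round_clients = sample_clients(list(range(100)), fraction=0.1, seed=1)
```

Because the participant set changes every round, an attacker controlling a fixed subset of clients is only intermittently selected, diluting coordinated manipulation.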