Model Robustness
Universal Adversarial Attacks
A type of attack in which a single, input-agnostic perturbation fools a model across a wide range of different inputs. These attacks are particularly dangerous because the attacker does not need to compute a new perturbation for each sample: one precomputed perturbation transfers across inputs.
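The core idea can be sketched with a toy greedy procedure: accumulate one shared perturbation over many samples, stepping it toward misclassification whenever a sample is still classified correctly, and projecting it back onto a small norm ball. This is a minimal NumPy illustration against an assumed linear binary classifier; the model, data, step size, and epsilon are all illustrative assumptions, not a specific attack from the literature.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed toy model: linear binary classifier, class 1 if w @ x + b > 0
w = rng.normal(size=20)
b = 0.0

def predict(X):
    return (X @ w + b > 0).astype(int)

# Clean inputs; labels are the model's own predictions, so the model
# is 100% "correct" on them before the attack
X = rng.normal(size=(100, 20))
y = predict(X)

eps = 0.5            # maximum L2 norm of the universal perturbation
delta = np.zeros(20) # the single shared perturbation being built

# Greedy accumulation: for each sample the model still gets right,
# nudge the shared perturbation in the direction that lowers (or
# raises) the model's score for that sample's label, then project
# back onto the epsilon ball so the perturbation stays small.
for _ in range(10):  # a few passes over the data
    for x, label in zip(X, y):
        if predict((x + delta)[None])[0] == label:
            step = -w if label == 1 else w
            delta += 0.05 * step / np.linalg.norm(w)
            n = np.linalg.norm(delta)
            if n > eps:
                delta *= eps / n  # L2 projection

# Fraction of inputs the ONE perturbation now flips
fool_rate = np.mean(predict(X + delta) != y)
```

Note that `delta` is computed once and then added unchanged to every input; its fooling rate measures how universally it transfers across the dataset.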