Adversarial Attacks and Defenses
Universal Adversarial Perturbations Attack
An attack aimed at finding a single, input-agnostic perturbation (a fixed noise pattern added to every input) that can fool a model on a large fraction of inputs, regardless of their specific content. A sketch of how such a perturbation can be computed is shown below.
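The following is a minimal, illustrative sketch of building a universal perturbation by aggregating gradient steps over a dataset, a simplified gradient-based variant rather than the original DeepFool-based procedure of Moosavi-Dezfooli et al. All names (`model`, `data_loader`, `epsilon`, `step`, `epochs`) are assumptions for illustration, not part of the source.

```python
# Simplified universal adversarial perturbation (UAP) sketch in PyTorch.
# Assumption: `model` is a classifier and `data_loader` yields (inputs, labels).
import torch
import torch.nn.functional as F

def universal_perturbation(model, data_loader, epsilon=0.05, step=0.01, epochs=5):
    """Return a single perturbation `v` meant to fool the model on most inputs."""
    model.eval()
    v = None  # the shared perturbation, applied to every input
    for _ in range(epochs):
        for x, y in data_loader:
            if v is None:
                v = torch.zeros_like(x[:1])  # one perturbation, broadcast over the batch
            x_adv = (x + v).clone().requires_grad_(True)
            loss = F.cross_entropy(model(x_adv), y)  # push predictions away from true labels
            loss.backward()
            # FGSM-style update using the sign of the gradient averaged over the batch
            v = v + step * x_adv.grad.mean(dim=0, keepdim=True).sign()
            # Project back onto the L-infinity ball so the perturbation stays imperceptible
            v = v.clamp(-epsilon, epsilon).detach()
    return v
```

The key point the sketch illustrates is that `v` is optimized across many inputs and then reused unchanged at attack time: a single `x + v` is enough to perturb any new input, with no per-sample optimization.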