Model Evasion
Adversarial Examples
Inputs crafted to deceive a machine learning model: a small perturbation, imperceptible to humans, is added to a legitimate input, exploiting the model's vulnerabilities to cause an incorrect prediction.
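A minimal sketch of how such an input can be crafted with the fast gradient sign method (FGSM), one common attack: the input is nudged by a small step in the direction of the loss gradient's sign. The logistic-regression "model", its weights, and the example input below are all made up for illustration.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical fixed-weight logistic-regression model.
w = np.array([2.0, -3.0, 1.0])
b = 0.0

def predict(x):
    return sigmoid(w @ x + b)  # probability of class 1

# Clean input, correctly classified as class 1.
x = np.array([0.5, 0.2, 0.1])
y = 1.0

# Gradient of the cross-entropy loss w.r.t. the input is (p - y) * w.
p = predict(x)
grad_x = (p - y) * w

# FGSM: step of size eps in the sign of the gradient (increases the loss).
eps = 0.1
x_adv = x + eps * np.sign(grad_x)

print(round(float(predict(x)), 2))      # ~0.62 -> class 1
print(round(float(predict(x_adv)), 2))  # ~0.48 -> prediction flips to class 0
```

Each feature moves by at most eps, so the adversarial input stays close to the original, yet the model's decision changes.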