Model Robustness
Data poisoning
An attack that injects malicious data into the training set to compromise the performance of the final model. The objective is to create backdoors or to systematically degrade predictions on specific targets.
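A minimal sketch of one such attack, a backdoor via poisoned training samples, is shown below. It is illustrative only: the trigger pattern, `poison_fraction`, `target_label`, and the use of a simple logistic regression model are assumptions chosen for the example, not a description of any specific real-world incident.

```python
# Illustrative backdoor data-poisoning sketch (hypothetical parameters).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Clean binary classification data.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

def add_trigger(samples, trigger_value=4.0):
    """Stamp a fixed 'trigger' pattern onto the last two features."""
    poisoned = samples.copy()
    poisoned[:, -2:] = trigger_value
    return poisoned

# Poison a small fraction of the training set: add the trigger and
# relabel those samples to the attacker's chosen target class.
poison_fraction = 0.05   # assumed attacker budget
target_label = 1         # assumed attacker-chosen class
n_poison = int(poison_fraction * len(X_train))
idx = rng.choice(len(X_train), size=n_poison, replace=False)
X_train_poisoned = X_train.copy()
y_train_poisoned = y_train.copy()
X_train_poisoned[idx] = add_trigger(X_train_poisoned[idx])
y_train_poisoned[idx] = target_label

# Train on the poisoned data.
model = LogisticRegression(max_iter=1000).fit(X_train_poisoned, y_train_poisoned)

# Clean accuracy stays largely intact, but inputs carrying the trigger
# are steered toward the attacker's target class.
print("clean test accuracy:", model.score(X_test, y_test))
X_test_triggered = add_trigger(X_test)
backdoor_rate = np.mean(model.predict(X_test_triggered) == target_label)
print("fraction of triggered inputs classified as target:", backdoor_rate)
```

The key property the sketch illustrates is that the poisoned model behaves normally on clean inputs, which is what makes this class of attack hard to detect through standard accuracy checks alone.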