AI Glossary
The complete AI glossary
Defensive Distillation
Defense method training a network to learn the soft probabilities of a pre-trained model, reducing sensitivity to adversarial perturbations by smoothing the decision surface.
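The core of distillation is a temperature-scaled softmax: the teacher's soft probabilities at high temperature become the student's training targets. A minimal sketch, where the logits and `T=20` are illustrative values only:

```python
import math

def softmax_with_temperature(logits, T=20.0):
    """Soft probabilities at temperature T; higher T smooths the distribution."""
    scaled = [z / T for z in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

# At T=1 the distribution is peaked; at T=20 it is much smoother,
# and these smoothed targets are what the distilled student is trained on.
hard = softmax_with_temperature([4.0, 1.0, 0.5], T=1.0)
soft = softmax_with_temperature([4.0, 1.0, 0.5], T=20.0)
```

The smoother targets carry relative class information, which is what flattens the student's decision surface.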
Obfuscated Gradients
Phenomenon where defenses intentionally or accidentally mask gradients, creating a false impression of robustness while remaining vulnerable to alternative attacks.
Gradient Shattering
Technique introducing discontinuities or oscillations in the gradient landscape to disrupt iterative optimization-based attack methods.
Gradient Regularization
Approach penalizing high gradients during training to reduce the model's sensitivity to small input perturbations and improve overall robustness.
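The penalty term is the norm of the loss gradient with respect to the input. A sketch using a finite-difference estimate (the loss function here is a hypothetical stand-in for a real model's loss):

```python
def gradient_penalty(loss_fn, x, h=1e-5):
    """Squared L2 norm of the input gradient, estimated by central
    finite differences; added to the training loss to discourage
    sensitivity to small input perturbations."""
    grad = []
    for i in range(len(x)):
        xp = list(x); xp[i] += h
        xm = list(x); xm[i] -= h
        grad.append((loss_fn(xp) - loss_fn(xm)) / (2 * h))
    return sum(g * g for g in grad)

# Toy linear loss with gradient (2, 3), so the penalty is 2**2 + 3**2 = 13.
toy_loss = lambda x: 2 * x[0] + 3 * x[1]
penalty = gradient_penalty(toy_loss, [0.0, 0.0])
```

In practice frameworks compute this gradient by automatic differentiation rather than finite differences; the finite-difference form is only for illustration.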
Randomized Smoothing
Method certifying robustness by adding Gaussian noise to inputs; the resulting smoothed classifier admits provable robustness guarantees within a computable radius around each input.
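The smoothed classifier is the majority vote of the base classifier over Gaussian-perturbed copies of the input. A minimal sketch; the base classifier, `sigma`, and sample count are illustrative assumptions:

```python
import random

def smoothed_predict(classifier, x, sigma=0.5, n_samples=1000, seed=0):
    """Majority vote of the base classifier over Gaussian-perturbed copies of x."""
    rng = random.Random(seed)
    votes = {}
    for _ in range(n_samples):
        noisy = [xi + rng.gauss(0.0, sigma) for xi in x]
        label = classifier(noisy)
        votes[label] = votes.get(label, 0) + 1
    return max(votes, key=votes.get)

# Toy base classifier: sign of the first coordinate.
base = lambda x: 1 if x[0] >= 0 else 0
pred = smoothed_predict(base, [0.8, 0.0])
```

The certified radius then follows from how dominant the majority vote is; real implementations bound that probability statistically rather than trusting the raw vote count.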
Input Transformation
Defense applying non-differentiable or non-invertible transformations to inputs before classification, such as compression or resampling, to neutralize adversarial perturbations.
Feature Squeezing
Technique reducing input feature complexity by decreasing pixel precision or color space, thereby eliminating imperceptible perturbations used in attacks.
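Bit-depth reduction is the simplest form of feature squeezing: quantizing pixel values so that sub-quantization perturbations collapse to the same value. A minimal sketch (the 3-bit setting is illustrative):

```python
def squeeze_bit_depth(pixels, bits=3):
    """Quantize [0, 1] pixel values to 2**bits levels, removing
    fine-grained perturbations below the quantization step."""
    levels = 2 ** bits - 1
    return [round(p * levels) / levels for p in pixels]

# Two pixels differing by an imperceptible 0.01 squeeze to the same value.
squeezed = squeeze_bit_depth([0.5, 0.51], bits=3)
```

Comparing the model's prediction on the original and squeezed input is also used as an adversarial-example detector.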
Non-differentiable Defense
Protection strategy integrating non-differentiable operations into the classification pipeline to prevent attackers from efficiently computing gradients.
Gradient Obfuscation
Set of techniques making gradients unusable by numerical attack methods, including masking, shattering, or falsifying gradient information.
Certified Defenses
Approaches providing provable mathematical guarantees on model robustness within a specified perturbation radius, avoiding false impressions of security.
Jacobian-based Saliency Map Attack Defense
Countermeasures specifically designed to neutralize Jacobian-based saliency map attacks by modifying the network structure or its gradient propagation.
PGD-based Robustness
Evaluation and improvement of robustness using Projected Gradient Descent (PGD) as a reference attack to measure and optimize model resistance.
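An L-infinity PGD step ascends the signed loss gradient, then projects back into the epsilon-ball around the original input. A minimal sketch on a toy linear loss; the gradient function, `epsilon`, and step size are illustrative:

```python
def pgd_attack(grad_fn, x, epsilon=0.1, alpha=0.02, steps=20):
    """L-infinity PGD: take signed gradient-ascent steps, then project
    each coordinate back into the epsilon-ball around the original x."""
    x_adv = list(x)
    for _ in range(steps):
        g = grad_fn(x_adv)
        # signed gradient-ascent step
        x_adv = [xi + alpha * (1 if gi >= 0 else -1) for xi, gi in zip(x_adv, g)]
        # projection: clip each coordinate to [x_i - epsilon, x_i + epsilon]
        x_adv = [min(max(xa, xi - epsilon), xi + epsilon)
                 for xa, xi in zip(x_adv, x)]
    return x_adv

# Toy linear loss L(x) = w . x, whose gradient is simply w everywhere.
w = [1.0, -2.0]
adv = pgd_attack(lambda x: w, [0.0, 0.0], epsilon=0.1)
```

For a linear loss the attack saturates at the corner of the epsilon-ball, which is exactly the behavior PGD-based evaluations exploit.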
Ensemble Methods
Use of multiple models with different architectures or initializations to diversify responses and reduce the effectiveness of attacks targeting a single vulnerability.
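The usual aggregation is to average each model's class probabilities before taking the argmax. A minimal sketch with three toy "models" standing in for real networks:

```python
def ensemble_predict(models, x):
    """Average per-class probabilities across models, then take the argmax."""
    probs = [m(x) for m in models]
    n = len(models)
    avg = [sum(p[i] for p in probs) / n for i in range(len(probs[0]))]
    return max(range(len(avg)), key=avg.__getitem__)

# Toy ensemble: fixed probability outputs that disagree on a borderline input.
m1 = lambda x: [0.6, 0.4]
m2 = lambda x: [0.3, 0.7]
m3 = lambda x: [0.45, 0.55]
label = ensemble_predict([m1, m2, m3], None)
```

An adversarial perturbation that flips one member often fails to flip the averaged vote, which is the diversification effect the entry describes.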
Lipschitz Continuity
Mathematical property guaranteeing limited variation of outputs relative to inputs, used to design networks intrinsically robust to perturbations.
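For a linear layer x -> Wx, the Lipschitz constant under the L2 norm is the spectral norm of W, which can be estimated by power iteration. A self-contained sketch (the 2x2 matrix is illustrative):

```python
def spectral_norm(W, iters=100):
    """Largest singular value of W via power iteration on W^T W;
    equals the L2 Lipschitz constant of the linear map x -> Wx."""
    n = len(W[0])
    v = [1.0] * n
    for _ in range(iters):
        # u = W v, then v = W^T u, renormalized
        u = [sum(W[i][j] * v[j] for j in range(n)) for i in range(len(W))]
        v = [sum(W[i][j] * u[i] for i in range(len(W))) for j in range(n)]
        norm = sum(vj * vj for vj in v) ** 0.5
        v = [vj / norm for vj in v]
    u = [sum(W[i][j] * v[j] for j in range(n)) for i in range(len(W))]
    return sum(ui * ui for ui in u) ** 0.5

# Diagonal matrix with singular values 3 and 1: Lipschitz constant 3.
W = [[3.0, 0.0], [0.0, 1.0]]
L = spectral_norm(W)
```

Constraining each layer's spectral norm bounds the whole network's Lipschitz constant by the product of the per-layer constants.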
Provably Robust Networks
Neural architectures designed with formal constraints mathematically guaranteeing their robustness under specified perturbation conditions.
Gradient-free Optimization Attacks
Attack methods bypassing gradient masking by using gradient-free optimization approaches such as genetic algorithms or simulated annealing.
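A simple instance is random-search hill climbing: propose random perturbations inside the epsilon-ball and keep those that increase the loss, with no gradient ever computed. The loss function and parameters below are illustrative:

```python
import random

def random_search_attack(loss_fn, x, epsilon=0.2, steps=500, seed=0):
    """Gradient-free attack: accept random perturbations inside the
    epsilon-ball around x whenever they increase the loss."""
    rng = random.Random(seed)
    best = list(x)
    best_loss = loss_fn(best)
    for _ in range(steps):
        cand = [min(max(xi + rng.uniform(-epsilon, epsilon), x0 - epsilon),
                    x0 + epsilon)
                for xi, x0 in zip(best, x)]
        cand_loss = loss_fn(cand)
        if cand_loss > best_loss:
            best, best_loss = cand, cand_loss
    return best

# Toy loss: sum of coordinates, maximized at the corner of the epsilon-ball.
toy_loss = lambda x: x[0] + x[1]
adv = random_search_attack(toy_loss, [0.0, 0.0])
```

Because only loss values are queried, masked or shattered gradients offer no protection against this kind of search.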
Thermometer Encoding
Input encoding technique transforming continuous features into ordered binary representations, reducing the attack surface and improving robustness.
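The encoding sets a 1 for every threshold the value reaches, like mercury filling a thermometer. A minimal sketch (10 levels is an illustrative choice):

```python
def thermometer_encode(value, levels=10):
    """Encode a value in [0, 1] as a cumulative ('thermometer') binary
    vector: bit i is 1 iff value >= (i + 1) / levels."""
    return [1 if value >= (i + 1) / levels else 0 for i in range(levels)]

# 0.35 reaches the 0.1, 0.2, and 0.3 thresholds but not 0.4.
code = thermometer_encode(0.35, levels=10)
```

Unlike one-hot encoding, nearby values share most of their bits, preserving order while discretizing away small continuous perturbations.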