AI Glossary
The complete AI glossary
Transfer Robustness
A model's ability to resist adversarial attacks originally crafted against other architectures, measuring how well its defenses generalize to unknown threats.
Source Model
In a transfer attack, the reference model on which adversarial examples are generated, serving as the starting point for compromising target models.
Target Model
Victim model targeted by a transfer attack, whose vulnerabilities are exploited through perturbations generated on a distinct source model.
Attack Space
Mathematical domain defining all possible perturbations that can be applied to input data to compromise a model, crucial for evaluating the transferability of attacks.
Attack Generalization
Property of an adversarial attack whereby it remains effective across multiple models, or multiple instances of the same model, regardless of their specific architectures or parameters.
Ensemble Method
Attack strategy combining multiple source models to generate more robust and transferable perturbations, significantly increasing the success rate against unknown targets.
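The gradient-averaging idea behind ensemble transfer attacks can be sketched as follows; the toy linear surrogates and the names `linear_grad` and `ensemble_perturbation` are illustrative assumptions, not any particular library's API:

```python
import numpy as np

def linear_grad(w, x, y):
    """Gradient w.r.t. the input x of a squared-error loss for a
    toy linear surrogate model with weights w (illustrative only)."""
    return 2.0 * (np.dot(w, x) - y) * w

def ensemble_perturbation(surrogates, x, y, eps=0.1):
    """FGSM-style step: average the input gradients over all surrogate
    (source) models, then take an eps-bounded sign step in that direction."""
    avg_grad = np.mean([linear_grad(w, x, y) for w in surrogates], axis=0)
    return eps * np.sign(avg_grad)

# Three surrogates with different weights stand in for model diversity.
surrogates = [np.array([1.0, -2.0]), np.array([0.5, -1.5]), np.array([2.0, -1.0])]
x = np.array([1.0, 1.0])
delta = ensemble_perturbation(surrogates, x, y=0.0)  # bounded by eps in L-infinity
```

Averaging gradients before taking the sign step is what makes the perturbation less tied to any single surrogate's quirks, which is the intuition behind the higher transfer rate.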
No-Box Attack
Extreme type of transfer attack where the attacker has no information about the target model, relying solely on the universal transferability of perturbations.
Norm Distance
Mathematical measure (L0, L1, L2, L∞) quantifying the magnitude of an adversarial perturbation, essential for evaluating transferability while keeping the attack imperceptible.
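The four norms can be computed directly on a perturbation vector delta = x_adv − x; a minimal NumPy sketch with toy values:

```python
import numpy as np

# Perturbation between an adversarial input and the original (toy values).
delta = np.array([0.0, -0.5, 0.25, 0.0])

l0 = int(np.count_nonzero(delta))       # L0: number of features changed
l1 = float(np.sum(np.abs(delta)))       # L1: total absolute change
l2 = float(np.sqrt(np.sum(delta**2)))   # L2: Euclidean magnitude
linf = float(np.max(np.abs(delta)))     # L-infinity: largest single-feature change
```

An attack bounded in L∞ changes every feature a little, while one bounded in L0 changes few features but possibly by a lot; the choice of norm defines what "imperceptible" means.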
Transferability Bound
Theoretical upper bound on the success rate of a transfer attack between two models, derived from their structural and functional similarities.
Model Diversity
Measure of variation between architectures, parameters, and training datasets of different models, directly influencing the transferability of adversarial attacks.
Gradient Alignment
Directional similarity between gradients of different models, serving as a predictive indicator of potential attack transferability between these models.
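Gradient alignment is typically measured as the cosine similarity between the two models' input gradients at the same point; a minimal sketch (the function name is an assumption):

```python
import numpy as np

def gradient_alignment(g_source, g_target):
    """Cosine similarity between two models' input gradients.
    Values near 1 indicate aligned gradients, a predictor of attack
    transferability; values near 0 indicate little shared direction."""
    return float(np.dot(g_source, g_target) /
                 (np.linalg.norm(g_source) * np.linalg.norm(g_target)))
```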
Transfer Defense
Defensive approach that exploits the transferability of attacks to build robust protections effective even against unknown or future attack models.
Model Evasion
Goal of a transfer attack: bypassing a model's detection or classification mechanisms undetected by exploiting its generalized weaknesses.
Decision Boundary
Mathematical boundary separating a model's prediction classes; the more similar two models' decision boundaries, the more likely a transfer attack between them will succeed.
Transfer Sensitivity
Quantitative measure of the vulnerability that different models share when facing the same adversarial perturbations, revealing systemic weaknesses in machine learning.
Transferability Metric
Quantitative indicator of the probability that an attack generated on a source model will compromise a target model, based on their structural or behavioral similarities.
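A common such indicator is the transfer success rate: among adversarial examples that fool the source model, the fraction that also fool the target. A minimal sketch (the function name `transfer_rate` is an assumption):

```python
def transfer_rate(fooled_source, fooled_target):
    """Fraction of source-successful adversarial examples that also
    fool the target model; returns 0.0 if none fooled the source."""
    on_target = [t for s, t in zip(fooled_source, fooled_target) if s]
    return sum(on_target) / len(on_target) if on_target else 0.0

# Per-example outcomes: did the adversarial example fool each model?
rate = transfer_rate([True, True, True, False],
                     [True, False, True, True])  # 2 of 3 transfer
```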