AI Glossary
A complete glossary of artificial intelligence
Model Extraction
Attack where an adversary recreates a machine learning model by querying the target model's API and using the responses to train a substitute model with equivalent capabilities.
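A minimal sketch of the idea, under assumed toy conditions: the "API" here is a stand-in linear classifier the attacker cannot inspect, and the substitute is a logistic model fitted to the labels returned by queries. Real extraction attacks target remote prediction services, not a local function.

```python
import numpy as np

# Hypothetical black-box target: the attacker sees only label outputs,
# never the weights (true_w is hidden behind query_target).
rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])

def query_target(x):
    # Stand-in for a remote prediction API returning hard class labels.
    return (x @ true_w > 0).astype(float)

# Attacker samples inputs, collects the API's labels...
X = rng.normal(size=(500, 2))
y = query_target(X)

# ...and trains a logistic-regression substitute by gradient descent.
w = np.zeros(2)
for _ in range(200):
    p = 1 / (1 + np.exp(-(X @ w)))
    w -= 0.5 * (X.T @ (p - y)) / len(X)

# The substitute now mimics the target's decisions on the queried inputs.
agreement = np.mean(query_target(X) == (X @ w > 0))
print(f"substitute agrees with target on {agreement:.0%} of queries")
```

The attack succeeds when the substitute's agreement rate with the target is high; the query budget and the sampling strategy are the attacker's main costs.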
Membership Inference Attack
Attack technique aimed at determining whether a specific data sample was used in a model's training dataset, thereby revealing information about private training data.
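One common instantiation is a confidence-threshold attack: overfitted models tend to be more confident on training members than on unseen samples. The confidences below are synthetic and purely illustrative of that gap.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical confidences: members skew toward 1.0, non-members sit
# nearer 0.5 (the signature of an overfitted model).
member_conf = rng.beta(8, 2, size=1000)
nonmember_conf = rng.beta(4, 4, size=1000)

# Threshold attack: predict "was in the training set" when the model's
# confidence on a sample exceeds tau.
tau = 0.7
tpr = np.mean(member_conf > tau)      # members correctly flagged
fpr = np.mean(nonmember_conf > tau)   # non-members wrongly flagged
print(f"TPR={tpr:.2f}  FPR={fpr:.2f}")
```

A true-positive rate well above the false-positive rate is what leaks membership; differential privacy and regularization shrink this gap.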
Model Inversion Attack
Attack that approximately reconstructs characteristics of the training data, such as a representative input for a given class, by exploiting a model's outputs and prediction confidences.
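A sketch of the core optimization, assuming white-box access to a toy linear class score s(x) = w·x: the attacker performs gradient ascent on the input to find an x the model scores maximally for the target class, which reveals the direction the class's training data pointed in.

```python
import numpy as np

# Assumed white-box linear score for the target class (illustrative weights).
w = np.array([1.0, -2.0, 0.5])

# Gradient ascent on the INPUT, with an L2 penalty keeping x bounded.
x = np.zeros(3)
for _ in range(100):
    grad = w - 0.1 * x   # d/dx of (w @ x - 0.05 * ||x||^2)
    x += 0.1 * grad

# The reconstruction converges toward a score-maximizing input aligned
# with w, i.e. a prototype of what the class "looks like" to the model.
print(np.round(x, 2))
```

Against neural networks the same loop runs through backpropagation to the input layer, and the recovered prototype can resemble actual training samples (e.g. faces in the classic attack on face-recognition models).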
Adversarial Examples
Inputs crafted with small, often human-imperceptible perturbations that exploit a machine learning model's vulnerabilities to cause incorrect predictions.
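The fast gradient sign method (FGSM) is the textbook construction: perturb the input a small step in the direction of the sign of the loss gradient. The toy logistic classifier and its weights below are assumed for illustration.

```python
import numpy as np

# Illustrative logistic classifier (weights chosen for the example).
w, b = np.array([1.5, -2.0]), 0.1

def predict(x):
    return 1 / (1 + np.exp(-(x @ w + b)))

x = np.array([1.0, 0.2])   # clean input, classified as positive
y = 1.0

# FGSM: for a logistic model with cross-entropy loss, dL/dx = (p - y) * w.
grad_x = (predict(x) - y) * w
eps = 0.5
x_adv = x + eps * np.sign(grad_x)   # small, bounded perturbation

print(predict(x), predict(x_adv))   # prediction flips across 0.5
```

The perturbation has L-infinity norm exactly eps, which is why adversarial robustness is usually stated with respect to a norm-bounded threat model.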
Data Poisoning Attack
Attack where an adversary deliberately inserts malicious data into the training dataset to compromise model performance or create exploitable backdoors.
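A label-flipping sketch of the performance-degradation variant, on assumed 1-D toy data: flipping labels on part of the training set visibly shifts the decision rule a simple nearest-mean classifier learns.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy 1-D training data: class 0 around 0, class 1 around 4.
x0 = rng.normal(0.0, 1.0, 200)
x1 = rng.normal(4.0, 1.0, 200)
x = np.concatenate([x0, x1])
y = np.array([0] * 200 + [1] * 200)

def learn_threshold(x, y):
    # Midpoint between class means: the boundary of a nearest-mean classifier.
    return (x[y == 0].mean() + x[y == 1].mean()) / 2

# Poisoning: the attacker flips 80 class-0 labels to class 1.
y_poisoned = y.copy()
y_poisoned[:80] = 1

t_clean = learn_threshold(x, y)
t_poisoned = learn_threshold(x, y_poisoned)
print(f"clean boundary ~ {t_clean:.2f}, poisoned boundary ~ {t_poisoned:.2f}")
```

The poisoned boundary is dragged toward class 0, so clean class-0 inputs near the boundary start being misclassified; backdoor variants instead inject trigger-bearing points so the shift only activates on attacker-chosen inputs.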
Model Stealing
Process by which an attacker illicitly extracts or replicates a proprietary machine learning model by exploiting information exposed through its API or predictive behavior; often used interchangeably with model extraction.
Property Inference Attack
Attack aimed at inferring global properties of the training dataset, such as class distributions or correlations, without directly accessing the data.
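A shadow-model sketch of the idea, under assumed toy conditions: the attacker trains shadow models on datasets with known class balances, observes how a model parameter (here the logistic intercept) encodes that global property, then reads the property off the target model.

```python
import numpy as np

rng = np.random.default_rng(4)

def train_logistic(X, y, steps=300, lr=0.5):
    # Logistic regression with an intercept, fitted by gradient descent.
    Xb = np.hstack([X, np.ones((len(X), 1))])
    w = np.zeros(X.shape[1] + 1)
    for _ in range(steps):
        p = 1 / (1 + np.exp(-(Xb @ w)))
        w -= lr * (Xb.T @ (p - y)) / len(X)
    return w

def make_data(frac_pos, n=400):
    # Class means at -1 and +1; frac_pos is the hidden global property.
    y = (rng.random(n) < frac_pos).astype(float)
    X = rng.normal(size=(n, 1)) + 2 * y[:, None] - 1
    return X, y

# Shadow phase: intercepts observed at two known class balances.
w_lo = train_logistic(*make_data(0.2))
w_hi = train_logistic(*make_data(0.8))

# Target model trained on a secret balance (0.7); compare intercepts.
w_target = train_logistic(*make_data(0.7))
closer_to_hi = abs(w_target[-1] - w_hi[-1]) < abs(w_target[-1] - w_lo[-1])
guess = "mostly positive" if closer_to_hi else "mostly negative"
print(guess)
```

Note the leaked property (the class ratio) concerns the dataset as a whole, not any individual record, which is what distinguishes this from membership inference.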
Model Watermarking
Intellectual property technique that embeds invisible markers in a machine learning model to identify and prove ownership in case of theft or unauthorized reproduction.
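A trigger-set sketch of one hypothetical scheme: the owner derives secret key inputs from a private seed, trains the model to emit chosen labels on them, and later proves ownership by checking a suspect model's answers on those keys. A 1-nearest-neighbour memoriser stands in for the model; real schemes fine-tune a neural network instead.

```python
import hashlib
import numpy as np

# Secret trigger set derived from the owner's private seed.
secret = b"owner-private-seed"
seed = int.from_bytes(hashlib.sha256(secret).digest()[:8], "big")
rng = np.random.default_rng(seed)
keys = rng.normal(size=(5, 4))        # secret trigger inputs
marks = rng.integers(0, 2, size=5)    # chosen watermark labels

# Stand-in "model": a 1-NN memoriser trained on ordinary data plus the
# trigger set (a stand-in for fine-tuning a network on the triggers).
train_X = np.vstack([rng.normal(size=(50, 4)), keys])
train_y = np.concatenate([rng.integers(0, 2, size=50), marks])

def model(x):
    return train_y[np.argmin(np.linalg.norm(train_X - x, axis=1))]

# Verification: only a model trained with the triggers reproduces the
# secret labels at a rate far above chance.
match = np.mean([model(k) == m for k, m in zip(keys, marks)])
print(f"watermark match rate: {match:.0%}")
```

Because the keys and labels are derived from a secret, an independent model matching them all is overwhelmingly unlikely, which is what gives the ownership claim its evidential weight.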
Gradient Leakage
Vulnerability where shared gradients during distributed or federated training can reveal sensitive information about participants' local training data.
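A concrete instance of the leak for a single example through a linear layer y = Wx + b: the shared gradients satisfy dL/dW = (dL/db)·xᵀ, so an honest-but-curious aggregator can recover the private input x exactly by dividing one gradient by the other.

```python
import numpy as np

rng = np.random.default_rng(3)

# Participant's private training input and the layer's parameters.
x_private = rng.normal(size=4)
W, b = rng.normal(size=(3, 4)), rng.normal(size=3)

# Simulate the gradients the participant would share (squared-error loss
# against an arbitrary target; the exact loss does not matter for the leak).
target = rng.normal(size=3)
err = W @ x_private + b - target      # dL/dy for 0.5 * ||y - target||^2
grad_W = np.outer(err, x_private)     # dL/dW = (dL/db) x^T
grad_b = err                          # dL/db

# Attacker reconstructs x from any row with a nonzero bias gradient.
i = np.argmax(np.abs(grad_b))
x_recovered = grad_W[i] / grad_b[i]
print(np.allclose(x_recovered, x_private))  # exact recovery
```

This is why federated-learning deployments combine gradient sharing with defenses such as secure aggregation, gradient clipping, or differential-privacy noise.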
Cryptographic Primitives
Fundamental cryptographic operations such as encryption, decryption, hash functions, and digital signatures used as building blocks to construct complex security protocols.
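Two such primitives are available directly in Python's standard library: a cryptographic hash (hashlib) and a keyed MAC built on it (hmac), combined here into a tiny integrity-check sketch.

```python
import hashlib
import hmac

message = b"model weights v1.2"

# Primitive 1: a collision-resistant hash fingerprints the message.
digest = hashlib.sha256(message).hexdigest()

# Primitive 2: an HMAC ties the fingerprint to a shared secret key,
# so only key holders can produce a valid tag.
key = b"shared-secret"
tag = hmac.new(key, message, hashlib.sha256).hexdigest()

# Verification recomputes the tag and compares in constant time to
# avoid timing side channels.
expected = hmac.new(key, message, hashlib.sha256).hexdigest()
ok = hmac.compare_digest(tag, expected)
print(len(digest), ok)
```

Higher-level protocols (TLS, signed model artifacts, secure aggregation) are assembled from exactly these kinds of building blocks.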