AI Glossary

The complete dictionary of Artificial Intelligence

162 categories · 2,032 subcategories · 23,060 terms
Adversarial Attack

Intentional manipulation of input data to deceive an AI model and cause classification errors or unexpected behaviors. These attacks exploit mathematical vulnerabilities of neural networks by introducing perturbations that are imperceptible to humans but reliably mislead the model.
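
A minimal illustration is the Fast Gradient Sign Method (FGSM), which perturbs the input along the sign of the loss gradient. The sketch below applies it to a toy logistic-regression "model"; the weights, input, and epsilon are invented for illustration, and the epsilon is exaggerated so the flip is visible in two dimensions (real attacks use much smaller perturbations in high-dimensional inputs):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, y, w, b, eps):
    """FGSM: step the input along the sign of the loss gradient.

    For binary cross-entropy on a logistic model, dL/dx = (p - y) * w.
    """
    p = sigmoid(np.dot(w, x) + b)
    grad_x = (p - y) * w
    return x + eps * np.sign(grad_x)

# Toy model and a point it correctly classifies as class 1.
w, b = np.array([2.0, -1.0]), 0.0
x, y = np.array([1.0, 0.5]), 1.0

x_adv = fgsm_perturb(x, y, w, b, eps=0.9)
clean_p = sigmoid(np.dot(w, x) + b)    # above 0.5: class 1
adv_p = sigmoid(np.dot(w, x_adv) + b)  # below 0.5: flipped to class 0
```

With this perturbation budget the adversarial point crosses the decision boundary, so the model's prediction flips even though the input moved only slightly.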

Ethical Robustness

Ability of an AI system to maintain its ethical principles and fair behaviors in the face of manipulation attempts or unexpected conditions. It ensures the preservation of the system's moral values even under stress or algorithmic attack.

Adversarial Defense

Set of techniques aimed at strengthening AI models against adversarial attacks, including adversarial training, anomaly detection, and input purification. These methods aim to maintain the functional and ethical integrity of the system in the face of subversion attempts.

Data Poisoning

Malicious insertion of corrupted data into the training set to compromise the model's future performance and introduce systemic biases. This technique can intentionally degrade the ethical and decision-making capabilities of the AI system.
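
A toy sketch of one poisoning variant, label flipping, against a simple nearest-centroid classifier (the dataset and the flipped point are invented for illustration): corrupting a single boundary label shifts the class centroids enough that a point the clean model classified correctly is now misclassified.

```python
def nearest_centroid(train):
    """train: list of (x, label) pairs in 1-D. Predict by nearest class centroid."""
    xs0 = [x for x, lbl in train if lbl == 0]
    xs1 = [x for x, lbl in train if lbl == 1]
    c0, c1 = sum(xs0) / len(xs0), sum(xs1) / len(xs1)
    return lambda x: 0 if abs(x - c0) < abs(x - c1) else 1

clean_data = [(-3, 0), (-2, 0), (-1, 0), (1, 1), (2, 1), (3, 1)]
clean = nearest_centroid(clean_data)

# Poisoning: the attacker flips the label of one boundary point (-1).
poisoned_data = [(x, 1 if x == -1 else lbl) for x, lbl in clean_data]
poisoned = nearest_centroid(poisoned_data)

# The clean model classifies -0.4 as class 0; the poisoned one calls it class 1.
```

One flipped label out of six training points is enough to move both centroids and silently degrade the model's decisions near the boundary.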

Model Evasion

Attack strategy where specially crafted inputs allow bypassing the detection or classification mechanisms of an AI model. Evasion directly threatens ethical robustness by allowing the violation of established rules and moral constraints.

Ethical Perturbation

Subtle modification of inputs or parameters aimed specifically at compromising the ethical decision-making mechanisms of an AI system. These attacks target moral-judgment layers to induce behaviors that do not conform to the system's programmed values.

Ethical Stability

Measure of the consistency of an AI system's ethical decisions in the face of minor variations in input or environmental conditions. Stability ensures that moral judgments remain constant and predictable despite contextual fluctuations.

Algorithmic Resilience

Ability of an AI system to recover and maintain its ethical performance after experiencing significant attacks or perturbations. Resilience includes self-correction and adaptation mechanisms to preserve long-term moral integrity.

Ethical Security

Field of AI cybersecurity specialized in protecting ethical decision-making mechanisms from manipulations and compromises. It combines cryptographic techniques, formal validation, and behavioral monitoring to ensure moral integrity.

Ethical Vulnerability

Weakness in the architecture or implementation of an AI system that can be exploited to violate its fundamental ethical principles. These vulnerabilities may reside in the decision-making, validation, or moral control layers of the system.

Robustness Testing

Systematic evaluation of an AI system's ability to maintain its ethical behaviors in the face of extreme or hostile scenarios. These tests simulate various types of attacks and perturbations to identify and correct moral weaknesses.
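
One concrete form such a test can take (a sketch; the threshold model, tolerance, and inputs are invented): perturb each input many times within a small radius and measure what fraction of decisions never change. Decisions sitting near the boundary fail the test.

```python
import random

def classify(score):
    """Stand-in decision rule: approve (1) when the risk score is below 0.5."""
    return 1 if score < 0.5 else 0

def robustness_test(model, inputs, eps, trials=200, seed=0):
    """Fraction of inputs whose decision survives every eps-perturbation tried."""
    rng = random.Random(seed)
    stable = 0
    for x in inputs:
        base = model(x)
        if all(model(x + rng.uniform(-eps, eps)) == base for _ in range(trials)):
            stable += 1
    return stable / len(inputs)

inputs = [0.1, 0.3, 0.49, 0.51, 0.9]
rate = robustness_test(classify, inputs, eps=0.05)  # the two boundary points flip
```

Here three of the five inputs keep their decision under all sampled perturbations; the two scores within the perturbation radius of the 0.5 threshold are flagged as unstable.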

Ethical Validation

Formal process of verifying that an AI system consistently respects its ethical constraints even under adversarial conditions. Validation combines statistical tests, formal verification, and behavioral audits to ensure moral compliance.

Ethical Countermeasure

Proactive or reactive mechanism designed to prevent or neutralize attempts to compromise the ethical principles of an AI system. Such countermeasures include anomaly detection, isolation of compromised decision paths, and ethical recovery procedures.

Adverse Inference

Process by which an attacker exploits vulnerabilities in an AI model to infer sensitive information or force unethical decisions. Adverse inference directly threatens the confidentiality and moral integrity of the system.

Distributional Robustness

Ability of an AI system to maintain its ethical performance in the face of changes in the distribution of input data or operational conditions. This robustness ensures the stability of moral decisions despite distributional drifts.

Extraction Attack

Technique aimed at replicating the behavior of an AI model, including its biases and ethical vulnerabilities, by systematically querying it. These attacks can reveal and exploit the moral weaknesses of the original system.
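
For a purely linear model, the principle can be shown in a few lines (the target function here is a made-up black box the attacker may only query): d + 1 well-chosen queries recover the bias and every weight exactly.

```python
def target(x1, x2):
    """Black-box model: the attacker can query it but cannot inspect it."""
    return 3.0 * x1 - 2.0 * x2 + 1.0

# A linear model in d dimensions leaks completely in d + 1 queries:
# probe the origin for the bias, then each input axis for one weight.
b = target(0.0, 0.0)
w1 = target(1.0, 0.0) - b
w2 = target(0.0, 1.0) - b
```

Real models are nonlinear, so extraction instead fits a surrogate to many query-response pairs, but the threat is the same: the replica inherits the original's behavior, biases, and vulnerabilities.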

Ethical Certification

A formal process attesting that an AI system maintains its ethical guarantees under defined conditions, including in the face of attacks. Ethical certification validates the robustness of moral decision-making mechanisms according to recognized standards.

Adversarial Training

A training method where the model simultaneously learns to resist attacks and maintain its ethical principles. This approach enhances robustness by exposing the system to hostile scenarios during its learning.
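
A compact sketch of the idea on logistic regression (the dataset, step size, and perturbation budget are all illustrative): at every step, the training inputs are replaced by FGSM-style worst-case versions of themselves before the gradient update, so the model learns to classify correctly even under perturbation.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def adversarial_train(X, y, eps=0.3, lr=0.5, epochs=200):
    """Logistic regression trained on FGSM-perturbed copies of the data."""
    rng = np.random.default_rng(1)
    w, b = rng.normal(size=X.shape[1]), 0.0
    for _ in range(epochs):
        p = sigmoid(X @ w + b)
        # Worst-case inputs: move each point against its own label.
        X_adv = X + eps * np.sign((p - y)[:, None] * w)
        p_adv = sigmoid(X_adv @ w + b)
        w -= lr * X_adv.T @ (p_adv - y) / len(y)
        b -= lr * np.mean(p_adv - y)
    return w, b

X = np.array([[-2.0, 0.0], [-1.5, 0.5], [1.5, -0.5], [2.0, 0.0]])
y = np.array([0.0, 0.0, 1.0, 1.0])
w, b = adversarial_train(X, y)
```

Because every gradient step is computed on the perturbed inputs, the learned boundary keeps a margin of at least the perturbation budget around the training points, rather than merely separating the clean data.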
