
AI Glossary

The complete dictionary of Artificial Intelligence

162 categories · 2,032 subcategories · 23,060 terms

📖 terms

XAI (Explainable AI)

Set of techniques and methods aimed at making the decisions of artificial-intelligence systems understandable to humans, essential for trust and acceptance.

Interpretability

Ability of a model to present its decisions in a form humans can understand, as distinct from transparency, which concerns understanding of the model's internal mechanisms.

Post-hoc explanations

Explanation methods applied after model training, without modifying its architecture, making it possible to explain the predictions of black-box models.

SHAP (SHapley Additive exPlanations)

Game-theoretic approach that assigns an importance to each feature in a model's prediction in an additive and consistent manner.
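As a minimal sketch of the idea, the snippet below computes exact Shapley values for a hypothetical two-feature model by averaging each feature's marginal contribution over all orderings of the feature set (the toy model and the feature names `age` and `income` are invented for illustration, not part of any real SHAP API):

```python
from itertools import permutations

BASE = 10.0  # prediction when no features are known

def model(features):
    """Toy additive model over an optional subset of features."""
    pred = BASE
    if "age" in features:
        pred += 2.0 * features["age"]
    if "income" in features:
        pred += 0.5 * features["income"]
    return pred

def shapley_values(instance):
    """Exact Shapley values: average each feature's marginal
    contribution to the prediction over all feature orderings."""
    names = list(instance)
    totals = {n: 0.0 for n in names}
    orderings = list(permutations(names))
    for order in orderings:
        present = {}
        for name in order:
            before = model(present)
            present[name] = instance[name]
            totals[name] += model(present) - before
    return {n: t / len(orderings) for n, t in totals.items()}

phi = shapley_values({"age": 3.0, "income": 4.0})
# Additivity: BASE + sum of the phi values recovers the full prediction.
print(phi)  # → {'age': 6.0, 'income': 2.0}
```

Enumerating all orderings is exponential in the number of features; practical SHAP implementations approximate this average by sampling.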

LIME (Local Interpretable Model-agnostic Explanations)

Local explanation technique that approximates the behavior of a complex model with a simple, interpretable model in the neighborhood of a specific prediction.
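The core idea can be sketched in one dimension: sample perturbations around the instance, weight them by proximity, and fit a weighted linear surrogate. Everything below (the black-box function, kernel width, sample count) is an invented illustration, not the actual LIME library:

```python
import math
import random

def black_box(x):
    """Hypothetical nonlinear model to be explained locally."""
    return x * x

def lime_1d(x0, n_samples=2000, width=0.5, seed=0):
    """Fit a proximity-weighted linear surrogate around x0 and
    return its (slope, intercept) via closed-form weighted
    least squares."""
    rng = random.Random(seed)
    xs = [x0 + rng.gauss(0.0, 1.0) for _ in range(n_samples)]
    ys = [black_box(x) for x in xs]
    # Exponential proximity kernel: nearby samples matter more.
    ws = [math.exp(-((x - x0) ** 2) / (2 * width ** 2)) for x in xs]
    sw = sum(ws)
    mx = sum(w * x for w, x in zip(ws, xs)) / sw
    my = sum(w * y for w, y in zip(ws, ys)) / sw
    cov = sum(w * (x - mx) * (y - my) for w, x, y in zip(ws, xs, ys)) / sw
    var = sum(w * (x - mx) ** 2 for w, x in zip(ws, xs)) / sw
    slope = cov / var
    return slope, my - slope * mx

slope, _ = lime_1d(2.0)
print(slope)  # close to the true local gradient 2 * x0 = 4
```

The surrogate's slope is the explanation: it tells the user how the black box responds to the feature near this particular input, even though globally the model is nonlinear.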

Influence factors

Specific elements (items, attributes, past behaviors) that have directly contributed to the generation of a particular recommendation in a system.

Counter-explanations

Explanations that justify why certain items were not recommended, helping users understand the limitations and exclusion criteria of the system.

Causal justifications

Explanations based on cause-effect relationships between user actions and generated recommendations, rather than simple correlations.

Knowledge-based approaches

Recommendation methods using ontologies or knowledge graphs to generate semantically rich and contextually relevant explanations.

Explanation visualizations

Interactive graphical representations that transform algorithmic justifications into intuitive visual elements to facilitate user understanding.

Explanation personalization

Adaptation of the content, style, and level of detail of explanations according to each user's profile, preferences, and expertise.

Qualitative evaluations of explanations

Evaluation methods based on user studies, interviews, and content analysis to measure the relevance and perceived usefulness of explanations.

Explanatory feedback

Mechanism that lets users react to the explanations they receive, thereby refining future recommendations and the quality of justifications.

Explanatory complexity

Measure of the cognitive difficulty required to understand an explanation, evaluating the trade-off between technical accuracy and user accessibility.

Algorithmic transparency

Principle of revealing the underlying mechanisms, data, and logic of a recommendation system to ensure its traceability and auditability.

Algorithmic trust

Level of credibility and reliability perceived by users towards a system, directly influenced by the quality and relevance of the provided explanations.

Intrinsic explanations

Explanations produced by models designed from the outset to be interpretable, with explanation capabilities built in natively, unlike post-hoc approaches.

Explanatory association rules

Sets of logical rules (IF-THEN) that justify recommendations by showing the discovered relationships between behaviors and items.
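A minimal sketch of how such a rule is scored: given hypothetical transaction data (the items and thresholds below are invented), the support and confidence of an IF-THEN rule are simple counts over the transactions.

```python
# Hypothetical transactions: the item sets each user interacted with.
transactions = [
    {"sci-fi", "space opera"},
    {"sci-fi", "space opera", "thriller"},
    {"sci-fi", "thriller"},
    {"romance"},
]

def rule_stats(antecedent, consequent, data):
    """Support and confidence of the rule IF antecedent THEN consequent.

    support    = P(antecedent AND consequent)
    confidence = P(consequent | antecedent)
    """
    n_ante = sum(1 for t in data if antecedent <= t)
    n_both = sum(1 for t in data if antecedent <= t and consequent <= t)
    support = n_both / len(data)
    confidence = n_both / n_ante if n_ante else 0.0
    return support, confidence

# Rule: "IF the user liked sci-fi THEN recommend space opera"
s, c = rule_stats({"sci-fi"}, {"space opera"}, transactions)
print(s, c)  # → 0.5 0.6666666666666666
```

A rule with high confidence can be surfaced verbatim as the explanation ("recommended because you liked sci-fi"), which is what makes this family of methods inherently explainable.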

Explanatory bias

Systematic distortions in generated explanations that may over-represent certain factors or downplay others, affecting users' perception of the system's fairness.
