AI Glossary

The complete dictionary of Artificial Intelligence

162 categories · 2,032 subcategories · 23,060 terms

Disparate Impact

Statistical measure quantifying the differential impact of an algorithmic decision on protected groups, calculated as the ratio between the selection rates of disadvantaged and favored groups.
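For binary predictions and a binary group attribute, the ratio can be sketched in a few lines of NumPy. The 0/1 group convention below (1 = privileged group) is an illustrative assumption, not a standard:

```python
import numpy as np

def disparate_impact(y_pred, group):
    """Ratio of selection rates: P(yhat=1 | unprivileged) / P(yhat=1 | privileged).

    Assumes y_pred is a binary prediction array and group is a binary
    array where 1 marks the privileged group (illustrative convention).
    """
    y_pred, group = np.asarray(y_pred), np.asarray(group)
    rate_unprivileged = y_pred[group == 0].mean()
    rate_privileged = y_pred[group == 1].mean()
    return rate_unprivileged / rate_privileged

# Example: 2/4 selected in the unprivileged group vs 3/4 in the privileged one
print(disparate_impact([1, 1, 0, 0, 1, 1, 1, 0],
                       [0, 0, 0, 0, 1, 1, 1, 1]))  # 0.5 / 0.75
```

A common rule of thumb (the "four-fifths rule") flags a ratio below 0.8 as potentially discriminatory.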

📖
terms

Statistical Parity Difference

Metric evaluating the difference between positive-prediction probabilities for different demographic groups; a difference of zero indicates perfect statistical parity.
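A minimal sketch, again assuming binary predictions and a binary group flag with 1 as the (illustrative) privileged group:

```python
import numpy as np

def statistical_parity_difference(y_pred, group):
    """P(yhat=1 | unprivileged) - P(yhat=1 | privileged); 0 means parity."""
    y_pred, group = np.asarray(y_pred), np.asarray(group)
    return y_pred[group == 0].mean() - y_pred[group == 1].mean()

# Same example data as for disparate impact: 0.5 - 0.75 = -0.25
print(statistical_parity_difference([1, 1, 0, 0, 1, 1, 1, 0],
                                    [0, 0, 0, 0, 1, 1, 1, 1]))
```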

Equal Opportunity Difference

Indicator measuring the difference in true positive rates between groups, ensuring that qualified individuals have equal chances of being correctly identified regardless of their group membership.
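The true-positive-rate gap can be sketched as follows (assumes both groups contain at least one actual positive; the 0/1 group convention is illustrative):

```python
import numpy as np

def equal_opportunity_difference(y_true, y_pred, group):
    """TPR(unprivileged) - TPR(privileged), restricted to actual positives."""
    y_true, y_pred, group = map(np.asarray, (y_true, y_pred, group))

    def tpr(g):
        # Fraction of actual positives in group g that were predicted positive
        mask = (group == g) & (y_true == 1)
        return y_pred[mask].mean()

    return tpr(0) - tpr(1)

# Unprivileged positives: 2 of 4 found (TPR 0.5); privileged: 3 of 4 (TPR 0.75)
print(equal_opportunity_difference([1] * 8,
                                   [1, 1, 0, 0, 1, 1, 1, 0],
                                   [0, 0, 0, 0, 1, 1, 1, 1]))
```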

Average Odds Difference

Metric combining the differences in true positive rates and false positive rates between groups to evaluate the overall fairness of classification predictions.
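A sketch of the combination, as the mean of the TPR gap and the FPR gap (same illustrative binary-group convention as above):

```python
import numpy as np

def average_odds_difference(y_true, y_pred, group):
    """Mean of the TPR gap and the FPR gap between the two groups."""
    y_true, y_pred, group = map(np.asarray, (y_true, y_pred, group))

    def rate(g, label):
        # Positive-prediction rate within group g among examples whose
        # true label is `label` (label=1 gives TPR, label=0 gives FPR)
        mask = (group == g) & (y_true == label)
        return y_pred[mask].mean()

    tpr_gap = rate(0, 1) - rate(1, 1)
    fpr_gap = rate(0, 0) - rate(1, 0)
    return 0.5 * (tpr_gap + fpr_gap)

# TPR gap = 0.5 - 1.0 = -0.5, FPR gap = 0.5 - 0.5 = 0, average = -0.25
print(average_odds_difference([1, 1, 0, 0, 1, 1, 0, 0],
                              [1, 0, 1, 0, 1, 1, 1, 0],
                              [0, 0, 0, 0, 1, 1, 1, 1]))
```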

Theil Index

Information-theoretic inequality measure quantifying how far the distribution of predictions diverges from a perfectly fair distribution; sensitive to systemic biases.
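One common fairness formulation (the generalized entropy index with α = 1 over per-individual benefits b_i = ŷ_i − y_i + 1, as in Speicher et al. and AIF360) can be sketched as:

```python
import numpy as np

def theil_index(y_true, y_pred):
    """Theil index over per-individual benefits b_i = yhat_i - y_i + 1.

    0 means the benefit of the classifier is spread perfectly evenly
    across individuals; larger values indicate more inequality.
    """
    b = np.asarray(y_pred, dtype=float) - np.asarray(y_true, dtype=float) + 1.0
    mu = b.mean()
    ratio = b / mu
    # By convention 0 * log(0) is taken as 0
    with np.errstate(divide="ignore", invalid="ignore"):
        terms = np.where(ratio > 0, ratio * np.log(ratio), 0.0)
    return terms.mean()

print(theil_index([1, 0, 1], [1, 0, 1]))  # perfect predictions -> 0.0
```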

Jensen-Shannon Divergence

Symmetric metric measuring the dissimilarity between prediction distributions for different groups, used to detect subtle algorithmic discrimination.
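Applied to normalized histograms of the two groups' prediction scores, it can be computed directly. A minimal NumPy sketch, using base-2 logarithms so the result lies in [0, 1]:

```python
import numpy as np

def js_divergence(p, q):
    """Jensen-Shannon divergence between two discrete distributions.

    Inputs are (unnormalized) histograms; base-2 logs bound the
    result to [0, 1], with 0 for identical distributions.
    """
    p = np.asarray(p, dtype=float); p = p / p.sum()
    q = np.asarray(q, dtype=float); q = q / q.sum()
    m = 0.5 * (p + q)  # the mixture distribution

    def kl(a, b):
        mask = a > 0  # 0 * log(0) treated as 0
        return np.sum(a[mask] * np.log2(a[mask] / b[mask]))

    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

print(js_divergence([1, 0], [0, 1]))  # disjoint distributions -> 1.0
```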

Counterfactual Fairness

Fairness principle requiring that the prediction for an individual remain unchanged if their protected attributes were counterfactually modified, evaluated through sensitivity tests.

Individual Fairness Metric

Measure ensuring that individuals who are similar on the relevant characteristics receive equivalent algorithmic treatment, quantified by an appropriate distance metric.

Group Fairness Metric

Set of statistical indicators evaluating fairness at the level of demographic populations rather than of individuals, including demographic parity and equalized odds.

Demographic Parity

Fairness principle requiring that positive prediction rates be identical between different demographic groups, regardless of actual individual characteristics.

Equalized Odds

Strict fairness condition requiring equality of true positive and false positive rates across all groups, ensuring uniform predictive performance.
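Equalized odds can be checked by computing both gaps at once; the condition holds when both are (approximately) zero. A sketch under the same illustrative binary-group convention:

```python
import numpy as np

def equalized_odds_gaps(y_true, y_pred, group):
    """Return (TPR gap, FPR gap) between the two groups.

    Equalized odds requires both gaps to be (approximately) zero.
    """
    y_true, y_pred, group = map(np.asarray, (y_true, y_pred, group))

    def rate(g, label):
        mask = (group == g) & (y_true == label)
        return y_pred[mask].mean()

    tpr_gap = rate(0, 1) - rate(1, 1)
    fpr_gap = rate(0, 0) - rate(1, 0)
    return tpr_gap, fpr_gap

# Both groups have TPR = 0.5 and FPR = 0.5, so both gaps are 0
print(equalized_odds_gaps([1, 1, 0, 0, 1, 1, 0, 0],
                          [1, 0, 1, 0, 1, 0, 1, 0],
                          [0, 0, 0, 0, 1, 1, 1, 1]))
```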

Calibration Difference

Metric quantifying calibration gaps between groups, measuring whether predicted probability scores correspond to actual frequencies for each subpopulation.

False Positive Rate Disparity

Indicator measuring inequality of false positive rates between groups, crucial for evaluating discrimination in binary classification systems.

False Negative Rate Disparity

Metric quantifying false negative rate differences between populations, essential for detecting systematic under-representation in positive predictions.

Selection Rate Difference

Simple disparity measure calculating the absolute difference between group selection rates, used as an initial indicator of potential discrimination.

Mutual Information Bias

Quantification of dependence between protected attributes and model predictions, using information theory to detect discriminatory correlations.
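For discrete attributes and predictions, the mutual information can be estimated directly from the empirical joint distribution. A minimal sketch (in nats; 0 means the predictions carry no information about the protected attribute):

```python
import numpy as np

def mutual_information(a, b):
    """Mutual information (in nats) between two discrete arrays,
    estimated from their empirical joint distribution."""
    a, b = np.asarray(a), np.asarray(b)
    mi = 0.0
    for av in np.unique(a):
        for bv in np.unique(b):
            p_ab = np.mean((a == av) & (b == bv))  # joint probability
            p_a = np.mean(a == av)                 # marginals
            p_b = np.mean(b == bv)
            if p_ab > 0:
                mi += p_ab * np.log(p_ab / (p_a * p_b))
    return mi

# Predictions identical to the protected attribute -> maximal dependence
print(mutual_information([0, 0, 1, 1], [0, 0, 1, 1]))  # log(2) nats
```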

Kolmogorov-Smirnov Test for Fairness

Non-parametric statistical test comparing the distributions of prediction scores between groups to identify statistically significant algorithmic discrimination.
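In practice this is a direct use of SciPy's two-sample KS test on the per-group score arrays. The score distributions below are synthetic, purely for illustration:

```python
import numpy as np
from scipy.stats import ks_2samp

# Synthetic prediction scores for two demographic groups
rng = np.random.default_rng(0)
scores_group_a = rng.normal(0.60, 0.10, size=500)
scores_group_b = rng.normal(0.50, 0.10, size=500)

res = ks_2samp(scores_group_a, scores_group_b)
# res.statistic is the maximum gap between the two empirical CDFs;
# a small res.pvalue indicates the score distributions differ significantly
print(res.statistic, res.pvalue)
```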

Wasserstein Distance for Fairness

Distance metric measuring the minimal effort to transform the prediction distribution of one group into that of another, quantifying overall inequality.
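For one-dimensional score samples, SciPy computes this directly; the distance equals the area between the two empirical CDFs (the "earth mover's distance"):

```python
from scipy.stats import wasserstein_distance

# Every point of the first sample must move by 0.1 to match the second,
# so the distance is 0.1
d = wasserstein_distance([0.2, 0.4, 0.6], [0.3, 0.5, 0.7])
print(d)
```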

Entropy-Based Bias Metric

Metric using entropy to measure the uncertainty and diversity of predictions, detecting biases through analysis of the output distribution.

Consistency Score

Individual-fairness metric evaluating whether similar individuals receive consistent predictions, typically computed by comparing each prediction with those of the individual's nearest neighbors in feature space.
