AI Glossary
The complete AI glossary
Disparate Impact
Statistical measure quantifying the differential impact of an algorithmic decision on protected groups, calculated as the ratio between the selection rates of disadvantaged and favored groups.
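A minimal sketch of the computation (assuming NumPy arrays; `group` is a hypothetical boolean mask marking the disadvantaged group). A ratio below 0.8 is often compared against the "four-fifths rule" used in US employment law as a rough flag for adverse impact:

```python
import numpy as np

def disparate_impact(y_pred, group):
    """Ratio of the disadvantaged group's selection rate to the favored group's."""
    return y_pred[group].mean() / y_pred[~group].mean()

y_pred = np.array([1, 0, 1, 0, 1, 1, 1, 0])   # 1 = selected
group  = np.array([True, True, True, True, False, False, False, False])
print(disparate_impact(y_pred, group))  # 0.5 / 0.75 ≈ 0.67, below the 0.8 threshold
```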
Statistical Parity Difference
Metric evaluating the difference between positive prediction probabilities for different demographic groups; perfect statistical parity is achieved when the difference is zero.
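The same data layout as in the disparate-impact sketch above, but as a difference rather than a ratio:

```python
import numpy as np

def statistical_parity_difference(y_pred, group):
    """P(Y_hat = 1 | disadvantaged) - P(Y_hat = 1 | favored); 0 is perfect parity."""
    return y_pred[group].mean() - y_pred[~group].mean()
    # With the arrays from the previous example this returns 0.5 - 0.75 = -0.25.
```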
Equal Opportunity Difference
Indicator measuring the difference in true positive rates between groups, ensuring that qualified individuals have equal chances of being correctly identified regardless of their group membership.
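A sketch under the same assumptions (binary 0/1 labels and predictions, `group` a hypothetical disadvantaged-group mask); for 0/1 predictions, the mean over actual positives is exactly the true positive rate:

```python
import numpy as np

def tpr(y_true, y_pred):
    return y_pred[y_true == 1].mean()   # fraction of actual positives predicted positive

def equal_opportunity_difference(y_true, y_pred, group):
    """TPR(disadvantaged) - TPR(favored); 0 means equal opportunity."""
    return tpr(y_true[group], y_pred[group]) - tpr(y_true[~group], y_pred[~group])
```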
Average Odds Difference
Metric averaging the differences in true positive rates and false positive rates between groups to evaluate the overall fairness of classification predictions.
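A minimal sketch following the usual convention of taking the mean of the two gaps:

```python
import numpy as np

def tpr_fpr(y_true, y_pred):
    return y_pred[y_true == 1].mean(), y_pred[y_true == 0].mean()

def average_odds_difference(y_true, y_pred, group):
    """Mean of the TPR gap and the FPR gap between groups; 0 is ideal."""
    tpr_d, fpr_d = tpr_fpr(y_true[group], y_pred[group])
    tpr_f, fpr_f = tpr_fpr(y_true[~group], y_pred[~group])
    return 0.5 * ((tpr_d - tpr_f) + (fpr_d - fpr_f))
```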
Theil Index
Inequality measure from information theory quantifying how unevenly a model's per-individual benefits are distributed compared with a perfectly even distribution; sensitive to systemic biases.
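A sketch using one common benefit convention, b_i = ŷ_i − y_i + 1 (introduced by Speicher et al., 2018); other benefit definitions are possible and change the result:

```python
import numpy as np

def theil_index(y_true, y_pred):
    """Theil index over per-individual benefits b_i = y_hat_i - y_i + 1;
    0 means the benefits are perfectly evenly distributed."""
    b = np.asarray(y_pred, float) - np.asarray(y_true, float) + 1.0
    r = b / b.mean()
    # x * ln(x) -> 0 as x -> 0, so zero-benefit terms contribute nothing.
    return np.mean(np.where(r > 0, r * np.log(np.where(r > 0, r, 1.0)), 0.0))
```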
Jensen-Shannon Divergence
Symmetric metric measuring the dissimilarity between prediction distributions for different groups, used to detect subtle algorithmic discrimination.
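A minimal sketch over two discrete prediction distributions, built from the KL divergence (`scipy.stats.entropy(p, q)` computes KL); note that `scipy.spatial.distance.jensenshannon` returns the square root of this quantity, the JS distance:

```python
import numpy as np
from scipy.stats import entropy

def js_divergence(p, q):
    """JSD(P, Q) = 0.5*KL(P||M) + 0.5*KL(Q||M) with M = (P+Q)/2.
    Symmetric and bounded by ln 2 (in nats)."""
    p = np.asarray(p, float); p = p / p.sum()
    q = np.asarray(q, float); q = q / q.sum()
    m = 0.5 * (p + q)
    return 0.5 * entropy(p, m) + 0.5 * entropy(q, m)
```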
Counterfactual Fairness
Fairness principle requiring that the prediction for an individual remain unchanged if their protected attributes were counterfactually modified, evaluated through sensitivity tests.
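A simplified sensitivity test, assuming a scikit-learn-style classifier exposing `predict_proba` and a binary protected attribute stored as a column of `X` (both assumptions). This only probes direct dependence on the attribute; a full counterfactual-fairness audit would propagate the intervention through a causal model of the features:

```python
import numpy as np

def attribute_flip_gap(model, X, protected_col):
    """Mean shift in predicted probability when the protected attribute is flipped."""
    X_flipped = X.copy()
    X_flipped[:, protected_col] = 1 - X_flipped[:, protected_col]
    p = model.predict_proba(X)[:, 1]
    p_cf = model.predict_proba(X_flipped)[:, 1]
    return np.abs(p - p_cf).mean()
```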
Individual Fairness Metric
Measure ensuring that individuals who are similar according to relevant characteristics receive equivalent algorithmic treatment, quantified by an appropriate distance metric.
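One way to operationalize this is a Lipschitz-style check over all pairs, sketched below; the Euclidean feature metric and the constant `L` are assumptions that must be chosen for the task:

```python
import numpy as np
from scipy.spatial.distance import pdist

def lipschitz_violation_rate(X, scores, L=1.0):
    """Fraction of pairs where |f(x) - f(y)| > L * d(x, y):
    similar individuals should receive similar scores."""
    d_x = pdist(X)                                          # pairwise feature distances
    d_f = pdist(np.asarray(scores, float).reshape(-1, 1))   # pairwise score gaps
    return float(np.mean(d_f > L * d_x))
```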
Group Fairness Metric
Set of statistical indicators evaluating fairness at the level of demographic groups rather than of individuals, including demographic parity and equalized odds.
Demographic Parity
Fairness principle requiring that positive prediction rates be identical between different demographic groups, regardless of actual individual characteristics.
Equalized Odds
Strict fairness condition requiring equality of true positive and false positive rates across all groups, ensuring uniform predictive performance.
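A sketch reusing the rate helpers from the average odds difference entry; the key difference is that equalized odds requires each gap to be zero on its own, whereas averaging can let offsetting gaps cancel out:

```python
import numpy as np

def equalized_odds_gaps(y_true, y_pred, group):
    """Return (TPR gap, FPR gap); equalized odds holds when both are ~0."""
    def tpr_fpr(t, p):
        return p[t == 1].mean(), p[t == 0].mean()
    tpr_d, fpr_d = tpr_fpr(y_true[group], y_pred[group])
    tpr_f, fpr_f = tpr_fpr(y_true[~group], y_pred[~group])
    return tpr_d - tpr_f, fpr_d - fpr_f
```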
Calibration Difference
Metric quantifying calibration gaps between groups, measuring whether predicted probability scores correspond to actual frequencies for each subpopulation.
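A sketch using scikit-learn's `calibration_curve` and probability scores `y_prob`; the unweighted per-bin average used here is one simple choice among several, and the binning strategy materially affects the result:

```python
import numpy as np
from sklearn.calibration import calibration_curve

def calibration_difference(y_true, y_prob, group, n_bins=10):
    """Difference between the groups' mean calibration errors
    (per-bin |observed frequency - predicted probability|)."""
    def mean_gap(t, p):
        frac_pos, mean_pred = calibration_curve(t, p, n_bins=n_bins)
        return np.mean(np.abs(frac_pos - mean_pred))
    return mean_gap(y_true[group], y_prob[group]) - mean_gap(y_true[~group], y_prob[~group])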
False Positive Rate Disparity
Indicator measuring inequality of false positive rates between groups, crucial for evaluating discrimination in binary classification systems.
False Negative Rate Disparity
Metric quantifying false negative rate differences between populations, essential for detecting systemic underrepresentation in positive predictions.
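A single sketch covering both of the error-rate disparity entries above, under the same conventions as earlier (binary labels, `group` a hypothetical disadvantaged-group mask):

```python
import numpy as np

def error_rate_disparities(y_true, y_pred, group):
    """(FPR gap, FNR gap) between the disadvantaged and favored groups."""
    def fpr_fnr(t, p):
        fpr = p[t == 0].mean()         # false alarms among actual negatives
        fnr = 1.0 - p[t == 1].mean()   # misses among actual positives
        return fpr, fnr
    fpr_d, fnr_d = fpr_fnr(y_true[group], y_pred[group])
    fpr_f, fnr_f = fpr_fnr(y_true[~group], y_pred[~group])
    return fpr_d - fpr_f, fnr_d - fnr_f
```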
Selection Rate Difference
Simple disparity measure calculating the absolute difference between group selection rates, used as an initial indicator of potential discrimination.
Mutual Information Bias
Quantification of dependence between protected attributes and model predictions, using information theory to detect discriminatory correlations.
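For discrete predictions and a discrete protected attribute, this can be estimated directly with scikit-learn (the arrays below are illustrative):

```python
import numpy as np
from sklearn.metrics import mutual_info_score

y_pred    = np.array([1, 1, 0, 1, 0, 0, 0, 1])
protected = np.array([0, 0, 0, 0, 1, 1, 1, 1])

# Mutual information (in nats) between group membership and predictions;
# 0 would mean the predictions carry no information about the attribute.
print(mutual_info_score(protected, y_pred))
```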
Kolmogorov-Smirnov Test for Fairness
Non-parametric statistical test comparing the distributions of prediction scores between groups to identify statistically significant algorithmic discrimination.
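A sketch with synthetic score distributions (the beta parameters are arbitrary, chosen only to give two slightly shifted samples):

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
scores_a = rng.beta(2.0, 5.0, size=500)   # prediction scores, group A
scores_b = rng.beta(2.5, 5.0, size=500)   # slightly shifted scores, group B

stat, p_value = ks_2samp(scores_a, scores_b)
# A small p-value indicates the score distributions differ significantly.
print(stat, p_value)
```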
Wasserstein Distance for Fairness
Distance metric measuring the minimum cost of transforming the prediction distribution of one group into that of another, quantifying the overall inequality between them.
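The one-dimensional case (also known as the earth mover's distance) is available directly in SciPy; the synthetic samples below mirror the KS example:

```python
import numpy as np
from scipy.stats import wasserstein_distance

rng = np.random.default_rng(0)
scores_a = rng.beta(2.0, 5.0, size=500)
scores_b = rng.beta(2.5, 5.0, size=500)

# 0 means the two groups' score distributions coincide.
print(wasserstein_distance(scores_a, scores_b))
```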
Entropy-Based Bias Metric
Metric using entropy to measure the uncertainty and diversity of predictions, detecting biases through analysis of the output distribution.
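One simple instance is the entropy of the predicted-label distribution, sketched below; comparing it across groups can flag collapsed or heavily skewed outputs for a subpopulation:

```python
import numpy as np
from scipy.stats import entropy

def prediction_entropy(y_pred):
    """Shannon entropy (in nats) of the predicted-label distribution."""
    _, counts = np.unique(y_pred, return_counts=True)
    return entropy(counts / counts.sum())
```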
Consistency Score
Individual fairness metric evaluating the consistency of predictions for similar individuals, typically computed by comparing each individual's prediction with those of its nearest neighbors in feature space.
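A sketch of the k-nearest-neighbor formulation (following Zemel et al., 2013); the choice of k and of the feature metric are assumptions:

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def consistency_score(X, y_pred, k=5):
    """1 - mean |y_hat_i - mean(y_hat over the k nearest neighbors)|;
    1.0 means predictions are fully consistent across similar individuals."""
    nn = NearestNeighbors(n_neighbors=k + 1).fit(X)   # +1: each point is its own neighbor
    _, idx = nn.kneighbors(X)
    neighbor_mean = y_pred[idx[:, 1:]].mean(axis=1)   # drop the self-neighbor
    return 1.0 - float(np.mean(np.abs(y_pred - neighbor_mean)))
```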