
AI Glossary

The complete AI glossary

162 categories · 2,032 subcategories · 23,060 terms

Disparate Impact

Statistical measure quantifying the differential impact of an algorithmic decision on protected groups, calculated as the ratio between the selection rates of disadvantaged and favored groups.
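A minimal sketch in Python (array names `y_pred` and `group` are assumed here, with 1 marking the disadvantaged group):

```python
import numpy as np

def disparate_impact(y_pred, group):
    """Ratio of selection rates: disadvantaged group (group == 1)
    over favored group (group == 0). Values below ~0.8 are often
    flagged under the "four-fifths rule"."""
    y_pred, group = np.asarray(y_pred), np.asarray(group)
    rate_disadvantaged = y_pred[group == 1].mean()
    rate_favored = y_pred[group == 0].mean()
    return rate_disadvantaged / rate_favored

# Group 1 selected at 25%, group 0 at 50% -> ratio 0.5
print(disparate_impact([1, 0, 0, 0, 1, 1, 0, 0],
                       [1, 1, 1, 1, 0, 0, 0, 0]))  # 0.5
```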


Statistical Parity Difference

Metric evaluating the difference between positive prediction probabilities for different demographic groups; perfect statistical parity is achieved when the difference is zero.
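Under the same assumed conventions as above (binary `y_pred`, binary `group`), a sketch:

```python
import numpy as np

def statistical_parity_difference(y_pred, group):
    """P(pred = 1 | group == 1) - P(pred = 1 | group == 0);
    zero means perfect statistical parity."""
    y_pred, group = np.asarray(y_pred), np.asarray(group)
    return y_pred[group == 1].mean() - y_pred[group == 0].mean()

print(statistical_parity_difference([1, 0, 0, 0, 1, 1, 0, 0],
                                    [1, 1, 1, 1, 0, 0, 0, 0]))  # -0.25
```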


Equal Opportunity Difference

Indicator measuring the difference in true positive rates between groups, ensuring that qualified individuals have equal chances of being correctly identified regardless of their group membership.
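A sketch computing the true-positive-rate difference (labels and group encoding assumed binary):

```python
import numpy as np

def equal_opportunity_difference(y_true, y_pred, group):
    """Difference in true positive rates between the two groups;
    zero means qualified individuals are identified at equal rates."""
    y_true, y_pred, group = map(np.asarray, (y_true, y_pred, group))
    tpr_1 = y_pred[(group == 1) & (y_true == 1)].mean()
    tpr_0 = y_pred[(group == 0) & (y_true == 1)].mean()
    return tpr_1 - tpr_0

# Qualified members of group 1 identified half as often as group 0
y_true = [1, 1, 1, 1, 1, 1, 1, 1]
y_pred = [1, 1, 0, 0, 1, 1, 1, 1]
group  = [1, 1, 1, 1, 0, 0, 0, 0]
print(equal_opportunity_difference(y_true, y_pred, group))  # -0.5
```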


Average Odds Difference

Metric combining the differences in true positive rates and false positive rates between groups to evaluate the overall fairness of classification predictions.
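A sketch averaging the TPR and FPR gaps (the common definition; encodings assumed as above):

```python
import numpy as np

def average_odds_difference(y_true, y_pred, group):
    """Mean of the TPR difference and the FPR difference between groups."""
    y_true, y_pred, group = map(np.asarray, (y_true, y_pred, group))
    def rate(g, label):
        return y_pred[(group == g) & (y_true == label)].mean()
    tpr_diff = rate(1, 1) - rate(0, 1)
    fpr_diff = rate(1, 0) - rate(0, 0)
    return 0.5 * (tpr_diff + fpr_diff)

y_true = [1, 1, 0, 0, 1, 1, 0, 0]
y_pred = [1, 0, 0, 0, 1, 1, 0, 0]
group  = [1, 1, 1, 1, 0, 0, 0, 0]
print(average_odds_difference(y_true, y_pred, group))  # -0.25
```

Because the two differences are averaged with their signs, offsetting TPR and FPR disparities can cancel; the equalized-odds gap below does not have this property.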


Theil Index

Inequality measure based on information theory quantifying the divergence between the distribution of predictions and a perfectly fair distribution, sensitive to systemic biases.
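A sketch using the generalized entropy index with alpha = 1 over per-individual "benefits"; the benefit definition `b_i = y_pred_i - y_true_i + 1` is one common convention, assumed here:

```python
import numpy as np

def theil_index(y_true, y_pred):
    """Theil index over per-individual benefits b_i = pred_i - true_i + 1;
    0 means a perfectly even benefit distribution."""
    b = np.asarray(y_pred, float) - np.asarray(y_true, float) + 1.0
    r = b / b.mean()
    terms = np.zeros_like(r)
    nz = r > 0            # 0 * log(0) is taken as 0
    terms[nz] = r[nz] * np.log(r[nz])
    return terms.mean()

print(theil_index([1, 0, 1, 0], [1, 0, 1, 0]))  # 0.0 for perfect predictions
```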


Jensen-Shannon Divergence

Symmetric metric measuring the dissimilarity between prediction distributions for different groups, used to detect subtle algorithmic discrimination.
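A sketch over two discrete prediction distributions (e.g. histograms of scores per group); with base-2 logarithms the divergence is bounded in [0, 1]:

```python
import numpy as np

def js_divergence(p, q):
    """Jensen-Shannon divergence (base 2) between two discrete
    distributions: 0 for identical, 1 for fully disjoint support."""
    p = np.asarray(p, float); q = np.asarray(q, float)
    p, q = p / p.sum(), q / q.sum()
    m = 0.5 * (p + q)
    def kl(a, b):
        nz = a > 0        # 0 * log(0) is taken as 0
        return np.sum(a[nz] * np.log2(a[nz] / b[nz]))
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

print(js_divergence([0.5, 0.5], [0.5, 0.5]))  # 0.0 -> identical groups
print(js_divergence([1.0, 0.0], [0.0, 1.0]))  # 1.0 -> fully disjoint
```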


Counterfactual Fairness

Fairness principle requiring that the prediction for an individual remain unchanged if their protected attributes were counterfactually modified, evaluated through sensitivity tests.


Individual Fairness Metric

Measure ensuring that individuals who are similar with respect to relevant characteristics receive equivalent algorithmic treatment, quantified by an appropriate distance metric.


Group Fairness Metric

Set of statistical indicators evaluating fairness at demographic population levels rather than at individual level, including demographic parity and equalized odds.


Demographic Parity

Fairness principle requiring that positive prediction rates be identical between different demographic groups, regardless of actual individual characteristics.


Equalized Odds

Strict fairness condition requiring equality of true positive and false positive rates across all groups, ensuring uniform predictive performance.
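One way to report this as a single number is the largest TPR/FPR gap between groups (a sketch, with binary encodings assumed as in the examples above):

```python
import numpy as np

def equalized_odds_gap(y_true, y_pred, group):
    """Largest absolute gap in TPR or FPR between the two groups;
    0 means equalized odds hold exactly."""
    y_true, y_pred, group = map(np.asarray, (y_true, y_pred, group))
    def rate(g, label):
        return y_pred[(group == g) & (y_true == label)].mean()
    tpr_gap = abs(rate(1, 1) - rate(0, 1))
    fpr_gap = abs(rate(1, 0) - rate(0, 0))
    return max(tpr_gap, fpr_gap)

y_true = [1, 1, 0, 0, 1, 1, 0, 0]
y_pred = [1, 0, 0, 0, 1, 1, 0, 0]
group  = [1, 1, 1, 1, 0, 0, 0, 0]
print(equalized_odds_gap(y_true, y_pred, group))  # 0.5 (TPR gap dominates)
```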


Calibration Difference

Metric quantifying calibration gaps between groups, measuring whether predicted probability scores correspond to actual frequencies for each subpopulation.


False Positive Rate Disparity

Indicator measuring inequality of false positive rates between groups, crucial for evaluating discrimination in binary classification systems.


False Negative Rate Disparity

Metric quantifying false negative rate differences between populations, essential for detecting systemic underrepresentation in positive predictions.


Selection Rate Difference

Simple disparity measure calculating the absolute difference between group selection rates, used as an initial indicator of potential discrimination.


Mutual Information Bias

Quantification of dependence between protected attributes and model predictions, using information theory to detect discriminatory correlations.
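A sketch computing mutual information directly from empirical cell frequencies (in nats; variable roles assumed: `a` the protected attribute, `b` the predictions):

```python
import numpy as np

def mutual_information(a, b):
    """Mutual information (nats) between two discrete variables;
    0 means the predictions are independent of the attribute."""
    a, b = np.asarray(a), np.asarray(b)
    mi = 0.0
    for va in np.unique(a):
        for vb in np.unique(b):
            p_ab = np.mean((a == va) & (b == vb))
            if p_ab > 0:
                p_a, p_b = np.mean(a == va), np.mean(b == vb)
                mi += p_ab * np.log(p_ab / (p_a * p_b))
    return mi

print(mutual_information([0, 0, 1, 1], [0, 1, 0, 1]))  # 0.0 -> independent
print(mutual_information([0, 0, 1, 1], [0, 0, 1, 1]))  # ~0.693 -> fully dependent
```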


Kolmogorov-Smirnov Test for Fairness

Non-parametric statistical test comparing the distributions of prediction scores between groups to identify significant algorithmic discrimination.
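The KS statistic itself is the maximum gap between the two groups' empirical CDFs; a self-contained sketch:

```python
import numpy as np

def ks_statistic(scores_a, scores_b):
    """Two-sample Kolmogorov-Smirnov statistic: the maximum gap
    between the empirical CDFs of the two groups' score samples."""
    a, b = np.sort(scores_a), np.sort(scores_b)
    grid = np.concatenate([a, b])
    cdf_a = np.searchsorted(a, grid, side="right") / len(a)
    cdf_b = np.searchsorted(b, grid, side="right") / len(b)
    return np.max(np.abs(cdf_a - cdf_b))

print(ks_statistic([0.1, 0.2, 0.3], [0.1, 0.2, 0.3]))  # 0.0
print(ks_statistic([0.1, 0.2], [0.8, 0.9]))            # 1.0
```

In practice `scipy.stats.ks_2samp` computes the same statistic along with a p-value for the significance test.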


Wasserstein Distance for Fairness

Distance metric measuring the minimal effort to transform the prediction distribution of one group into that of another, quantifying overall inequality.
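For 1-D scores with equal-size samples, the 1-Wasserstein distance reduces to the mean gap between sorted values; a sketch under that simplifying assumption:

```python
import numpy as np

def wasserstein_1d(scores_a, scores_b):
    """1-Wasserstein (earth mover's) distance between two
    equal-size 1-D samples: mean gap between sorted values."""
    a, b = np.sort(scores_a), np.sort(scores_b)
    assert len(a) == len(b), "this sketch assumes equal sample sizes"
    return np.mean(np.abs(a - b))

print(wasserstein_1d([0.0, 1.0], [1.0, 2.0]))  # 1.0 -> shift everything by 1
```

The general case (unequal sizes, weighted samples) is handled by `scipy.stats.wasserstein_distance`.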


Entropy-Based Bias Metric

Metric using entropy to measure the uncertainty and diversity of predictions, detecting biases through analysis of the output distribution.


Consistency Score

Individual fairness metric evaluating the consistency of predictions for similar individuals, typically measured by comparing each prediction with those of its nearest neighbors in feature space.
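A sketch of one common k-nearest-neighbour formulation (the exact neighbourhood size and distance are modelling choices assumed here):

```python
import numpy as np

def consistency_score(X, y_pred, k=2):
    """1 minus the mean gap between each prediction and the average
    prediction of its k nearest neighbours (Euclidean distance);
    1.0 means fully consistent predictions for similar individuals."""
    X = np.asarray(X, float)
    y_pred = np.asarray(y_pred, float)
    total = 0.0
    for i in range(len(X)):
        dists = np.linalg.norm(X - X[i], axis=1)
        neighbours = np.argsort(dists)[1:k + 1]  # skip the point itself
        total += abs(y_pred[i] - y_pred[neighbours].mean())
    return 1.0 - total / len(X)

X = [[0.0], [0.1], [0.2], [5.0]]
print(consistency_score(X, [1, 1, 1, 1]))  # 1.0 -> identical predictions
```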
