AI Glossary

The complete dictionary of Artificial Intelligence

162 Categories · 2,032 Subcategories · 23,060 Terms

📖 Terms

Disparate Impact

Statistical measure quantifying the differential impact of an algorithmic decision on protected groups, calculated as the ratio between the selection rates of disadvantaged and favored groups.
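A minimal numpy sketch of this ratio (the function name and the 0/1 group encoding, with 1 marking the disadvantaged group, are illustrative assumptions):

```python
import numpy as np

def disparate_impact(y_pred, group):
    """Ratio of selection rates: disadvantaged group / favored group.

    y_pred: binary predictions (1 = positive/selected)
    group:  binary protected attribute (1 = disadvantaged group)
    """
    y_pred, group = np.asarray(y_pred), np.asarray(group)
    rate_disadvantaged = y_pred[group == 1].mean()
    rate_favored = y_pred[group == 0].mean()
    return rate_disadvantaged / rate_favored

# Toy example: 4 of 10 selected in the disadvantaged group,
# 8 of 10 in the favored group -> ratio 0.5 (fails the common "80% rule").
y_pred = np.array([1] * 4 + [0] * 6 + [1] * 8 + [0] * 2)
group = np.array([1] * 10 + [0] * 10)
print(disparate_impact(y_pred, group))  # 0.5
```

A value of 1.0 indicates equal selection rates; values below roughly 0.8 are often treated as evidence of adverse impact.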

📖 Terms

Statistical Parity Difference

Metric evaluating the difference in positive-prediction probability between demographic groups; perfect statistical parity is achieved when the difference is zero.
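A short sketch of the difference-of-rates computation, under the same illustrative 0/1 group encoding as above:

```python
import numpy as np

def statistical_parity_difference(y_pred, group):
    """P(y_hat=1 | disadvantaged) - P(y_hat=1 | favored); 0 = perfect parity."""
    y_pred, group = np.asarray(y_pred), np.asarray(group)
    return y_pred[group == 1].mean() - y_pred[group == 0].mean()
```

Negative values indicate the disadvantaged group is selected less often; zero is perfect statistical parity.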

📖 Terms

Equal Opportunity Difference

Indicator measuring the difference in true positive rates between groups, ensuring that qualified individuals have equal chances of being correctly identified regardless of their group membership.
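A numpy sketch of the TPR gap (illustrative names; assumes both groups contain at least one actual positive):

```python
import numpy as np

def true_positive_rate(y_true, y_pred):
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    return y_pred[y_true == 1].mean()

def equal_opportunity_difference(y_true, y_pred, group):
    """TPR(disadvantaged) - TPR(favored); 0 means qualified individuals
    are correctly identified at equal rates in both groups."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    mask = np.asarray(group) == 1
    return (true_positive_rate(y_true[mask], y_pred[mask])
            - true_positive_rate(y_true[~mask], y_pred[~mask]))
```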

📖 Terms

Average Odds Difference

Metric combining the differences in true positive rates and false positive rates between groups to evaluate the overall fairness of classification predictions.
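The combination can be sketched as the mean of the two rate gaps (illustrative names; assumes both groups contain positives and negatives):

```python
import numpy as np

def rates(y_true, y_pred):
    """(TPR, FPR) for one group."""
    tpr = y_pred[y_true == 1].mean()
    fpr = y_pred[y_true == 0].mean()
    return tpr, fpr

def average_odds_difference(y_true, y_pred, group):
    """Mean of the TPR gap and the FPR gap between groups; 0 = fair."""
    y_true, y_pred, group = map(np.asarray, (y_true, y_pred, group))
    m = group == 1
    tpr1, fpr1 = rates(y_true[m], y_pred[m])
    tpr0, fpr0 = rates(y_true[~m], y_pred[~m])
    return 0.5 * ((tpr1 - tpr0) + (fpr1 - fpr0))
```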

📖 Terms

Theil Index

Inequality measure based on information theory quantifying the divergence between the distribution of predictions and a perfectly fair distribution, sensitive to systemic biases.
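A sketch of the Theil index (generalized entropy index with alpha = 1) over a non-negative "benefit" vector; defining the per-individual benefit as `y_pred - y_true + 1` is one common convention, not the only one:

```python
import numpy as np

def theil_index(benefits):
    """Theil index of a non-negative benefit vector:
    mean of (b/mu) * ln(b/mu); 0 = perfectly even distribution."""
    b = np.asarray(benefits, dtype=float)
    r = b / b.mean()
    # 0 * ln(0) is taken as 0; the inner where avoids log(0) warnings
    terms = np.where(r > 0, r * np.log(np.where(r > 0, r, 1.0)), 0.0)
    return terms.mean()

# One common benefit definition: b_i = y_pred_i - y_true_i + 1,
# so b_i is 0 (false negative), 1 (correct), or 2 (false positive).
```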

📖 Terms

Jensen-Shannon Divergence

Symmetric metric measuring the dissimilarity between prediction distributions for different groups, used to detect subtle algorithmic discrimination.
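A sketch for discrete distributions, using the natural logarithm (so the value is bounded by ln 2):

```python
import numpy as np

def js_divergence(p, q):
    """Jensen-Shannon divergence between two discrete distributions:
    the average KL divergence of each distribution to their midpoint."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    m = 0.5 * (p + q)

    def kl(a, b):
        mask = a > 0                      # 0 * log(0/x) contributes 0
        return np.sum(a[mask] * np.log(a[mask] / b[mask]))

    return 0.5 * kl(p, m) + 0.5 * kl(q, m)
```

Identical group distributions give 0; completely disjoint ones give ln 2.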

📖 Terms

Counterfactual Fairness

Fairness principle requiring that the prediction for an individual remain unchanged if their protected attributes were counterfactually modified, evaluated through sensitivity tests.
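A naive sensitivity-test sketch: flip a binary protected attribute column and measure how often the prediction changes. Note this treats a raw column flip as a valid counterfactual, which a rigorous treatment would replace with a causal model; all names here are illustrative:

```python
import numpy as np

def counterfactual_flip_test(predict, X, protected_col):
    """Fraction of individuals whose prediction changes when the protected
    attribute is counterfactually flipped (0.0 = passes this naive test)."""
    X = np.asarray(X, dtype=float)
    X_cf = X.copy()
    X_cf[:, protected_col] = 1 - X_cf[:, protected_col]  # flip binary attribute
    return np.mean(predict(X) != predict(X_cf))

# A model that ignores column 0 passes; one that uses it does not.
fair_model = lambda X: (X[:, 1] > 0.5).astype(int)
biased_model = lambda X: (X[:, 0] + X[:, 1] > 1.0).astype(int)
```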

📖 Terms

Individual Fairness Metric

Measure ensuring that individuals who are similar according to relevant characteristics receive equivalent algorithmic treatment, quantified by an appropriate distance metric.
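One common formalization is a Lipschitz condition: predictions for two individuals may differ by at most L times their feature distance. A brute-force sketch (illustrative names; Euclidean distance assumed):

```python
import numpy as np

def lipschitz_violations(scores, X, L=1.0):
    """Count pairs (i, j) where |f(x_i) - f(x_j)| > L * ||x_i - x_j||,
    i.e. similar individuals receiving dissimilar treatment."""
    X, s = np.asarray(X, float), np.asarray(scores, float)
    violations = 0
    for i in range(len(s)):
        for j in range(i + 1, len(s)):
            dist = np.linalg.norm(X[i] - X[j])
            if abs(s[i] - s[j]) > L * dist + 1e-12:
                violations += 1
    return violations
```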

📖 Terms

Group Fairness Metric

Set of statistical indicators evaluating fairness at the level of demographic groups rather than individuals, including demographic parity and equalized odds.

📖 Terms

Demographic Parity

Fairness principle requiring that positive prediction rates be identical between different demographic groups, regardless of actual individual characteristics.

📖 Terms

Equalized Odds

Strict fairness condition requiring equality of true positive and false positive rates across all groups, ensuring uniform predictive performance.

📖 Terms

Calibration Difference

Metric quantifying calibration gaps between groups, measuring whether predicted probability scores correspond to actual frequencies for each subpopulation.
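A binned sketch: compare observed positive rates of the two groups among samples whose predicted scores fall in the same bin (assumes scores in [0, 1]; the binning scheme and names are illustrative):

```python
import numpy as np

def calibration_difference(y_true, scores, group, n_bins=5):
    """Mean absolute gap, over score bins, between the observed positive
    rates of the two groups within the same bin."""
    y_true, scores, group = map(np.asarray, (y_true, scores, group))
    bins = np.minimum((scores * n_bins).astype(int), n_bins - 1)
    gaps = []
    for b in range(n_bins):
        in_bin = bins == b
        g1, g0 = in_bin & (group == 1), in_bin & (group == 0)
        if g1.any() and g0.any():     # only bins populated by both groups
            gaps.append(abs(y_true[g1].mean() - y_true[g0].mean()))
    return float(np.mean(gaps)) if gaps else 0.0
```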

📖 Terms

False Positive Rate Disparity

Indicator measuring inequality of false positive rates between groups, crucial for evaluating discrimination in binary classification systems.
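A sketch mirroring the equal-opportunity example, but restricted to actual negatives (assumes both groups contain negatives):

```python
import numpy as np

def false_positive_rate(y_true, y_pred):
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    return y_pred[y_true == 0].mean()

def fpr_disparity(y_true, y_pred, group):
    """FPR(disadvantaged) - FPR(favored); 0 = equal false-positive burden."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    m = np.asarray(group) == 1
    return (false_positive_rate(y_true[m], y_pred[m])
            - false_positive_rate(y_true[~m], y_pred[~m]))
```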

📖 Terms

False Negative Rate Disparity

Metric quantifying false negative rate differences between populations, essential for detecting systemic underrepresentations in positive predictions.

📖 Terms

Selection Rate Difference

Simple disparity measure calculating the absolute difference between group selection rates, used as an initial indicator of potential discrimination.

📖 Terms

Mutual Information Bias

Quantification of dependence between protected attributes and model predictions, using information theory to detect discriminatory correlations.
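A sketch for the binary/binary case, computed from empirical joint and marginal frequencies (in nats):

```python
import numpy as np

def mutual_information(a, b):
    """Mutual information between two binary variables; 0 means the
    predictions carry no information about the protected attribute."""
    a, b = np.asarray(a), np.asarray(b)
    mi = 0.0
    for va in (0, 1):
        for vb in (0, 1):
            p_ab = np.mean((a == va) & (b == vb))
            p_a, p_b = np.mean(a == va), np.mean(b == vb)
            if p_ab > 0:
                mi += p_ab * np.log(p_ab / (p_a * p_b))
    return mi
```

Independent attribute and prediction give 0; a prediction fully determined by the attribute gives ln 2.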

📖 Terms

Kolmogorov-Smirnov Test for Fairness

Non-parametric statistical test comparing the distributions of prediction scores between groups to identify statistically significant disparities.
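The underlying statistic is the largest gap between the two groups' empirical CDFs; a numpy sketch (in practice `scipy.stats.ks_2samp` also supplies the p-value):

```python
import numpy as np

def ks_statistic(scores_a, scores_b):
    """Two-sample Kolmogorov-Smirnov statistic: max absolute gap between
    the empirical CDFs of the two groups' prediction scores."""
    a, b = np.sort(scores_a), np.sort(scores_b)
    points = np.concatenate([a, b])
    cdf_a = np.searchsorted(a, points, side="right") / len(a)
    cdf_b = np.searchsorted(b, points, side="right") / len(b)
    return np.max(np.abs(cdf_a - cdf_b))
```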

📖 Terms

Wasserstein Distance for Fairness

Distance metric measuring the minimum cost of transforming the prediction distribution of one group into that of another, quantifying overall inequality.
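In one dimension with equally sized samples, the 1-Wasserstein distance reduces to the mean gap between sorted samples; a sketch under that simplifying assumption (`scipy.stats.wasserstein_distance` handles the general case):

```python
import numpy as np

def wasserstein_1d(scores_a, scores_b):
    """1-D earth mover's distance between two equally sized samples:
    average transport distance after sorting both."""
    a, b = np.sort(scores_a), np.sort(scores_b)
    assert len(a) == len(b), "this sketch assumes equally sized samples"
    return np.mean(np.abs(a - b))
```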

📖 Terms

Entropy-Based Bias Metric

Metric using entropy to measure the uncertainty and diversity of predictions, detecting biases through analysis of the output distribution.

📖 Terms

Consistency Score

Individual fairness metric evaluating the consistency of predictions for similar individuals, measured by the correlation between predictions and feature similarities.
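One common formulation (used, e.g., in k-nearest-neighbour consistency scores) compares each prediction to the mean prediction of its nearest neighbours; a brute-force sketch with illustrative names:

```python
import numpy as np

def consistency_score(X, y_pred, k=2):
    """1 - average gap between each prediction and the mean prediction of
    its k nearest neighbours; 1.0 means similar individuals are treated
    identically."""
    X, y = np.asarray(X, float), np.asarray(y_pred, float)
    n = len(y)
    total_gap = 0.0
    for i in range(n):
        dists = np.linalg.norm(X - X[i], axis=1)
        dists[i] = np.inf                     # exclude the point itself
        neighbours = np.argsort(dists)[:k]
        total_gap += abs(y[i] - y[neighbours].mean())
    return 1.0 - total_gap / n
```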
