AI Glossary
The Complete Dictionary of Artificial Intelligence
Intersectional Fairness
Ethical principle ensuring that AI systems do not produce compounded discrimination at the intersection of multiple protected characteristics such as gender, ethnicity, or age.
Multiple Algorithmic Bias
Phenomenon in which an algorithm simultaneously exhibits multiple types of discriminatory bias that interact with and amplify one another during automated decision-making.
Discrimination Matrix
Analytical tool representing the interactions between different protected characteristics to identify and quantify patterns of combined discrimination in algorithmic predictions.
Intersectional Fairness Metrics
Quantitative indicators specifically designed to measure the fairness of AI systems at the level of subgroups defined by the intersection of multiple protected attributes.
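A minimal sketch of one such indicator: compute the positive-prediction rate for every intersectional subgroup and report the worst-case gap between subgroups (a demographic-parity-style measure). All records, attribute names, and values below are illustrative assumptions, not data from this glossary.

```python
from itertools import groupby

# Hypothetical toy data: each record is (gender, ethnicity, model_prediction).
records = [
    ("F", "A", 1), ("F", "A", 0), ("F", "B", 0), ("F", "B", 0),
    ("M", "A", 1), ("M", "A", 1), ("M", "B", 1), ("M", "B", 0),
]

def subgroup_positive_rates(records):
    """Positive-prediction rate per intersectional (gender, ethnicity) subgroup."""
    key = lambda r: (r[0], r[1])
    rates = {}
    for group, rows in groupby(sorted(records, key=key), key=key):
        rows = list(rows)
        rates[group] = sum(r[2] for r in rows) / len(rows)
    return rates

rates = subgroup_positive_rates(records)
# Intersectional parity gap: spread between the best- and worst-treated subgroups.
gap = max(rates.values()) - min(rates.values())
```

A gap of 0 would indicate that every intersectional subgroup receives positive predictions at the same rate; here ("M", "A") at 1.0 versus ("F", "B") at 0.0 yields a gap of 1.0.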
Combined Disparity Analysis
Statistical methodology for evaluating differences in treatment or impact between groups defined by the combination of multiple demographic or social characteristics.
Intersectional Equal Opportunity Principle
Extension of the equal opportunity principle ensuring that true positive rates are equal not only between main groups but also between their intersectional subgroups.
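A sketch of how this extension can be checked: compute the true-positive rate within each intersectional subgroup rather than only within the main groups. The records and attribute names here are invented for illustration.

```python
from collections import defaultdict

# Hypothetical records: (gender, age_band, y_true, y_pred).
data = [
    ("F", "young", 1, 1), ("F", "young", 1, 0),
    ("F", "old",   1, 1), ("F", "old",   1, 1),
    ("M", "young", 1, 1), ("M", "young", 0, 1),
    ("M", "old",   1, 0), ("M", "old",   1, 1),
]

def intersectional_tpr(data):
    """True-positive rate within each (gender, age_band) subgroup."""
    tp = defaultdict(int)   # true positives per subgroup
    pos = defaultdict(int)  # actual positives per subgroup
    for gender, age, y_true, y_pred in data:
        if y_true == 1:
            pos[(gender, age)] += 1
            tp[(gender, age)] += int(y_pred == 1)
    return {k: tp[k] / pos[k] for k in pos}

tprs = intersectional_tpr(data)
```

Intersectional equal opportunity would require all values in `tprs` to be (approximately) equal; in this toy data they range from 0.5 to 1.0, so the principle is violated at the subgroup level.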
Intersectional Algorithmic Audit
Systematic process for evaluating algorithmic biases that specifically considers the discriminatory effects arising from the intersection of multiple protected characteristics.
Multi-dimensional Distributive Justice
Theoretical framework evaluating the fairness of resource or opportunity distribution along multiple dimensions simultaneously, thus avoiding the oversimplifications of one-dimensional analyses.
Intersectional Weighting
Technique for adjusting weights in AI models to specifically compensate for biases affecting the intersectional subgroups most vulnerable to combined discrimination.
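One simple instance of this idea, assuming an inverse-frequency scheme analogous to "balanced" class weighting: each intersectional subgroup receives the same total sample weight, which boosts under-represented intersections during training. The subgroup labels below are assumptions for the sketch.

```python
from collections import Counter

# Hypothetical (gender, ethnicity) label for each training example.
groups = [("F", "A"), ("F", "A"), ("F", "B"), ("M", "A"),
          ("M", "A"), ("M", "A"), ("M", "B"), ("M", "B")]

def intersectional_weights(groups):
    """Inverse-frequency sample weights: every intersectional subgroup
    contributes the same total weight, compensating rare intersections."""
    counts = Counter(groups)
    n, k = len(groups), len(counts)
    return [n / (k * counts[g]) for g in groups]

weights = intersectional_weights(groups)
```

The lone ("F", "B") example gets weight 2.0 while each of the three ("M", "A") examples gets 2/3, so each subgroup's total weight is identical.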
Protected Features Correlation
Analysis of statistical dependencies between different protected attributes in training data, essential for understanding and mitigating emerging intersectional biases.
Contextual Debiasing
Approach to correcting algorithmic biases that considers the social and historical context of intersectional discriminations rather than treating each characteristic in isolation.
Cross-group Fairness
Evaluation criterion ensuring that algorithmic performance is equivalent across all possible intersections of groups defined by different protected characteristics.
Multi-attribute Segmentation
Technique of partitioning data into subgroups based on the simultaneous combination of multiple attributes to reveal and analyze hidden intersectional biases.
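A minimal sketch of such a partitioning, with hypothetical attribute names: rows are grouped by the tuple of their values on the chosen attributes, so each intersectional subgroup can be inspected separately.

```python
from collections import defaultdict

# Illustrative records as dicts; attribute names are assumptions.
rows = [
    {"gender": "F", "region": "N", "outcome": 1},
    {"gender": "F", "region": "S", "outcome": 0},
    {"gender": "M", "region": "N", "outcome": 1},
    {"gender": "F", "region": "N", "outcome": 0},
]

def segment(rows, attrs):
    """Partition rows by the simultaneous combination of several attributes."""
    parts = defaultdict(list)
    for row in rows:
        parts[tuple(row[a] for a in attrs)].append(row)
    return dict(parts)

segments = segment(rows, ["gender", "region"])
```

Per-segment statistics (outcome rates, error rates) computed over `segments` are what reveal biases that remain hidden when each attribute is analyzed alone.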
Aggregate Differential Impact
Composite measure quantifying an algorithm's overall discriminatory effect on populations that, because of their intersectional characteristics, experience multiple forms of discrimination simultaneously.
Intersectional Risk Score
Numerical indicator evaluating the probability that an individual belonging to a specific intersectional subgroup will experience algorithmic discrimination compared to other groups.
Multivariate Counterfactual Fairness
Principle requiring that a model's predictions would remain unchanged if multiple protected characteristics were modified simultaneously, making fairness robust to intersections.
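A sketch of a brute-force check of this principle: enumerate every simultaneous assignment of the protected attributes and verify that the prediction never changes. The model, feature names, and attribute values are invented for illustration; this checks only attribute substitution, not causal pathways.

```python
from itertools import product

# Hypothetical model that reads only non-protected features.
def model(x):
    return int(x["income"] > 40 and x["debt"] < 10)

def multivariate_counterfactual_stable(model, x, protected_values):
    """True iff the prediction is unchanged under every simultaneous
    reassignment of the protected attributes."""
    base = model(x)
    keys = list(protected_values)
    for combo in product(*(protected_values[k] for k in keys)):
        x_cf = dict(x, **dict(zip(keys, combo)))
        if model(x_cf) != base:
            return False
    return True

applicant = {"income": 55, "debt": 5, "gender": "F", "ethnicity": "A"}
stable = multivariate_counterfactual_stable(
    model, applicant,
    {"gender": ["F", "M"], "ethnicity": ["A", "B"]},
)
```

Here `stable` is True because the toy model ignores the protected attributes; a model whose output shifted under any joint reassignment would fail the check.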
Fair Multi-objective Optimization
Training paradigm for AI models that seeks to simultaneously optimize predictive performance and intersectional fairness according to several potentially conflicting metrics.
Intersectional Significance Test
Statistical procedure determining whether observed differences between intersectional subgroups are statistically significant or result from chance in algorithmic predictions.
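One common choice for such a procedure is a two-proportion z-test comparing positive-prediction rates between two intersectional subgroups. The sketch below uses the standard normal approximation and invented counts; in practice, multiple-comparison corrections are needed when testing many subgroups.

```python
import math

def two_proportion_z(pos_a, n_a, pos_b, n_b):
    """Two-sided z-test for a difference in positive-prediction rates
    between two subgroups (pooled-variance normal approximation)."""
    p_a, p_b = pos_a / n_a, pos_b / n_b
    pooled = (pos_a + pos_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Two-sided p-value from the standard normal CDF via erf.
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical counts: subgroup A approved 80/100, subgroup B approved 40/100.
z, p_value = two_proportion_z(80, 100, 40, 100)
```

A small p-value indicates the observed subgroup difference is unlikely to result from chance alone; with these counts the disparity is highly significant.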