AI Glossary

The Complete Artificial Intelligence Dictionary

162 categories · 2,032 subcategories · 23,060 terms
Terms

Intersectional Fairness

Ethical principle requiring that AI systems not produce compounded discrimination at the intersection of multiple protected characteristics such as gender, ethnicity, or age.

Multiple Algorithmic Bias

Phenomenon where an algorithm simultaneously exhibits multiple types of discriminatory bias that interact and amplify one another during automated decision-making.

Discrimination Matrix

Analytical tool representing the interactions between different protected characteristics to identify and quantify patterns of combined discrimination in algorithmic predictions.

Intersectional Fairness Metrics

Quantitative indicators specifically designed to measure the fairness of AI systems at the level of subgroups defined by the intersection of multiple protected attributes.
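One common metric of this kind is the gap in true-positive rates across intersectional subgroups. A minimal sketch, assuming a hypothetical input format of `(protected_attrs, y_true, y_pred)` tuples:

```python
from collections import defaultdict

def intersectional_tpr_gap(records):
    """Compute per-subgroup true-positive rates and the largest gap
    between any two intersectional subgroups.

    `records` is a list of (protected_attrs, y_true, y_pred) tuples,
    where protected_attrs is a tuple such as (gender, ethnicity).
    This input format is an assumption for illustration.
    """
    tp = defaultdict(int)   # true positives per subgroup
    pos = defaultdict(int)  # actual positives per subgroup
    for attrs, y_true, y_pred in records:
        if y_true == 1:
            pos[attrs] += 1
            if y_pred == 1:
                tp[attrs] += 1
    tprs = {g: tp[g] / pos[g] for g in pos}
    return tprs, max(tprs.values()) - min(tprs.values())

# Toy data: all actual positives, with uneven recall across subgroups.
data = [
    (("F", "A"), 1, 1), (("F", "A"), 1, 1),
    (("F", "B"), 1, 1), (("F", "B"), 1, 0),
    (("M", "A"), 1, 0), (("M", "A"), 1, 1),
]
tprs, gap = intersectional_tpr_gap(data)  # gap of 0.5 between subgroups
```

A gap of zero would correspond to the intersectional equal opportunity principle defined below.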

Combined Disparity Analysis

Statistical methodology evaluating differences in treatment or impact between groups defined by the combination of multiple demographic or social characteristics.

Intersectional Equal Opportunity Principle

Extension of the equal opportunity principle ensuring that true positive rates are equal not only between main groups but also between their intersectional subgroups.

Intersectional Algorithmic Audit

Systematic evaluation process of algorithmic biases specifically considering discriminatory effects resulting from the intersection of multiple protected characteristics.

Multi-dimensional Distributive Justice

Theoretical framework evaluating the fairness of resource or opportunity distribution across multiple dimensions simultaneously, avoiding the oversimplification inherent in one-dimensional analyses.

Intersectional Weighting

Technique of adjusting weights in AI models to compensate specifically for biases affecting the intersectional subgroups most vulnerable to combined discrimination.
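A simple instance of this idea is inverse-frequency sample weighting, where each sample is weighted so that every intersectional subgroup contributes equally in aggregate. A minimal sketch under that assumption (one of several possible weighting schemes):

```python
from collections import Counter

def intersectional_weights(groups):
    """Assign each sample a weight inversely proportional to the size
    of its intersectional subgroup, so small subgroups are not drowned
    out during training. Weights are scaled so their total equals the
    sample count."""
    counts = Counter(groups)
    n, k = len(groups), len(counts)
    return [n / (k * counts[g]) for g in groups]

# Toy data: one subgroup three times larger than the other.
groups = [("F", "A"), ("F", "A"), ("F", "A"), ("M", "B")]
weights = intersectional_weights(groups)
# The lone ("M", "B") sample receives triple the weight of each ("F", "A") sample.
```

These weights can then be passed as per-sample weights to most training objectives.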

Protected Features Correlation

Analysis of statistical dependencies between different protected attributes in training data, essential for understanding and mitigating emerging intersectional biases.
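For categorical protected attributes, one standard measure of such dependence is Cramér's V, derived from the chi-square statistic of the attributes' contingency table. A self-contained sketch:

```python
from collections import Counter
from math import sqrt

def cramers_v(xs, ys):
    """Cramér's V between two categorical attributes: 0 means no
    association, 1 means one attribute fully determines the other."""
    n = len(xs)
    joint = Counter(zip(xs, ys))
    px, py = Counter(xs), Counter(ys)
    chi2 = 0.0
    for x in px:
        for y in py:
            expected = px[x] * py[y] / n       # under independence
            observed = joint.get((x, y), 0)
            chi2 += (observed - expected) ** 2 / expected
    k = min(len(px), len(py)) - 1
    return sqrt(chi2 / (n * k)) if k else 0.0

# Toy data in which the two protected attributes are perfectly correlated.
gender = ["F", "F", "M", "M"]
ethnicity = ["A", "A", "B", "B"]
v = cramers_v(gender, ethnicity)  # 1.0
```

A high V signals that mitigating bias on one attribute alone may leak through the correlated attribute.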

Contextual Debiasing

Approach to correcting algorithmic biases that considers the social and historical context of intersectional discrimination rather than treating each characteristic in isolation.

Cross-group Fairness

Evaluation criterion ensuring that algorithmic performance is equivalent across all possible intersections of groups defined by different protected characteristics.

Multi-attribute Segmentation

Technique of partitioning data into subgroups based on the simultaneous combination of multiple attributes to reveal and analyze hidden intersectional biases.
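In practice this amounts to grouping records by a tuple of attribute values and comparing outcome rates per segment. A minimal sketch, assuming records are dicts (a hypothetical format chosen for illustration):

```python
from collections import defaultdict

def segment(records, attrs):
    """Partition records into subgroups keyed by the combined values
    of the given attribute names."""
    segments = defaultdict(list)
    for r in records:
        key = tuple(r[a] for a in attrs)
        segments[key].append(r)
    return dict(segments)

records = [
    {"gender": "F", "age": "young", "approved": 1},
    {"gender": "F", "age": "old", "approved": 0},
    {"gender": "M", "age": "young", "approved": 1},
    {"gender": "F", "age": "old", "approved": 0},
]
segments = segment(records, ["gender", "age"])
# Approval rate per intersectional segment reveals disparities that
# single-attribute grouping (by gender or age alone) can mask.
rates = {g: sum(r["approved"] for r in rs) / len(rs)
         for g, rs in segments.items()}
```

Here older women are never approved even though women and young applicants each look fine in aggregate.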

Aggregate Differential Impact

Composite measure quantifying the overall discriminatory effect of an algorithm on populations experiencing multiple forms of discrimination simultaneously based on their intersectional characteristics.

Intersectional Risk Score

Numerical indicator evaluating the probability that an individual belonging to a specific intersectional subgroup will experience algorithmic discrimination compared to other groups.

Multivariate Counterfactual Fairness

Principle ensuring that a model's predictions would remain unchanged if multiple protected characteristics were modified simultaneously, making fairness robust to intersectional effects.
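An operational check of this principle is to flip every combination of protected attribute values for a record and verify the prediction is stable. A sketch with a hypothetical toy model (the real test uses counterfactuals from a causal model, not raw attribute swaps):

```python
from itertools import product

def counterfactual_stable(model, record, protected, values):
    """Return True if the model's prediction for `record` is unchanged
    under every combination of values of the protected attributes."""
    base = model(record)
    for combo in product(*(values[a] for a in protected)):
        variant = dict(record)
        variant.update(dict(zip(protected, combo)))
        if model(variant) != base:
            return False
    return True

# Toy model that decides on income only, ignoring protected attributes.
model = lambda r: int(r["income"] > 50)
record = {"income": 60, "gender": "F", "ethnicity": "A"}
ok = counterfactual_stable(
    model, record, ["gender", "ethnicity"],
    {"gender": ["F", "M"], "ethnicity": ["A", "B"]},
)  # True: no attribute swap changes the prediction
```

Checking all combinations, not just one attribute at a time, is what distinguishes the multivariate version from single-attribute counterfactual fairness.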

Fair Multi-objective Optimization

Training paradigm for AI models that seeks to simultaneously optimize predictive performance and intersectional fairness under several competing metrics.

Intersectional Significance Test

Statistical procedure determining whether observed differences between intersectional subgroups are statistically significant or result from chance in algorithmic predictions.
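One such procedure, for comparing positive-outcome rates between two intersectional subgroups, is the two-proportion z-test. A minimal sketch with hypothetical counts (note that testing many subgroup pairs requires a multiple-comparison correction, which is omitted here):

```python
from math import sqrt

def two_proportion_z(pos_a, n_a, pos_b, n_b):
    """z statistic for the difference in positive-outcome rates between
    two subgroups; |z| > 1.96 is significant at the 5% level."""
    p_a, p_b = pos_a / n_a, pos_b / n_b
    p_pool = (pos_a + pos_b) / (n_a + n_b)      # pooled rate under H0
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se

# Hypothetical approval counts for two intersectional subgroups.
z = two_proportion_z(pos_a=80, n_a=100, pos_b=55, n_b=100)
significant = abs(z) > 1.96
```

A significant result here says the observed 25-point gap between the subgroups is unlikely to be chance, not that the algorithm caused it.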
