
AI Glossary

The Complete Artificial Intelligence Dictionary

162 Categories · 2,032 Subcategories · 23,060 Terms

📖
Term

Fair resampling

Preprocessing technique that modifies the training data distribution by over-representing minority or underrepresented groups to reduce algorithmic disparities in model predictions.
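A minimal NumPy sketch of the idea, assuming categorical group labels; `fair_resample` is an illustrative helper, not a standard library function. It oversamples (with replacement) every group up to the size of the largest one:

```python
import numpy as np

def fair_resample(X, y, group, rng=None):
    """Oversample under-represented groups (with replacement) until
    every group has as many rows as the largest group."""
    rng = np.random.default_rng(rng)
    groups, counts = np.unique(group, return_counts=True)
    target = counts.max()
    idx = []
    for g in groups:
        members = np.flatnonzero(group == g)
        idx.append(members)
        deficit = target - members.size
        if deficit > 0:
            idx.append(rng.choice(members, size=deficit, replace=True))
    idx = np.concatenate(idx)
    return X[idx], y[idx], group[idx]
```

After resampling, each group contributes equally many rows to training, at the cost of duplicating minority-group examples.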


Inverse probability weighting

Bias correction method assigning weights to training examples inversely proportional to their frequency in the population, thus compensating for demographic group imbalance.
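A short sketch of the weighting scheme, assuming a single categorical group column; the function name is illustrative. Each example gets weight 1 / P(group), so every group contributes the same total weight:

```python
import numpy as np

def inverse_probability_weights(group):
    """Weight each example by 1 / (empirical group frequency),
    normalised so the weights average to 1 over the dataset."""
    groups, counts = np.unique(group, return_counts=True)
    freq = counts / group.size          # empirical group probabilities
    w = np.empty(group.size, dtype=float)
    for g, f in zip(groups, freq):
        w[group == g] = 1.0 / f
    return w / w.mean()
```

The resulting vector can be passed as a per-sample weight to most training APIs (e.g. a `sample_weight` argument).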


Adversarial learning for fairness

Approach of simultaneously training a main predictor and an adversary trying to predict sensitive attributes, forcing the main model to generate representations invariant to protected characteristics.


Fair prediction calibration

Post-processing technique adjusting prediction scores to ensure predicted probabilities correspond consistently to observed frequencies across different demographic groups.


Constrained fairness optimization

Training method incorporating mathematical constraints on fairness metrics directly into the objective function, ensuring fairness criteria are met during model optimization.


Optimized equalized odds

Processing technique ensuring equal true positive and false positive rates between groups while maximizing overall performance, often implemented through specific loss functions or threshold adjustments.
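The criterion is easiest to see as a metric: a possible sketch, measuring how far a classifier is from equalized odds by taking the largest between-group gap in TPR and in FPR (the helper name is illustrative):

```python
import numpy as np

def equalized_odds_gap(y_true, y_pred, group):
    """Return (TPR gap, FPR gap): the spread of true/false positive
    rates across groups. (0, 0) means equalized odds holds exactly."""
    def rates(mask):
        yt, yp = y_true[mask], y_pred[mask]
        tpr = yp[yt == 1].mean()   # P(pred=1 | y=1) within the group
        fpr = yp[yt == 0].mean()   # P(pred=1 | y=0) within the group
        return tpr, fpr
    tprs, fprs = zip(*(rates(group == g) for g in np.unique(group)))
    return max(tprs) - min(tprs), max(fprs) - min(fprs)
```

Optimization-based implementations then push both gaps toward zero, via a penalty term or per-group threshold search.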


Adjusted demographic parity

Correction method ensuring positive predictions are distributed proportionally across different demographic groups, regardless of their intrinsic characteristics.
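The underlying criterion, demographic parity, can be checked with a few lines of NumPy; `demographic_parity_gap` is an illustrative name, not an established API:

```python
import numpy as np

def demographic_parity_gap(y_pred, group):
    """Difference between the highest and lowest positive-prediction
    rate across demographic groups (0 = perfect parity)."""
    rates = [y_pred[group == g].mean() for g in np.unique(group)]
    return max(rates) - min(rates)
```

A correction method of this kind adjusts predictions or thresholds until the gap is acceptably small.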


Causal debiasing

Approach using causal graphs to identify and neutralize causal paths introducing bias, preserving only relevant relationships for the prediction task.


Group invariance learning

Training technique forcing the model to learn representations invariant to variations between demographic groups while preserving information relevant to the main task.


Post-hoc correction through adaptive thresholds

Method applied after training that dynamically adjusts decision thresholds by group to balance performance metrics and ensure fairness in final predictions.
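A possible post-hoc sketch, assuming real-valued scores and a target positive rate to equalize across groups; both helper names are hypothetical. Each group gets its own threshold, chosen as a quantile of that group's scores:

```python
import numpy as np

def group_thresholds(scores, group, target_rate):
    """Pick one decision threshold per group so that roughly
    `target_rate` of that group's scores fall at or above it."""
    return {g: np.quantile(scores[group == g], 1 - target_rate)
            for g in np.unique(group)}

def predict_with_thresholds(scores, group, thresholds):
    """Apply the per-group thresholds to produce 0/1 predictions."""
    return np.array([s >= thresholds[g] for s, g in zip(scores, group)],
                    dtype=int)
```

In practice the per-group thresholds are tuned on held-out data against the fairness metric of interest, not only against a fixed positive rate.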


Disparity reduction through reweighting

Preprocessing technique that recalculates the weights of training instances to minimize statistical divergence between the observed distribution and a fair target distribution.


Fair feature masking

Processing strategy that selectively masks or transforms potentially biased features during training to force the model to rely on non-discriminatory attributes.
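One simple masking variant, sketched here with an illustrative helper: replace suspect columns with their column mean so they carry no per-example signal, forcing the model onto the remaining attributes:

```python
import numpy as np

def mask_features(X, biased_cols):
    """Neutralise potentially biased columns by replacing each one
    with its column mean; other columns are left untouched."""
    Xm = X.astype(float).copy()
    for c in biased_cols:
        Xm[:, c] = Xm[:, c].mean()
    return Xm
```

Richer variants transform rather than flatten the columns (noise injection, learned encodings), but the constant-fill version shows the mechanism.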


Selection bias correction

Set of techniques identifying and compensating for distortions introduced by non-random sampling processes that systematically favor certain subgroups of the population.


Robust learning against fairness attacks

Training methodology incorporating adversarial examples designed to amplify biases, thereby strengthening the model's resistance against manipulations aimed at degrading its fairness.


Debiasing through counterfactuals

Technique generating counterfactual examples by modifying sensitive attributes to train the model to produce predictions invariant to changes in these protected characteristics.
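The simplest data-level version of this idea, assuming a binary sensitive attribute stored as a feature column (the helper name is illustrative): duplicate every row with the attribute flipped and the label kept, so the model is trained to ignore the flip:

```python
import numpy as np

def counterfactual_augment(X, y, sensitive_col):
    """Append a copy of every row with the binary sensitive attribute
    flipped (0 <-> 1) and the label unchanged."""
    X_cf = X.copy()
    X_cf[:, sensitive_col] = 1 - X_cf[:, sensitive_col]
    return np.vstack([X, X_cf]), np.concatenate([y, y])
```

Real counterfactual debiasing also updates features causally downstream of the sensitive attribute, which requires a causal model rather than a raw flip.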


Distribution balancing through optimal transport

Advanced method using optimal transport theory to transform the data distribution of a minority group to bring it closer to that of the majority group, thereby reducing systemic biases.


Fair regularization by divergence

Training technique adding a penalty term based on divergence measures (KL, JS, Wasserstein) between the prediction distributions of different groups to ensure statistical fairness.
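A minimal sketch of such a penalty using a symmetrised KL divergence between histograms of the two groups' predicted probabilities; function names and the binning scheme are illustrative assumptions:

```python
import numpy as np

def kl_divergence(p, q, eps=1e-12):
    """KL(p || q) between two discrete distributions, with clipping
    to avoid log(0)."""
    p = np.clip(p, eps, None)
    q = np.clip(q, eps, None)
    return float(np.sum(p * np.log(p / q)))

def fairness_penalty(probs, group, bins=10):
    """Symmetrised KL between the histograms of predicted
    probabilities of two groups; scaled by a lambda and added to
    the training loss in a regularised objective."""
    edges = np.linspace(0, 1, bins + 1)
    hists = []
    for g in np.unique(group):
        h, _ = np.histogram(probs[group == g], bins=edges)
        hists.append(h / h.sum())
    p, q = hists
    return 0.5 * (kl_divergence(p, q) + kl_divergence(q, p))
```

Differentiable variants replace the histogram with kernel density estimates or use the Wasserstein distance, so the penalty can be backpropagated through the model.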
