
AI Glossary

The complete dictionary of Artificial Intelligence

162 categories · 2,032 subcategories · 23,060 terms

Fair prediction

Statistical paradigm combining conditional parity and calibration to ensure fair predictions across groups. Fair prediction aims to balance the inherent trade-offs between different fairness criteria in predictive models.


Prediction calibration

Statistical property ensuring that prediction scores accurately reflect true probabilities for all groups. Under perfect calibration, a score of 70% corresponds to a 70% rate of positive outcomes, regardless of the group considered.
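The definition above can be checked empirically by binning prediction scores and comparing the mean score to the observed positive rate within each bin, separately per group. A minimal sketch with NumPy (the function name `calibration_by_group` and the equal-width binning scheme are illustrative assumptions, not a standard API):

```python
import numpy as np

def calibration_by_group(scores, labels, groups, n_bins=10):
    """Per group, list (mean predicted score, observed positive rate)
    for each equal-width score bin that contains samples.
    Well-calibrated scores give pairs that lie close to the diagonal."""
    results = {}
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    for g in np.unique(groups):
        mask = groups == g
        s, y = scores[mask], labels[mask]
        rows = []
        for lo, hi in zip(bins[:-1], bins[1:]):
            in_bin = (s >= lo) & (s < hi)  # note: a score of exactly 1.0 falls outside
            if in_bin.any():
                rows.append((float(s[in_bin].mean()), float(y[in_bin].mean())))
        results[g] = rows
    return results
```

Comparing the per-group curves against each other (rather than against the diagonal alone) reveals whether calibration holds uniformly across groups.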


Algorithmic counterfactual fairness

Fairness approach examining the decisions that would have been made had protected characteristics been different. Counterfactual fairness asks whether an individual would receive an equivalent outcome if only their demographic attributes were changed.
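A crude first approximation of this idea is an attribute-flip consistency check: re-run the model with only the sensitive attribute swapped and count how often the prediction changes. This is a weaker proxy than true counterfactual fairness, which requires a causal model so that the flip propagates to downstream features; the function name and the two-valued attribute are illustrative assumptions:

```python
import numpy as np

def flip_consistency(predict, X, sensitive_col, values=(0, 1)):
    """Fraction of rows whose prediction changes when only the sensitive
    attribute column is swapped between the two given values.
    0.0 means the model is invariant to the naive flip (a necessary,
    not sufficient, condition for counterfactual fairness)."""
    X0 = X.copy()
    X0[:, sensitive_col] = values[0]
    X1 = X.copy()
    X1[:, sensitive_col] = values[1]
    return float(np.mean(predict(X0) != predict(X1)))
```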


False positive rate parity

Fairness criterion requiring all demographic groups to have equivalent type I error (false positive) rates. This metric ensures that no group systematically experiences more false accusations or unjustified rejections.
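The criterion is directly measurable: compute the false positive rate within each group's true negatives and compare. A minimal sketch (function names are illustrative assumptions; binary labels and predictions in {0, 1} are assumed):

```python
import numpy as np

def false_positive_rates(y_true, y_pred, groups):
    """Per-group FPR = FP / (FP + TN), i.e. the fraction of actual
    negatives that the model flags positive, within each group."""
    rates = {}
    for g in np.unique(groups):
        mask = (groups == g) & (y_true == 0)   # actual negatives in group g
        if mask.any():
            rates[g] = float(y_pred[mask].mean())
    return rates

def fpr_parity_gap(y_true, y_pred, groups):
    """Largest pairwise FPR difference across groups; 0 means exact parity."""
    rates = false_positive_rates(y_true, y_pred, groups)
    return max(rates.values()) - min(rates.values())
```

In practice, exact parity (gap of 0) is rarely achievable, so a tolerance threshold on the gap is typically chosen for the application at hand.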


Individual fairness

Ethical principle stating that similar individuals should receive similar treatment, regardless of their group membership. Individual fairness contrasts with group fairness by focusing on specific cases rather than statistical aggregates.


Bias mitigation

Set of mathematical techniques aimed at correcting systematic disparities in data and predictive models. Mitigation includes pre-processing, in-processing, and post-processing methods to achieve algorithmic fairness.
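As an illustration of the pre-processing family, one well-known technique is reweighing in the style of Kamiran and Calders: each sample receives a weight that makes group membership and label statistically independent in the weighted data. A minimal sketch (the function name is an illustrative assumption):

```python
import numpy as np

def reweighing_weights(labels, groups):
    """Weight each sample by P(group) * P(label) / P(group, label), so that
    label and group become independent under the weighted distribution.
    Over-represented (group, label) cells are down-weighted and vice versa."""
    weights = np.empty(len(labels), dtype=float)
    for g in np.unique(groups):
        for y in np.unique(labels):
            mask = (groups == g) & (labels == y)
            if mask.any():
                p_expected = (groups == g).mean() * (labels == y).mean()
                p_observed = mask.mean()
                weights[mask] = p_expected / p_observed
    return weights
```

The resulting weights are then passed to any learner that accepts per-sample weights; when label and group are already independent, every weight comes out as 1.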


Indirect discrimination

Form of algorithmic discrimination where proxy variables illegitimately substitute for explicitly excluded protected characteristics. Indirect discrimination emerges when statistical correlations reproduce inequalities without directly using sensitive attributes.
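A crude first pass for spotting such proxies is to measure how strongly each remaining feature correlates with the sensitive attribute. Linear correlation only catches linear proxies, and the threshold is arbitrary; both are simplifying assumptions, as is the function name:

```python
import numpy as np

def proxy_candidates(X, sensitive, threshold=0.5):
    """Return (column index, correlation) for features whose absolute
    Pearson correlation with the sensitive attribute exceeds the threshold.
    A flagged feature may act as a proxy even when the sensitive attribute
    itself is excluded from the model."""
    flagged = []
    for j in range(X.shape[1]):
        if X[:, j].std() == 0:      # constant column: correlation undefined
            continue
        r = np.corrcoef(X[:, j], sensitive)[0, 1]
        if abs(r) > threshold:
            flagged.append((j, float(r)))
    return flagged
```

Stronger proxy detection trains a model to predict the sensitive attribute from the remaining features; accuracy well above the base rate signals that proxies are present even when no single feature correlates strongly.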
