Fairness Evaluation Metrics with Confidence Intervals for Binary Protected Attributes
Examine Accuracy Parity of a Model
Examine Brier Score Parity of a Model
Examine Conditional Use Accuracy Equality of a Model
Examine Equalized Odds of a Model
Examine Equal Opportunity Compliance of a Model
Examine Balance for the Negative Class of a Model
Examine Negative Predictive Parity of a Model
Examine Balance for the Positive Class of a Model
Examine Positive Predictive Parity of a Model
Examine Predictive Equality of a Model
Examine Statistical Parity of a Model
Examine Treatment Equality of a Model
Compute Fairness Metrics for Binary Classification
A collection of functions for computing fairness metrics for machine learning and statistical models, including confidence intervals for each metric. The package supports the evaluation of group-level fairness criteria commonly used in fairness research, particularly in healthcare, for binary protected attributes. It is based on the overview of fairness in machine learning by Gao et al. (2024) <doi:10.48550/arXiv.2406.09307>.
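For illustration, the base R sketch below computes a few of the group-level quantities that the metrics listed above compare across the two levels of a binary protected attribute, together with a Wald-style confidence interval for the statistical parity difference. The simulated data and helper functions (group_rates(), stat_parity_ci()) are hypothetical and are not part of the package's exported API; consult the documentation topics above for the actual interface.

## Minimal sketch in base R (hypothetical data and helper names; not the
## package's API). It illustrates the group-level quantities underlying
## several of the metrics above and a confidence interval for one of them.

set.seed(42)

# Simulated outcome, prediction, and binary protected attribute
n     <- 1000
group <- sample(c("A", "B"), n, replace = TRUE)
y     <- rbinom(n, 1, 0.4)
pred  <- rbinom(n, 1, ifelse(group == "A", 0.45, 0.35))

# Per-group rates compared by several of the listed metrics:
#   positive prediction rate  -> statistical parity
#   true positive rate (TPR)  -> equal opportunity / equalized odds
#   false positive rate (FPR) -> predictive equality / equalized odds
#   positive predictive value -> positive predictive parity
group_rates <- function(y, pred, group) {
  t(sapply(split(data.frame(y = y, pred = pred), group), function(d) {
    c(ppr = mean(d$pred),
      tpr = mean(d$pred[d$y == 1]),
      fpr = mean(d$pred[d$y == 0]),
      ppv = mean(d$y[d$pred == 1]))
  }))
}
group_rates(y, pred, group)

# Wald-style confidence interval for the statistical parity difference
stat_parity_ci <- function(pred, group, conf_level = 0.95) {
  g    <- sort(unique(group))
  p    <- tapply(pred, group, mean)[g]     # positive prediction rate per group
  nn   <- tapply(pred, group, length)[g]   # group sizes
  diff <- unname(p[1] - p[2])
  se   <- sqrt(sum(p * (1 - p) / nn))
  z    <- qnorm(1 - (1 - conf_level) / 2)
  c(difference = diff, lower = diff - z * se, upper = diff + z * se)
}
stat_parity_ci(pred, group)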