Automated decision-making systems are increasingly used to determine criminal sentences, hiring choices, and the outcomes of loan applications. This widespread deployment is concerning, since these systems have the potential to discriminate against people based on their demographic characteristics: sentencing risk assessments have been shown to be racially biased, and job advertisements to discriminate by gender. These concerns have driven the growth of fairness-aware machine learning, a field that aims to build algorithmic systems that are fair by design. To design fair systems, however, researchers must first agree on what it means to be fair. The authors introduce a framework for understanding the different definitions of fairness and how they relate to one another. The framework shows that definitions of fairness and their implementations correspond to different axiomatic beliefs about the world.
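As an illustrative sketch (not from the paper itself), the tension between fairness definitions can be made concrete: the toy example below applies two widely used definitions, demographic parity (a group notion) and individual fairness (similar individuals should receive similar decisions), to the same hypothetical hiring decisions. The data, the score-based similarity metric, and the tolerance are assumptions for illustration only.

```python
# Illustrative sketch: two common fairness definitions can
# disagree on the same set of decisions. All data below is toy data.

def demographic_parity_gap(decisions, groups):
    """Group fairness: difference in positive-decision rates between groups."""
    rates = {}
    for g in set(groups):
        idx = [i for i, grp in enumerate(groups) if grp == g]
        rates[g] = sum(decisions[i] for i in idx) / len(idx)
    vals = list(rates.values())
    return max(vals) - min(vals)

def individual_fairness_violations(decisions, scores, tol=0.1):
    """Individual fairness: count pairs of similar applicants
    (scores within tol) who received different decisions."""
    n = len(decisions)
    return sum(
        1
        for i in range(n)
        for j in range(i + 1, n)
        if abs(scores[i] - scores[j]) <= tol and decisions[i] != decisions[j]
    )

# Toy data: scores stand in for an assumed task-relevant similarity metric.
scores    = [0.9, 0.85, 0.4, 0.88, 0.35, 0.3]
groups    = ["a", "a", "a", "b", "b", "b"]
decisions = [1, 1, 0, 1, 1, 0]  # 2/3 hired in each group

print(demographic_parity_gap(decisions, groups))         # 0.0: parity satisfied
print(individual_fairness_violations(decisions, scores))  # 2: individual fairness violated
```

Here the decisions are perfectly balanced across groups, yet two pairs of similarly scored applicants receive different outcomes, showing that satisfying one definition does not imply satisfying the other; which definition is appropriate depends on one's underlying beliefs about the world.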