Can Hiring Algorithms Be Impartial Decision-Makers?

AI decision-makers have been promoted as making more equitable hiring decisions than human decision-makers, who are influenced by their own internal biases. However, AI decision-makers are not inherently fairer: they are designed, trained, and implemented by imperfect humans and can perpetuate the very discrimination they were meant to combat. This happens for several reasons. One is biased training data, in which one group is under- or overrepresented, often in a way that reflects existing inequalities in society. Another is that algorithms can, on their own, find correlations between a protected trait, such as sex, and a prospective employee’s predicted success in the sought-after position. Alternatively, an AI decision-maker may rely on a proxy: a trait that is not itself a protected characteristic but that produces the same adverse impact on the group that has one. This paper focuses on AI’s effect on gender discrimination in hiring decisions and on how to avert or rectify AI gender discrimination.
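
To illustrate the proxy-variable mechanism described above, the minimal sketch below uses a hypothetical candidate list and a made-up screening rule (the career-gap feature, the threshold, and all data are illustrative assumptions, not drawn from the paper). It shows how a facially neutral criterion can correlate with gender and how the selection-rate ratio, often checked against the four-fifths rule of thumb used in U.S. disparate-impact analysis, can flag the resulting adverse impact.

```python
# Hypothetical data: a seemingly neutral feature (career gap in years)
# that happens to correlate with gender in this sample.
candidates = [
    # (gender, career_gap_years)
    ("F", 2), ("F", 3), ("F", 0), ("F", 4), ("F", 1),
    ("M", 0), ("M", 1), ("M", 0), ("M", 0), ("M", 2),
]

def screen(candidate):
    """Naive screening rule: reject anyone with a career gap over 1 year.
    The rule never looks at gender, yet it acts as a proxy for it here."""
    _, gap = candidate
    return gap <= 1

def selection_rate(group):
    """Share of candidates in a gender group who pass the screen."""
    total = [c for c in candidates if c[0] == group]
    hired = [c for c in total if screen(c)]
    return len(hired) / len(total)

rate_f = selection_rate("F")
rate_m = selection_rate("M")

# Four-fifths (80%) rule of thumb: a selection-rate ratio below 0.8
# is commonly treated as a signal of potential adverse impact.
impact_ratio = rate_f / rate_m
print(f"Selection rate (women): {rate_f:.2f}")
print(f"Selection rate (men):   {rate_m:.2f}")
print(f"Impact ratio: {impact_ratio:.2f} -> {'flag' if impact_ratio < 0.8 else 'ok'}")
```

In this toy sample the women’s selection rate is half the men’s, so the ratio falls below 0.8 and the screen would be flagged for review even though gender is never an input, which is exactly the kind of proxy discrimination the paper addresses.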

Focus: Bias
Source: Seton Hall University
Readability: Expert
Type: PDF Article
Open Source: No
Keywords: N/A
Learn Tags: AI and Machine Learning, Bias, Employment, Data Collection/Data Set
Summary: From data set design to audit processes, this paper offers suggestions to mitigate gender discrimination in AI hiring. It also examines the legal avenues for pursuing redress for discriminatory hiring decisions.