Algorithmic Bias in Current Hiring Practices: An Ethical Examination

This paper explores the ethical consequences of using machine learning algorithms in hiring decisions, focusing on the risk of discriminating against groups of people on the basis of unjust criteria. The first section describes the automated processes involved in current hiring practices and three possible sources of unjust discrimination: (i) the defined outputs of the algorithms involved; (ii) the way managers interpret the predicted work performance; (iii) the possibility that statistical correlations are biased against certain groups of people, precluding the evaluation of individuals on their own work performance. The second section compares traditional cases of discrimination with this new kind of algorithmic discrimination and proposes three solutions for mitigating the risk of discrimination in automated hiring practices: transparency, careful testing for biases that may have become ingrained in the software used in the hiring process, and ensuring that the final decision is made by a human rather than a machine.
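
As a concrete illustration of the "careful testing for biases" the paper recommends, the sketch below runs a simple disparate-impact audit over an algorithm's hiring recommendations, flagging groups whose selection rate falls below four-fifths of the highest group's rate (the EEOC "four-fifths rule"). This is a minimal sketch under stated assumptions, not the paper's own method: the group labels, audit data, and function names are hypothetical.

```python
# Minimal disparate-impact audit sketch (assumptions: hypothetical
# audit data; the EEOC four-fifths rule as the flagging threshold).
from collections import defaultdict

def selection_rates(decisions):
    """decisions: list of (group_label, was_recommended) pairs."""
    totals, selected = defaultdict(int), defaultdict(int)
    for group, recommended in decisions:
        totals[group] += 1
        if recommended:
            selected[group] += 1
    return {g: selected[g] / totals[g] for g in totals}

def adverse_impact_ratios(rates):
    """Ratio of each group's selection rate to the highest rate."""
    best = max(rates.values())
    return {g: r / best for g, r in rates.items()}

# Hypothetical audit data: (group, recommended by the algorithm?)
decisions = [("A", True), ("A", True), ("A", False), ("A", True),
             ("B", True), ("B", False), ("B", False), ("B", False)]

rates = selection_rates(decisions)
for group, ratio in adverse_impact_ratios(rates).items():
    flag = "FLAG" if ratio < 0.8 else "ok"  # four-fifths rule
    print(f"group {group}: rate={rates[group]:.2f} ratio={ratio:.2f} {flag}")
```

Here group B's selection rate (0.25) is one third of group A's (0.75), so B is flagged; a check like this is one way an auditor could surface the biased statistical correlations the first section describes.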

Focus: AI Ethics/Policy
Source: International Management Conference 2019
Readability: Expert
Type: PDF Article
Open Source: No
Keywords: N/A
Learn Tags: Bias, AI and Machine Learning, Employment, Ethics
Summary: This paper examines the ethical issues surrounding automated recruitment practices and the risks of algorithmic bias they involve. The authors argue that these risks should be taken seriously, but that they are not inevitable and can be mitigated through effective auditing and transparency.