AI Recruitment Algorithms and the Dehumanization Problem
According to a recent survey by the HR Research Institute, as the presence of artificial intelligence (AI) becomes increasingly common in the workplace, HR professionals worry that the use of recruitment algorithms will lead to a “dehumanization” of the hiring process. Our main goals in this paper are threefold: (i) to bring attention to this neglected issue, (ii) to clarify what exactly this concern about dehumanization might amount to, and (iii) to sketch an argument for why dehumanizing the hiring process is ethically suspect. After distinguishing the use of the term “dehumanization” in this context (i.e., removing the human presence) from its more common meaning in the interdisciplinary field of dehumanization studies (i.e., conceiving of other humans as subhuman), we argue that the use of hiring algorithms may negatively impact the employee-employer relationship. There are, we contend, good independent reasons to accept a substantive employee-employer relationship, as well as an applicant-employer relationship, both of which are consistent with a stakeholder theory of corporate obligations. We further argue that dehumanizing the hiring process may damage these relationships because of the difference between the values of human recruiters and the values embedded in recruitment algorithms. Drawing on Nguyen’s (in: Lackey (ed.), Applied Epistemology, Oxford University Press, 2021) critique of how Twitter “gamifies communication”, we argue that replacing human recruiters with algorithms imports artificial values into the hiring process. We close by briefly considering some ways to mitigate the problems posed by recruitment algorithms, along with the possibility that difficult trade-offs will need to be made.