A Survey on Bias and Fairness in Machine Learning
With the widespread use of AI systems and applications in our everyday lives, it is important to take fairness issues into consideration while designing and engineering these types of systems. Such systems can be used in many sensitive environments to make important and life-changing decisions; thus, it is crucial to ensure that the decisions do not reflect discriminatory behavior toward certain groups or populations. We have recently seen work in machine learning, natural language processing, and deep learning that addresses such challenges in different subdomains. With the commercialization of these systems, researchers are becoming aware of the biases that these applications can contain and have attempted to address them. In this survey we investigated different real-world applications that have shown biases in various ways, and we listed different sources of biases that can affect AI applications. We then created a taxonomy for fairness definitions that machine learning researchers have defined in order to avoid the existing bias in AI systems. In addition, we examined different domains and subdomains in AI, showing what researchers have observed with regard to unfair outcomes in state-of-the-art methods and how they have tried to address them. There are still many future directions and solutions that can be taken to mitigate the problem of bias in AI systems. We are hoping that this survey will motivate researchers to tackle these issues in the near future by observing existing work in their respective fields.