Decoupled Classifiers for Group-Fair and Efficient Machine Learning
When it is ethical and legal to use a sensitive attribute (such as gender or race) in machine learning systems, the question remains how to do so. We show that the naive application of machine learning algorithms using sensitive attributes leads to an inherent trade-off in accuracy between groups. We provide a simple and efficient decoupling technique, which can be added on top of any black-box machine learning algorithm, to learn different classifiers for different groups. Transfer learning is used to mitigate the problem of having too little data on any one group.
Focus: Methods or Design
Source: FAT 2018
Readability: Expert
Type: Website Article
Open Source: No
External URL: https://arxiv.org/abs/1707.06613
Keywords: N/A
Learn Tags: Bias, Data Tools, Design/Methods, Ethics, Fairness, Framework
Summary: A decoupling technique for mitigating bias and unfairness that can be added on top of any black-box machine learning algorithm to learn a different classifier for each group.
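Since the entry describes the core mechanism only at a high level, here is a minimal sketch of the decoupling step, assuming scikit-learn-style estimators. The `DecoupledClassifier` name, the `groups` argument, and the toy data are illustrative assumptions rather than the authors' reference implementation; the paper's joint selection of per-group classifiers and its transfer-learning step for small groups are omitted.

```python
# Minimal sketch of decoupled classification: one black-box model per group.
# Names and data are hypothetical; this is not the paper's reference code.
import numpy as np
from sklearn.base import clone
from sklearn.linear_model import LogisticRegression


class DecoupledClassifier:
    """Fits one copy of a black-box base estimator per sensitive group."""

    def __init__(self, base_estimator=None):
        self.base_estimator = base_estimator or LogisticRegression()
        self.models_ = {}

    def fit(self, X, y, groups):
        # Train an independent classifier on each group's data.
        for g in np.unique(groups):
            mask = groups == g
            self.models_[g] = clone(self.base_estimator).fit(X[mask], y[mask])
        return self

    def predict(self, X, groups):
        # Route each example to the classifier trained on its own group.
        preds = np.empty(len(X), dtype=int)
        for g, model in self.models_.items():
            mask = groups == g
            if mask.any():
                preds[mask] = model.predict(X[mask])
        return preds


# Toy usage: two groups whose labels depend on different features, so a
# single shared classifier would trade accuracy between them.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
groups = rng.integers(0, 2, size=200)
y = (X[:, 0] > 0).astype(int)
y[groups == 1] = (X[groups == 1, 1] > 0).astype(int)

clf = DecoupledClassifier().fit(X, y, groups)
print(clf.predict(X[:5], groups[:5]))
```

Because each group gets its own copy of the base estimator, the sketch works with any black-box learner that follows the fit/predict interface, which mirrors the paper's claim that decoupling can be layered on top of an arbitrary algorithm.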