How to prevent discriminatory outcomes in machine learning
Machine learning applications are already being used to make life-changing decisions, such as who qualifies for a loan and whether someone is released from prison. A new model is needed to govern how those developing and deploying machine learning address the human rights implications of their products. This paper offers comprehensive recommendations on ways to integrate principles of non-discrimination and empathy into machine learning systems. This white paper was written as part of the ongoing work of the Global Future Council on Human Rights, a group of leading academic, civil society and industry experts providing thought leadership on the most critical issues shaping the future of human rights.