Project Insights Report
An Equity Lens on Artificial Intelligence
Executive Summary
Artificial intelligence (AI) is altering the way we work, with significant implications for the skills development ecosystem. As AI becomes increasingly embedded in people’s lives, questions are arising about its impact on equity and equality.
On the one hand, given that AI mimics, to some degree, the way the human brain works, there is a risk that it will reproduce existing human biases, thereby reinforcing underlying discrimination and inequities.
On the other hand, as AI uses statistical prediction methods that can be audited, it has the potential to reduce human bias, thereby benefitting marginalized groups.
Research from this Future Skills Centre-funded project points to ways to mitigate these risks and to ensure that AI’s role in decision-making processes reduces inequity and inequality, notably by (1) focusing on initiatives for equitable representation in AI; (2) prioritizing the alignment of AI with social values such as fairness; (3) creating policies for AI that prioritize accountability and transparency; and (4) ensuring re-skilling and/or upskilling programs address the distributional impacts of AI.
Key Insights
AI has the potential to help address inequity and inequality where human decisions may be clouded by inherent biases.
The development and deployment of AI in skills development ecosystems must be done in a manner that prioritizes equity, or it risks exacerbating existing inequalities.
Upskilling and re-skilling policies are needed to support marginalized workers disproportionately impacted by job loss and changing skill requirements as a result of AI adoption.
The Issue
AI is already pervasive in our society and promises to transform many aspects of work and skills development.
AI has many potential benefits. Ideally, it reduces the impact of human error by making accurate predictions and assisting humans with decision-making. For example, in workplaces, AI can reduce the role of bias in hiring; used in health care, it can help diagnose diseases, such as dementia, and identify treatments; for financial institutions, it can predict the likelihood of people defaulting on mortgages; and for governments, it can support the assessment of refugee cases, leading to more just outcomes for claimants.
Conversely, technology and AI systems are not neutral or objective but exist in a social and historical context that can marginalize certain groups, including women and racialized and low-income communities. This may occur through biases or gaps in the data used to train algorithms, as well as through the implementation of AI-powered products and services that reinforce stereotypes, marginalization, and global power relations.
Strategies to contain or mitigate the negative effects of AI are urgently needed to ensure that societies will be able to reap the benefits of technological change.

What We Investigated
This paper explores the reasons behind AI’s impact on equity and equality (the ‘why’) and the ways in which AI influences these issues in practice (the ‘how’), offering insights for leaders, policymakers, and students on navigating current and future AI developments.
In particular, it investigates how societal values and biases are themselves embedded in the algorithms that power AI and examines the potential disproportionate impact of AI on women and other marginalized communities in the labour market.
At the same time, it explores how AI can help to overcome human bias and the steps that are needed to ensure that the deployment of AI leads to an improvement in equity and equality.
What We’re Learning
AI may be able to address inequity and inequality. AI has the potential to improve outcomes for people across all sectors. Ideally, it will reduce the impact of human error by making accurate predictions and assisting humans with decision-making.
AI may also inequitably impact already-marginalized people. Technology and AI systems are not neutral or objective and, as such, there are many examples of the ways in which AI systems can reproduce existing biases and marginalization. Indeed, the data on which some AI is built contains such biases.
Training AI to adhere to social values is complex. It will be important to embed fairness and equity in algorithms. This will not be easy, as it is unclear how algorithms can be trained to adhere to social norms and values, many of which involve intricate structures, such as law or culture.
Accountability needs to be built into the system. We need to improve transparency around the scope of AI’s impacts, as well as who is responsible for creating and mitigating those impacts, e.g., by introducing more assessments and audits of what AI-driven products and services mean for people, including evaluations of how fair they are.
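The report does not prescribe a particular audit method. As one hypothetical illustration of the kind of fairness evaluation such an audit might include, the sketch below compares selection rates across demographic groups for an AI-assisted screening tool (a simple demographic-parity check). The group labels, decisions, and the 0.8 rule-of-thumb threshold are illustrative assumptions, not findings or methods from this project.

```python
# Illustrative only: a minimal demographic-parity check for a hypothetical
# AI-assisted screening tool. Groups, decisions, and the 0.8 threshold are
# assumptions for demonstration, not data or methods from the FSC project.
from collections import defaultdict

def selection_rates(records):
    """Return the share of positive decisions for each demographic group."""
    totals, positives = defaultdict(int), defaultdict(int)
    for group, selected in records:
        totals[group] += 1
        positives[group] += int(selected)
    return {g: positives[g] / totals[g] for g in totals}

def disparate_impact_ratio(rates):
    """Ratio of the lowest to the highest group selection rate.
    Values well below 1.0 (e.g., under the common 0.8 rule of thumb)
    flag a potential equity concern that warrants deeper investigation."""
    return min(rates.values()) / max(rates.values())

if __name__ == "__main__":
    # (group label, was the applicant shortlisted by the tool?)
    decisions = [
        ("group_a", True), ("group_a", True), ("group_a", False),
        ("group_b", True), ("group_b", False), ("group_b", False),
    ]
    rates = selection_rates(decisions)
    print("Selection rates:", rates)
    print("Disparate impact ratio:", round(disparate_impact_ratio(rates), 2))
```

In practice, a check like this would be only one small component of a broader assessment covering data provenance, error rates across groups, and downstream impacts on people affected by the tool.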
Training and re-employment will be needed to make up for inequitable job losses. Training and re-employment programs need to address the inequitable effects that AI and automation will have on employment and the nature of work. For instance, some studies have suggested that women and racialized groups will be relatively more impacted due to their concentration in specific jobs and industries.
Why It Matters
AI is pervasive and used by organizations across all sectors of our society for a variety of purposes. Its use extends to labour market matching, hiring employees, performing surgeries, tutoring various subjects in schools, making decisions about criminal sentencing, automating driving and predicting where crime will occur.
In fact, it may be difficult to find a sector or field today where AI is not involved in some respect. In the context of the world of work, AI is being used as a tool for skills development, but it’s also, like past technologies, significantly altering the skills and employment landscape. An important consideration in this regard is the impact of AI on equity and equality. Social relations and values are reflected and reproduced in technology, and AI is no exception. This means that enduring biases, discrimination and inequalities that are deeply rooted in society may also be deeply rooted in this technology.

State of Skills: Unleashing AI into the Skills Development Ecosystem
FSC-supported AI tools have bolstered outcomes in skills matching, career development guidance, and recruitment. The overall effectiveness of these tools was underpinned by recognizing and mitigating the bias and discrimination embedded in these technologies.
However, AI also holds the promise of improving societal outcomes by addressing the inherent bias of human decisions. For this to hold, the deployment and adoption of AI must be done with an equity-first mindset.
More from FSC
Understanding the relationship between quality of work and remote work support and monitoring
Job Posting Trends in Canada: 2021 Update
Supporting Mid-Career Workers with Disabilities
Have questions about our work? Do you need access to a report in English or French? Please contact communications@fsc-ccf.ca.
How to Cite This Report
Young, S. and McDonough, L. (2024). Project Insights Report: An equity lens on artificial intelligence, University of Toronto. Toronto: Future Skills Centre. https://fsc-ccf.ca/research/equity-lens-on-ai/
An Equity Lens on Artificial Intelligence is funded by the Government of Canada’s Future Skills Program. The opinions and interpretations in this publication are those of the author and do not necessarily reflect those of the Government of Canada.