Artificial intelligence (AI) is changing the world we live in: by 2030 it is predicted to be the second-biggest disruption, after the digitally connected world. This impact is only amplified by its combination with other technological advances, including big data and the increasing adoption of automation and robotics.
The reason for its exponential uptake and predicted disruption lies in part in the enormous benefits, such as efficiency gains, improved decision making, and new insights, that it promises to bring to society and the global economy. It has already become common in our day-to-day lives, in areas such as healthcare, recruitment, economic modelling, and access to education.
Although the insights AI generates for decision making already provide us countless benefits, we must recognize that the algorithms used, like the humans who created them, are not flawless in providing consistently perfect answers. Because humans are innately biased in their decision making, routinely making consciously and unconsciously biased choices, their AI creations suffer the same fate. So while these algorithms advance us by leaps and bounds in tackling some of the world's most pressing issues, they can also breed mistrust and institutionalize discrimination on mass scales, with limited transparency or accountability, reducing people's opportunities to participate equally in economies and societies.
It is our responsibility to ensure our creations do not further discriminate and fuel the disparities that already exist.
EY is committed to building a better working world. Our mission is to provide practical, step-by-step assistance on this path. In our ambition to confront bias and the issues it can create, we aim to raise awareness of the topic and give people and companies practical advice on how to mitigate the impact of flawed algorithms on decision-making.
We are collaborating with the Healthcare Businesswomen's Association (HBA) to jointly host three virtual sessions on the HBA platform, exploring bias in AI in more detail.
The first session was held on 25 November 2020 and covered the impact of bias and its causes, based on real-life cases. One case study concerned gender bias in candidate selection, where CVs were read, reviewed, judged, and ultimately approved or rejected by an algorithm. Although the algorithm did not explicitly use gender as a factor when comparing CVs, most of the successful CVs it had been trained on contained male-specific terms, so it tended to prioritize these over female-specific terms, gender-neutral terms, and terms referring to other genders. The lack of data for all genders led to an imbalance in classes, and therefore to male-specific terms being favoured in CVs. While this may seem easy to fix, the demographic differences make discussions around potential bias difficult. Such complexities and potential solutions were discussed for each case.
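The mechanism described above can be illustrated with a deliberately simplified sketch. The resumes, terms, and scoring rule below are entirely hypothetical (they are not the system from the case study): a model that merely learns term frequencies from an imbalanced set of "successful" CVs ends up rewarding male-coded terms, even though gender is never an explicit input.

```python
from collections import Counter

# Hypothetical toy training data: resumes previously labelled "successful".
# The set is imbalanced: three male-coded resumes, one female-coded resume.
successful_resumes = [
    ["captain", "chess", "python", "leadership"],
    ["captain", "python", "executed", "leadership"],
    ["python", "chess", "captain", "managed"],
    ["python", "softball", "women", "leadership"],
]

# "Train" by counting how often each term appears in successful resumes.
term_scores = Counter(term for cv in successful_resumes for cv_term in [term for term in cv] for term in [cv_term])

def score(resume):
    """Score a new resume by summing the learned term frequencies."""
    return sum(term_scores.get(term, 0) for term in resume)

# Two equally qualified candidates, differing only in gender-coded terms.
male_coded = ["python", "leadership", "captain", "chess"]
female_coded = ["python", "leadership", "softball", "women"]

# The male-coded resume outscores the female-coded one purely because the
# training data contained more male-coded examples.
print(score(male_coded), score(female_coded))  # → 12 9
```

No gender field appears anywhere, yet the imbalance in the training classes is enough to produce a systematically skewed ranking, which is exactly why such bias is hard to spot by inspecting the model's inputs alone.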
The second session was held on 10 February 2021 and took a closer look at how to detect bias in AI, with theories and explanations on how to correct it. On 17 March, we will hold our third and final session, focusing on how to prevent bias in AI in the first place, stopping discriminatory effects before they can be realized.
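One common starting point for detecting bias of this kind, sketched below with hypothetical numbers, is to compare selection rates across demographic groups. The ratio of those rates is often checked against the "four-fifths rule" used in US employment-discrimination guidance, under which a ratio below roughly 0.8 is treated as a red flag.

```python
def selection_rate(decisions):
    """Fraction of candidates selected (1 = selected, 0 = rejected)."""
    return sum(decisions) / len(decisions)

def disparate_impact(group_a, group_b):
    """Ratio of the two groups' selection rates; values below ~0.8
    commonly warrant further investigation (the four-fifths rule)."""
    return selection_rate(group_a) / selection_rate(group_b)

# Hypothetical hiring decisions for two groups of ten candidates each.
women = [1, 0, 0, 0, 1, 0, 0, 0, 0, 0]  # 20% selected
men = [1, 1, 0, 1, 0, 1, 0, 1, 0, 0]    # 50% selected

print(round(disparate_impact(women, men), 2))  # → 0.4
```

A ratio of 0.4 is well below the 0.8 threshold, signalling that the algorithm's outcomes should be examined, though a single metric like this can flag a disparity without explaining its cause.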
AI will change the world. It has been heralded for its ability to eliminate cognitive bias and, equally, critiqued for its power to amplify it on unprecedented scales. Only together can we do the right thing.