Written by Ann Cairns, Vice Chairman, Mastercard, for the World Economic Forum
[Image: Biased algorithms risk betraying young women as they enter the job market. Credit: REUTERS/Stringer]
Artificial Intelligence is either a silver bullet for every problem on the planet, or the guaranteed cause of the apocalypse, depending on whom you speak to.
The reality is likely to be far more mundane. AI is a tool, and like many technological breakthroughs before it, it will be used for good and for bad. But focusing on extreme scenarios doesn’t help with our present reality. AI is increasingly being used to influence the products we buy and the music and films we enjoy; to protect our money; and, controversially, to make hiring decisions and assess criminal behaviour.
The major problem with AI is what’s known as ‘garbage in, garbage out’. We feed algorithms data that encodes existing biases, which then become self-fulfilling. In recruitment, a firm that has historically hired male candidates will find that its AI rejects female candidates, because they don’t fit the mould of past successful applicants. In crime or recidivism prediction, algorithms pick up on historical and societal biases and propagate them further.
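To make that mechanism concrete, here is a minimal, hypothetical sketch in Python: a model trained on synthetic hiring records in which past recruiters favoured men ends up scoring an equally skilled woman lower. Every feature, label and number here is an invented assumption for illustration; no real recruitment system is this simple.

```python
# Hypothetical sketch of 'garbage in, garbage out' in hiring.
# All data is synthetic and the features are invented assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000
skill = rng.normal(0, 1, n)        # candidate skill score
gender = rng.integers(0, 2, n)     # 1 = male, 0 = female (toy encoding)

# Biased historical labels: past recruiters favoured male candidates.
hired = ((skill + 1.5 * gender + rng.normal(0, 0.5, n)) > 1.0).astype(int)

model = LogisticRegression().fit(np.column_stack([skill, gender]), hired)

# Two candidates with identical skill, differing only in gender:
probs = model.predict_proba([[1.0, 1], [1.0, 0]])[:, 1]
print(f"P(hire | male)   = {probs[0]:.2f}")
print(f"P(hire | female) = {probs[1]:.2f}")  # lower, despite equal skill
```

The model is never told to discriminate; it simply learns that gender correlated with past hiring outcomes and reproduces the pattern.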
In some ways, it’s a chicken-and-egg problem. The Western world has been digitized for longer, so there are more records for AIs to parse. In addition, women have been under-represented in many walks of life, so there is less data about them, and what data exists is often of lower quality. If we can’t feed AIs quality data that is free of bias, they will learn and perpetuate the very prejudices we seek to eliminate. Often the largest available datasets are also simply of such low quality that the results are unpredictable and unexpected, such as the racist chatbots that have emerged on Twitter.
None of this is to say that AI is inherently bad. It has the potential to be a better decision-making tool than people: AI can’t be bribed, cheated or socially engineered, and it doesn’t get tired, hungry or angry. So what’s the answer?
In the short term, we need to develop standards and tests for AI that enable us to identify bias and work against it. These need to be independently agreed upon and rigorously validated, because understanding what’s happening behind the scenes in a machine-learning model is notoriously difficult. A simple example of what such a test might look like is sketched below.
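As one hedged illustration of what such testing could involve, this sketch implements the ‘four-fifths rule’, a long-standing disparate-impact check from US employment guidelines: compare selection rates across groups and flag the model if the ratio falls below 0.8. The threshold, group labels and data here are illustrative assumptions, not a prescribed standard.

```python
# Hedged sketch of one possible bias test: the 'four-fifths' disparate
# impact ratio. Threshold, group labels and data are illustrative
# assumptions, not a definitive testing standard.
import numpy as np

def disparate_impact_ratio(selected, group):
    """Ratio of the lower group selection rate to the higher one."""
    selected, group = np.asarray(selected), np.asarray(group)
    rate_1 = selected[group == 1].mean()   # e.g. male candidates
    rate_0 = selected[group == 0].mean()   # e.g. female candidates
    return min(rate_0, rate_1) / max(rate_0, rate_1)

# Toy model output: 60% of group 1 selected, but only 20% of group 0.
selected = np.array([1, 1, 1, 0, 0, 1, 0, 0, 0, 0])
group    = np.array([1, 1, 1, 1, 1, 0, 0, 0, 0, 0])

ratio = disparate_impact_ratio(selected, group)
print(f"Disparate impact ratio: {ratio:.2f}")
if ratio < 0.8:  # conventional four-fifths threshold
    print("Potential adverse impact: flag the model for review.")
```

A check like this is deliberately crude: it says nothing about why the rates differ, only that the disparity is large enough to warrant independent review.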