Coded Bias: a film exploring AI

Have you seen Coded Bias, a new film on Netflix? If you are at all interested in artificial intelligence (AI), I recommend you find a way to watch it, even if you don’t have a Netflix subscription. It takes a deep dive into the state of artificial intelligence, and as Aparna Dhinakaran writes at Forbes, “the issues it confronts are uncomfortably relevant.” She is referring to facial recognition programmes that show a severe algorithmic bias against women and people of colour.

MIT Media Lab researcher Joy Buolamwini says in the film, “The systems weren’t as familiar with faces like mine” as they were with the faces of men and lighter-skinned people, and her research found that facial recognition services from Microsoft, IBM, and Amazon all share the same pattern of inaccuracies.

The Netflix film examines how these inaccuracies can spread discrimination across every sphere of daily life. It also touches on other areas of AI use where bias occurs, such as “who gets hired, what kind of medical treatment someone receives, who goes to college, the financial credit we get, and the length of a prison term someone serves.” This is the opposite of what AI should do, which is to enhance opportunities and advancement for all people, particularly those most vulnerable to discrimination.

Coded Bias tries to show us how these issues with AI can be addressed, with the onus on examining how AI leverages data and how it is developed, deployed, and used. The film’s director, Shalini Kantayya, said, “This is a moment where we are all in a lot of pain. This moment is asking us to drop into a deeper place in our humanity to lead.” She also shines a light on why better AI solutions should focus on protecting the communities that stand to be harmed the most.

One way forward is to look at innovations in AI and machine learning (ML) that will change how AI models are observed and measured, so that they can be shown to be fair, ethical, and free of bias. There is also a need to deliver AI systems with better tools for accountability.
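To make that idea of “measuring” a model a little more concrete, here is a minimal sketch of one common kind of audit: comparing a classifier’s error rate across demographic groups. The function name, the group labels, and the sample data below are all illustrative assumptions of mine, not anything taken from the film or from Buolamwini’s research.

```python
# A small, self-contained sketch of a fairness audit: compute a model's
# error rate per demographic group and report the gap between groups.
from collections import defaultdict


def error_rates_by_group(records):
    """Return the error rate of a model's predictions for each group.

    `records` is an iterable of (group, true_label, predicted_label) tuples.
    """
    errors = defaultdict(int)
    totals = defaultdict(int)
    for group, truth, prediction in records:
        totals[group] += 1
        if prediction != truth:
            errors[group] += 1
    return {group: errors[group] / totals[group] for group in totals}


if __name__ == "__main__":
    # Hypothetical audit results for a face-classification model.
    sample = [
        ("lighter-skinned male", 1, 1), ("lighter-skinned male", 0, 0),
        ("lighter-skinned male", 1, 1), ("lighter-skinned male", 0, 0),
        ("darker-skinned female", 1, 0), ("darker-skinned female", 0, 0),
        ("darker-skinned female", 1, 1), ("darker-skinned female", 1, 0),
    ]
    rates = error_rates_by_group(sample)
    for group, rate in rates.items():
        print(f"{group}: {rate:.0%} error rate")

    # A large gap between groups is exactly the kind of disparity an audit
    # like this is meant to surface before a system is deployed.
    gap = max(rates.values()) - min(rates.values())
    print(f"Error-rate gap between groups: {gap:.0%}")
```

Real auditing toolkits go much further than this, but the underlying idea is the same: break a model’s performance down by group instead of reporting a single overall accuracy figure.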

We live in a time when socio-economic inequities based on ethnicity are in the spotlight, so we need AI that makes it easier for marginalized populations to benefit from improved technology, rather than technology that pushes them further into the margins. When that happens, we will all experience a better world.
