27 September 2023

Bias beware: Will AI machines learn to be as biased as humans?


Glen Ford* says that data scientists need to overcome four human-caused biases before they can build effective machine learning models.


Photo: Franck V.

Bias is an overloaded word.

It has multiple meanings, from mathematics to sewing to machine learning, and as a result it’s easily misinterpreted.

When people say an AI model is biased, they usually mean that the model is performing badly.

But ironically, poor model performance is often caused by various kinds of actual bias in the data or algorithm.

Machine learning algorithms do precisely what they are taught to do and are only as good as their mathematical construction and the data they are trained on.

Algorithms that are biased will end up doing things that reflect that bias.

To the extent that we humans build algorithms and train them, human-sourced bias will inevitably creep into AI models.

Fortunately, bias, in every sense of the word as it relates to machine learning, is well understood.

It can be detected and it can be mitigated — but we need to be on our toes.

There are four distinct types of machine learning bias that we need to be aware of and guard against.

  1. Sample bias

Sample bias is a problem with training data.

It occurs when the data used to train your model does not accurately represent the environment that the model will operate in.

There is virtually no situation where an algorithm can be trained on the entire universe of data it could interact with.

But there’s a science to choosing a subset of that universe that is both large enough and representative enough to mitigate sample bias.

This science is well understood by social scientists, but not all data scientists are trained in sampling techniques.

We can use an obvious but illustrative example involving autonomous vehicles.

If your goal is to train an algorithm to autonomously operate cars during the day and night, but you train it only on daytime data, you’ve introduced sample bias into your model.

Training the algorithm on both daytime and night-time data would eliminate this source of sample bias.
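To make that concrete, here is a minimal sketch in Python of how a team might compare the make-up of its training data against the conditions the model will face. The column name, labels and target proportions are made up for illustration, not taken from any particular project.

    import pandas as pd

    # Hypothetical training metadata: one row per image, with a lighting label.
    train = pd.DataFrame({
        "image_id": range(6),
        "lighting": ["day", "day", "day", "day", "night", "day"],
    })

    # Assumed mix of conditions the deployed model will actually encounter.
    target = {"day": 0.6, "night": 0.4}

    observed = train["lighting"].value_counts(normalize=True)
    for condition, expected in target.items():
        actual = observed.get(condition, 0.0)
        print(f"{condition}: expected {expected:.0%}, observed {actual:.0%}")
        # A large gap between expected and observed is a warning sign of sample bias.

A simple audit like this, run before training, is often enough to catch the daytime-only problem described above.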

  2. Prejudice bias

Prejudice bias is a result of training data that is influenced by cultural or other stereotypes.

For instance, imagine a computer vision algorithm that is being trained to understand people at work.

The algorithm is exposed to thousands of training data images, many of which show men writing code and women in the kitchen.

The algorithm is likely to learn that coders are men and homemakers are women.

This is prejudice bias, because women obviously can code and men can cook.

The issue here is that training data decisions consciously or unconsciously reflected social stereotypes.

This could have been avoided by ignoring the statistical relationship between gender and occupation and exposing the algorithm to a more even-handed distribution of examples.
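One common way to get that more even-handed distribution is to rebalance the training set so that every combination of attributes contributes equally. The sketch below, with invented column names and a toy dataset, shows one way this might look in Python.

    import pandas as pd

    # Hypothetical image metadata reflecting a stereotyped collection.
    images = pd.DataFrame({
        "image_id": range(8),
        "subject_gender": ["m", "m", "m", "m", "m", "f", "f", "f"],
        "activity": ["coding", "coding", "coding", "coding",
                     "cooking", "cooking", "cooking", "coding"],
    })

    # Down-sample so every gender/activity combination contributes equally,
    # breaking the spurious link between gender and occupation.
    min_count = images.groupby(["subject_gender", "activity"]).size().min()
    balanced = images.groupby(["subject_gender", "activity"]).sample(
        n=min_count, random_state=0
    )
    print(balanced[["subject_gender", "activity"]].value_counts())

Rebalancing is only one tactic, and deciding which attributes to balance on is itself a human judgement.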

Decisions like these obviously require a sensitivity to stereotypes and prejudice.

It’s up to humans to anticipate the behaviour the model is supposed to express.

Mathematics can’t overcome prejudice.

And the humans who label and annotate training data may have to be trained to avoid introducing their own societal prejudices or stereotypes into the training data.

  3. Measurement bias

Measurement bias, sometimes called systematic value distortion, happens when there's an issue with the device used to observe or measure.

This kind of bias tends to skew the data in a particular direction.

As an example, shooting training data images with a camera with a chromatic filter would identically distort the colour in every image.

The algorithm would be trained on image data that systematically failed to represent the environment it will operate in.

This kind of bias can’t be avoided simply by collecting more data.

It’s best avoided by having multiple measuring devices, and humans who are trained to compare the output of these devices.
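As a rough illustration of what that comparison might look like, the Python sketch below simulates image batches from two cameras (the data and the colour cast are invented) and compares their average colour channels. A consistent gap in one channel points to a systematic distortion in one device rather than natural variation in the scenes.

    import numpy as np

    # Hypothetical image batches from two cameras, shape (num_images, H, W, 3).
    rng = np.random.default_rng(0)
    camera_a = rng.uniform(0, 255, size=(100, 32, 32, 3))
    camera_b = camera_a * np.array([1.0, 1.0, 0.8])  # simulate a colour cast on one device

    # Average each colour channel across all images from each device.
    mean_a = camera_a.mean(axis=(0, 1, 2))
    mean_b = camera_b.mean(axis=(0, 1, 2))
    print("camera A channel means:", mean_a.round(1))
    print("camera B channel means:", mean_b.round(1))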

  4. Algorithm bias

This final type of bias has nothing to do with data.

In fact, this type of bias is a reminder that “bias” is overloaded.

In machine learning, bias is a mathematical property of an algorithm.

The counterpart to bias in this context is variance.

Models with high variance fit the training data closely and can capture complexity, but they are sensitive to noise.

On the other hand, models with high bias are more rigid, less sensitive to variations in data and noise, and prone to missing complexities.

Importantly, data scientists are trained to arrive at an appropriate balance between these two properties.
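For readers who want to see the trade-off rather than take it on faith, here is a small self-contained Python sketch using toy data. Fitting polynomials of different degrees to noisy points shows a low-degree model underfitting (high bias) and a high-degree model overfitting (high variance), with a middling degree usually striking the balance.

    import numpy as np

    rng = np.random.default_rng(0)

    # Toy data: a smooth curve plus noise, split into training and validation sets.
    x = np.linspace(0, 1, 60)
    y = np.sin(2 * np.pi * x) + rng.normal(scale=0.3, size=x.shape)
    x_train, y_train = x[::2], y[::2]
    x_val, y_val = x[1::2], y[1::2]

    for degree in (1, 4, 10):
        coeffs = np.polyfit(x_train, y_train, degree)
        train_err = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
        val_err = np.mean((np.polyval(coeffs, x_val) - y_val) ** 2)
        print(f"degree {degree:2d}: train error {train_err:.3f}, validation error {val_err:.3f}")
    # Degree 1 underfits (high bias); degree 10 tends to chase noise (high variance).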

Data scientists who understand all four types of AI bias will produce better models and better training data.

AI algorithms are built by humans; training data is assembled, cleaned, labelled and annotated by humans.

Data scientists need to be acutely aware of these biases and how to avoid them: through a consistent, iterative approach, by continuously testing the model, and by bringing in well-trained humans to assist.

* Glen Ford is Director of Product Management at Alegion.

This article first appeared at thenextweb.com.
