Michael Berthold* says people can’t eliminate bias from machine learning, but they can pick their bias.
Bias is a major topic of concern in mainstream society, which has embraced the concept that certain characteristics — race, gender, age, or zip code, for example — should not matter when making decisions about things such as credit or insurance.
But while an absence of bias makes sense on a human level, in the world of machine learning, it’s a bit different.
In machine learning theory, even if you could mathematically prove your model has no bias at all, its value would diminish, because without bias the model cannot generalise beyond the data it has seen.
What this tells us is that, as unfortunate as it may sound, without any bias built into the model, you cannot learn.
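One way to picture this, as a minimal illustrative sketch rather than a formal proof: a "bias-free" learner that merely memorises its training pairs fits them perfectly but can say nothing about unseen inputs, whereas a learner with a built-in assumption (here, linearity — an inductive bias) can generalise. The data and the linear assumption below are invented for illustration.

```python
import random

random.seed(0)

# Hypothetical training data: underlying rule y = 2x + 1, plus small noise.
train = [(x, 2 * x + 1 + random.uniform(-0.1, 0.1)) for x in range(10)]

# "Bias-free" learner: a lookup table that memorises the training pairs.
# It fits the training data perfectly but has no opinion on unseen inputs.
table = dict(train)

def memoriser(x):
    return table.get(x)  # None for anything it has not seen before

# Biased learner: assumes the relationship is linear (an inductive bias)
# and fits slope and intercept by least squares.
n = len(train)
mean_x = sum(x for x, _ in train) / n
mean_y = sum(y for _, y in train) / n
slope = (sum((x - mean_x) * (y - mean_y) for x, y in train)
         / sum((x - mean_x) ** 2 for x, _ in train))
intercept = mean_y - slope * mean_x

def linear(x):
    return slope * x + intercept

# The memoriser cannot generalise at all; the linear model can.
print(memoriser(3.5))  # None -- no prediction for an unseen input
print(linear(3.5))     # close to the true value 2 * 3.5 + 1 = 8
```

The point of the sketch is not that lookup tables are bad, but that the linear model only generalises *because* of the assumption it carries.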
The oxymoron of discrimination-free discriminators
Modern businesses want to use machine learning and data mining to make decisions based on what their data tells them, but the very nature of that inquiry is discriminatory.
Yet, it is perhaps not discriminatory in the way that we typically define the word.
The purpose of data mining is to, as Merriam-Webster puts it, “distinguish by discerning or exposing differences: to recognise or identify as separate and distinct,” rather than “to make a difference in treatment or favour on a basis other than individual merit.”
It is a subtle but important distinction.
Society clearly passes judgments on people and treats them differently based on many different categories.
Well-intentioned organisations try to rectify or overcompensate for this by eliminating bias in machine learning models.
What they don’t realise is that doing so can make things worse.
Why is this? Once you start removing data categories, other correlated components, characteristics, or traits sneak in as proxies.
Suppose, for example, you uncover that income is biasing your model, but income also correlates with where someone comes from (wages vary by geography).
Removing income alone doesn’t help: origin will stand in for it, so you would have to remove origin as well.
It’s extremely hard to make sure that you have nothing discriminatory in the model.
If you take out where someone comes from, how much they earn, where they live, and maybe what their education is, there’s not much left to allow you to determine the difference between one person and another.
And still, there could be some remaining bias you haven’t thought about.
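This proxy effect is easy to demonstrate on synthetic data. In the sketch below (every name and number is invented for illustration), the historical label is driven by income, yet a trivial rule that never sees income still predicts that label well above chance, simply because region stands in for income:

```python
import random
from collections import Counter

random.seed(1)

# Hypothetical applicants: region correlates with income, and the
# historical "approved" label was driven purely by income.
def make_applicant():
    region = random.choice(["A", "B"])
    # Assumed correlation: region A earns more on average.
    income = random.gauss(60 if region == "A" else 40, 5)
    approved = income > 50
    return region, income, approved

data = [make_applicant() for _ in range(1000)]

# Remove income from the model entirely and "predict" from region alone,
# using a trivial majority-vote rule per region.
by_region = {}
for region, _, approved in data:
    by_region.setdefault(region, Counter())[approved] += 1
rule = {r: c.most_common(1)[0][0] for r, c in by_region.items()}

# Accuracy of the income-free model is well above chance: region acts
# as a proxy for income, so the bias is still in the data.
correct = sum(rule[r] == approved for r, _, approved in data)
print(correct / len(data))
```

Dropping the sensitive column changed nothing about the pattern the model learns; it only changed which column carries it.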
David Hand has described how the United Kingdom once mandated that car insurance policies couldn’t discriminate against young or old drivers, nor could they set different premiums by gender.
On the surface, this sounds admirably equal. The problem is that people within these groupings generally have different accident rates.
When age and gender are included in the data model, it shows young males have much higher accident rates, and the accidents are more serious; therefore, they should theoretically pay higher premiums.
When the gender and age categories are removed, however, policy rates go down for young men, enabling more of them to afford insurance.
In the UK model, this factor — more young men with insurance — ultimately drove up the number of overall accidents.
The changed model also introduced a new type of bias: Women were paying a disproportionate amount for insurance relative to their accident rate, because they were effectively subsidising the increased number of accidents caused by young males.
The example shows that you sometimes get undesired side effects by removing categories from the model.
The moment you take something out, you haven’t necessarily eliminated bias.
It’s still present in the data, only in a different way. When you get rid of a category, you start messing with the whole system.
We find a reverse of the above example in Germany.
There, health insurers are not allowed to charge differently based on gender, even though men and women clearly experience different conditions and risk factors throughout their lives.
For example, women generate significant costs to the health system around pregnancy and giving birth, but no one objects, because the outcome is viewed as positive (unlike the negative association with car accidents in the UK), so it is perceived as fair that those costs are distributed evenly.
The danger of omission
The omission of data is quite common, and it doesn’t just occur when you remove a category.
Suppose you’re trying to decide who is qualified for a loan.
Even the best models will have a certain margin of error because you’re not looking at all of the people that didn’t end up getting a loan.
Some people who wanted loans may never have come into the bank in the first place; others walked in but never made it to your desk, scared away by the environment or nervous that they would not be successful.
As such, your model may not contain the comprehensive set of data points it needs to make a decision.
Similarly, companies that rely heavily on machine learning models often fail to realise that their data comes overwhelmingly from “good” customers, leaving too few data points to recognise the “bad” ones. This can seriously skew the model.
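The imbalance problem can be sketched in a few lines (the figures below are purely illustrative): a degenerate model that only ever predicts “good” looks highly accurate on paper while identifying none of the risky customers.

```python
# Hypothetical portfolio: 98% "good" customers, only 2% "bad" ones.
labels = ["good"] * 980 + ["bad"] * 20

# A degenerate model that always predicts "good" -- roughly what a naive
# learner converges to when bad examples are this scarce.
predictions = ["good"] * len(labels)

accuracy = sum(p == y for p, y in zip(predictions, labels)) / len(labels)
bad_caught = sum(p == "bad" and y == "bad"
                 for p, y in zip(predictions, labels))

print(accuracy)    # 0.98 -- looks excellent on paper
print(bad_caught)  # 0 -- yet not a single risky customer is identified
```

High headline accuracy here says nothing about the one question the model was built to answer.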
You can see this kind of selection bias at work in academia, in the life sciences in particular, where the “publish or perish” mantra has long ruled.
Even so, how many journal articles do you remember seeing that document failed studies?
No one puts forth papers that say, “I tried this, and it really didn’t work.”
Not only does it take an incredible amount of time to prepare a study for publication, but the author also gains nothing from pushing out the results of a failed one.
If I did that, my university might look at my work and say, “Michael, 90 per cent of your papers have had poor results. What are you doing?”
That is why you only see positive or promising results in journals.
At a time when we’re trying to learn as much as we can about COVID-19 treatments and potential vaccines, the data from failures is really important, yet we are unlikely to learn much from it, because the system determines which data gets selected for sharing.
So what does this all mean?
What does all of this mean in the practical sense?
In a nutshell, data science is hard, machine learning is messy, and there is no such thing as completely eliminating bias or finding a perfect model.
There are many, many more facets and angles we could delve into as machine learning hits its mainstream stride, but the bottom line is that we’re foolish if we assume that data science is some sort of a be-all and end-all when it comes to making good decisions.
Does that mean machine learning has less value than we thought or were promised? No, that is not the case at all.
Rather, there simply needs to be more awareness of how bias functions — not just in society but also in the very different world of data science.
When we bring awareness to data science and model creation, we can make informed decisions about what to include or exclude, understanding that there will be certain consequences — and sometimes accepting that some consequences will be worth it.
*Michael Berthold is CEO and co-founder at KNIME, an open source data analytics company.
This article first appeared at venturebeat.com