Bernard Marr* says artificial intelligence is going to play an increasingly important part in our lives, but an ethical human hand should always be at the helm.
I’m hugely passionate about artificial intelligence (AI), and I’m proud to say that I help organisations use AI to do amazing things in the world.
However, we must make sure we use AI responsibly, so we can make the world a better place.
In this post, I’m going to give you some tips for making sure you apply AI ethically within your organisation.
Start with education and awareness about AI
Communicate clearly with people (externally and internally) about what AI can do and its challenges.
It is possible to use AI for the wrong reasons, so organisations need to figure out the right purposes for using AI and how to stay within pre-defined ethical boundaries.
Everyone across the organisation needs to understand what AI is, how it can be used, and what its ethical challenges are.
Be transparent
This is one of the biggest things I stress with every organisation I work with.
Every organisation needs to be open and honest (both internally and externally) about how it is using AI.
One of my clients, the Royal Bank of Scotland, wanted to use AI to improve some of the services it provides to its clients.
From the start of that initiative, the bank has been (and continues to be) clear with its customers about what data it collects, how that data is used, and what benefits customers get from it.
When I look at the recent Cambridge Analytica scandal, I feel like a big part of the problem was Facebook’s lack of transparency about how it was using AI and how it was collecting and using customers’ data.
A clear AI communication policy could have solved a lot of problems before they even happened.
Customers need to trust the organisations they work with — and that requires full transparency about how AI fits into the organisation’s overall strategy and how it affects customers.
Control for bias
As much as possible, organisations need to make sure the data they’re using is not biased.
For instance, studies of commercial facial-analysis systems have found that their training data included far more white faces than non-white faces, so the AIs trained on that data worked better on white faces than non-white ones.
Creating better data sets and better algorithms is not just an opportunity to use AI ethically; it's also a way to try to address some racial and gender biases.
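If you want a practical starting point, audit the demographic balance of your training data before you train anything on it. Here is a minimal sketch in Python; the dataset, the column name, and the imbalance threshold are all hypothetical placeholders rather than an established standard.

```python
import pandas as pd

def audit_group_balance(df: pd.DataFrame, group_col: str, max_ratio: float = 2.0) -> pd.DataFrame:
    """Count how often each demographic group appears in the data and flag
    groups that the largest group outnumbers by more than max_ratio."""
    counts = df[group_col].value_counts()
    report = pd.DataFrame({"count": counts, "share": counts / counts.sum()})
    report["under_represented"] = (counts.max() / counts) > max_ratio
    return report

# Toy example: a facial-image dataset skewed towards lighter skin tones.
# (The data and the column name are illustrative.)
faces = pd.DataFrame({"skin_tone": ["light"] * 800 + ["medium"] * 150 + ["dark"] * 50})
print(audit_group_balance(faces, "skin_tone"))
```

Flagging an imbalance like this doesn't fix it, but it tells you where to collect more data or re-weight before the bias gets baked into a model.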
Make it explainable
Can your artificial intelligence algorithms be explained?
When we use modern AI tools like deep learning, they can be ‘black boxes’ where humans don’t really understand the decision-making processes within their algorithms.
Organisations feed in data, the AIs learn from that data, and then they make a decision.
However, what if you are using deep learning algorithms to determine who should get healthcare treatment and who shouldn't, or who should be allowed to go on parole and who shouldn't?
These are major decisions with huge implications for individual lives.
It is increasingly important for organisations to understand exactly how their AI systems make decisions and to be able to explain them.
A lot of work has recently gone into the development of explainable AI.
We now have ways to better explain even the most complicated deep learning systems, so there's no excuse for a continued air of confusion or mystery around your algorithms.
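To make that concrete, here is a minimal sketch of one widely available, model-agnostic technique: permutation feature importance from scikit-learn. Dedicated explainability tools such as SHAP and LIME go further, down to explanations of individual predictions; the dataset and model below are purely illustrative.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Illustrative data and model; any fitted estimator works the same way.
data = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, random_state=0
)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffling one feature at a time measures how much held-out accuracy
# depends on it, giving a human-readable summary of what drives decisions.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
ranked = sorted(zip(data.feature_names, result.importances_mean), key=lambda p: -p[1])
for name, score in ranked[:5]:
    print(f"{name}: {score:.4f}")
```

An explanation like this won't satisfy every regulator or customer on its own, but it moves a model from 'the computer said so' towards something a human can inspect and challenge.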
Make it inclusive
At the moment, far too many of the people working on AI are white and male.
We need to make sure the people building the AI systems of the future are as diverse as our world.
There has been some progress in bringing in more women and people of colour, to make sure the AI being built truly represents our society as a whole.
However, that progress has to go much further.
Follow the rules
Of course, when it comes to the use of AI, we must adhere to regulation.
We are seeing increasing regulation of AI in Europe and in parts of the United States.
However, many areas remain unregulated and rely on self-regulation by organisations.
Companies like Google and Microsoft are focusing on using AI for good, and Google has its own self-defined AI principles.
When I work with organisations, we often put together an AI ethics council that acts as the company's North Star on AI ethics concerns.
Whenever an organisation identifies a use case for AI, the council evaluates it for ethical concerns.
The Organisation for Economic Co-operation and Development (OECD), founded in 1961 to stimulate economic progress, includes 37 member countries.
It created the OECD AI Principles, which are a great starting point for thinking about how your organisation can use AI in ways that benefit people and the planet.
According to these principles, AI should be designed in a way that respects the rule of law, human rights, democratic values, and diversity, and there should be enough transparency around AI systems that people can understand and challenge AI-based outcomes.
AI must function in a robust, secure, and safe way, with risks being continuously assessed and managed.
Organisations developing AI should be held accountable for the proper functioning of these systems in line with these principles.
The 17 Sustainable Development Goals of the United Nations can also be a great resource for you as you’re establishing your AI use cases.
If the way you’re using AI aligns with OECD principles and the UN Sustainable Development Goals, you’re probably well on your way to ensuring that you’re using AI ethically.
*Bernard Marr is a bestselling business author and the founder of Bernard Marr & Co. He can be contacted at bernardmarr.com.
This article first appeared on LinkedIn.