27 September 2023

Digital defence: How to beat hackers taking over our appliances


Isaac Ben-Israel* says hackers are turning our AI security systems against us, but they can be stopped.



With the use of AI growing in almost all areas of business and industry, we have a new problem to worry about — the “hijacking” of artificial intelligence.

Hackers are using the very techniques and systems that help us in order to compromise our data, our security, and our way of life.

We’ve already seen hackers try to pull this off, and while security teams have been able to successfully defend against these attacks, it’s just a matter of time before the hackers succeed.

Catching them is proving to be a challenge, because the smart techniques we use to make ourselves more efficient and productive are being co-opted by hackers, who use them to stymie our progress.

It seems that anything we can do, they can do — and sometimes they do it better.

Battling this problem and ensuring that advanced AI techniques and algorithms remain on the side of the good guys is going to be one of the biggest challenges for cybersecurity experts in the coming years.

To do that, organisations are going to have to become more proactive in protecting themselves.

Many organisations install advanced security systems to protect themselves, and a good number of these systems rely on AI and machine-learning techniques.

Having installed them, organisations often believe the problem has been taken care of.

However, that is the kind of attitude that almost guarantees they will be hacked.

However advanced the system they install, hackers are nearly always one step ahead.

Complacency, it’s been said many times, is the enemy, and in this case, it’s an enemy that can lead to cyber-tragedy.

Steps organisations can take include paying more attention to basic security, shoring up their AI-based security systems to better detect the tactics hackers use, and educating personnel on the dangers of phishing tactics and other hacking methods.

Hackers have learned to compromise AI

My colleagues and I have developed systems that use AI to improve the security of networks without violating individuals’ privacy.

Our systems can sense when an intruder tries to gain access to a server or a network.

By recognising the patterns of an attack, our AI systems, built on machine learning and advanced analytics, can alert administrators that they are under attack, enabling them to shut down the culprits before they go too far.

Here’s an example of a tactic hackers could use.

Machine learning gets “smart” by observing patterns in data and drawing conclusions about what they mean, whether on an individual computer or across a large network.

So if a specific action in a computer’s processors takes place when certain processes are running, and that action is repeated across the network and/or on the specific computer, the system learns that the action means a cyber-attack has occurred and that appropriate action needs to be taken.
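
To make the idea concrete, here is a minimal sketch of pattern-based anomaly detection in Python, using scikit-learn’s IsolationForest. The telemetry features, numbers, and thresholds are invented for illustration; this is not code from our systems or any real product.

```python
# Hypothetical sketch: flag unusual machine telemetry with an anomaly detector.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Simulated "normal" telemetry: [cpu_usage %, bytes_sent/s, file_reads/s]
normal = rng.normal(loc=[30, 5_000, 20], scale=[5, 1_000, 5], size=(1_000, 3))

# Learn what normal activity looks like.
detector = IsolationForest(contamination=0.01, random_state=0)
detector.fit(normal)

# A burst of heavy file reads and outbound traffic, as in data exfiltration.
suspicious = np.array([[85, 90_000, 400]])

if detector.predict(suspicious)[0] == -1:
    print("ALERT: telemetry deviates from the learned baseline")
```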

But here is where it gets tricky.

AI-savvy malware could inject false data that the security system would read — the objective being to disrupt the patterns the machine learning algorithms use to make their decisions.

Thus, phony data could be inserted to make it seem as if a process that is copying personal information is just part of the system’s routine and can safely be ignored.

Instead of trying to outfox intelligent machine-learning security systems, hackers simply “make friends” with them, and help themselves to whatever they want on a server.
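
As a toy illustration of this kind of data poisoning, consider a simple classifier standing in for a real security model. All of the data, labels, and numbers below are invented:

```python
# Hypothetical sketch of training-data poisoning. Labels: 0 = benign, 1 = attack.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Clean training data: benign traffic is low-volume, attacks are high-volume.
benign = rng.normal(10, 2, size=(200, 1))
attacks = rng.normal(50, 2, size=(200, 1))
X = np.vstack([benign, attacks])
y = np.array([0] * 200 + [1] * 200)

exfil = np.array([[48.0]])   # a data-exfiltration burst

clean_model = LogisticRegression().fit(X, y)
print("clean model:", clean_model.predict(exfil))     # typically [1]: flagged

# The attacker slips hundreds of attack-like samples labelled "benign" into
# the data the system retrains on.
poison = rng.normal(50, 2, size=(400, 1))
X_poisoned = np.vstack([X, poison])
y_poisoned = np.concatenate([y, np.zeros(400, dtype=int)])

bad_model = LogisticRegression().fit(X_poisoned, y_poisoned)
print("poisoned model:", bad_model.predict(exfil))    # typically [0]: ignored
```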

There are all sorts of other ways hackers could fool AI-based security systems.

It’s already been shown, for example, that an AI-based image recognition system could be fooled by changing just a few pixels in an image.
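
To see why a handful of pixels can matter so much, here is a toy sketch against an invented linear classifier; real attacks on deep networks exploit the same principle, nudging an input along the directions the model is most sensitive to:

```python
# Toy illustration of an adversarial perturbation. The "classifier" here is a
# deliberately tiny invented stand-in, not a real image-recognition system.
import numpy as np

rng = np.random.default_rng(1)
w = rng.normal(size=64)           # weights of a toy classifier on an 8x8 "image"

def label(x):
    return "cat" if x @ w > 0 else "dog"

x = rng.normal(size=64)           # the original "image"
score = x @ w

# Find the 3 pixels the classifier is most sensitive to, and nudge them just
# enough to push the score past zero in the other direction.
top3 = np.argsort(np.abs(w))[-3:]
eps = (abs(score) + 0.5) / np.abs(w[top3]).sum()
x_adv = x.copy()
x_adv[top3] -= np.sign(score) * np.sign(w[top3]) * eps

print(label(x), "->", label(x_adv))   # the label flips: 3 of 64 pixels changed
```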

Another tactic involves what I call “bobbing and weaving,” where hackers insert signals and processes that have no effect on the IT system at all — except to train the AI system to see these as normal.

Once it does, hackers can use those routines to carry out an attack that the security system will miss.
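
Here is a toy sketch of how that conditioning might play out against a naive detector that keeps refreshing its baseline from recent traffic; all the numbers are invented:

```python
# Hypothetical sketch of "bobbing and weaving": a detector that retrains its
# baseline on recent traffic can be slowly stretched until an attack fits in.
import numpy as np

# 500 readings of outbound traffic (MB/hour) under normal conditions.
history = list(np.random.default_rng(2).normal(100, 10, size=500))

def is_anomaly(value, window):
    mean, std = np.mean(window), np.std(window)
    return abs(value - mean) > 3 * std

attack = 400.0   # the exfiltration burst the attacker wants to hide

print("before conditioning:", is_anomaly(attack, history))        # True

# The attacker slowly ramps up harmless decoy traffic, always staying just
# inside the detector's current tolerance, then holds it at the target level.
# Every reading is accepted and absorbed into the drifting baseline.
for _ in range(3000):
    window = history[-500:]
    history.append(min(np.mean(window) + 2.9 * np.std(window), attack))

print("after conditioning:", is_anomaly(attack, history[-500:]))  # now False
```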

Yet another way hackers could compromise an AI-based cybersecurity system is by changing or replacing log files — or even just changing their timestamps or other metadata, to further confuse the machine-learning algorithms.
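
One standard countermeasure is to make log files tamper-evident, so that any change to an entry or its timestamp can be detected. A minimal sketch using an HMAC chain follows; the key handling, field names, and events are simplified for illustration:

```python
# Hypothetical sketch of tamper-evident logging: each entry's MAC covers the
# previous MAC, so editing any line or timestamp breaks the whole chain.
import hmac, hashlib, json

KEY = b"demo-secret-key"   # illustration only; in practice keep the key off-host

def chained_mac(prev: bytes, entry: dict) -> bytes:
    payload = prev + json.dumps(entry, sort_keys=True).encode()
    return hmac.new(KEY, payload, hashlib.sha256).digest()

log, prev = [], b"\x00" * 32
for entry in ({"ts": "2023-09-27T10:00:00Z", "event": "login", "user": "alice"},
              {"ts": "2023-09-27T10:05:00Z", "event": "file_copy", "user": "alice"}):
    prev = chained_mac(prev, entry)
    log.append((entry, prev))

def verify(log) -> bool:
    prev = b"\x00" * 32
    for entry, tag in log:
        prev = chained_mac(prev, entry)
        if not hmac.compare_digest(prev, tag):
            return False
    return True

print(verify(log))                        # True
log[1][0]["ts"] = "2023-09-27T02:00:00Z"  # attacker rewrites a timestamp
print(verify(log))                        # False: tampering is detected
```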

Ways organisations can protect themselves

Thus, the great strength of AI has the potential to be its downfall.

With proper effort, we can overcome this weakness and stop hackers in their tracks.

Here are some specific ideas:

Conscientiousness:

The first thing organisations need to do is to increase their levels of engagement with the security process.

Organisations that install advanced AI security systems tend to become complacent about cybersecurity, believing that the system will protect them.

As we’ve seen, that’s not the case.

Keeping a human eye on the AI that is ostensibly protecting organisations is the first step in ensuring that they are getting their money’s worth out of their cybersecurity systems.

Hardening the AI:

One tactic hackers use to attack is inundating an AI system with low-quality data to confuse it.

To protect against this, security systems need to account for the possibility of encountering low-quality data.

Stricter controls on how incoming data is evaluated could deprive hackers of a weapon they are currently using with success.
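
As a sketch of what such a control might look like, here is a simple validation gate that rejects malformed or out-of-range telemetry before it can reach the learning pipeline; the field names and bounds are invented:

```python
def valid_telemetry(record: dict) -> bool:
    """Reject records that are malformed or outside plausible ranges."""
    required = {"host", "cpu_pct", "bytes_out"}
    if not required <= record.keys():
        return False
    if not isinstance(record["host"], str) or not record["host"]:
        return False
    if not isinstance(record["cpu_pct"], (int, float)) or not 0 <= record["cpu_pct"] <= 100:
        return False
    if not isinstance(record["bytes_out"], int) or not 0 <= record["bytes_out"] < 10**12:
        return False
    return True

incoming = [
    {"host": "web-1", "cpu_pct": 42.0, "bytes_out": 120_000},
    {"host": "", "cpu_pct": 999, "bytes_out": -5},   # junk: dropped at the gate
]
clean = [r for r in incoming if valid_telemetry(r)]
print(len(clean), "of", len(incoming), "records accepted")   # 1 of 2
```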

More attention to basic security:

Hackers most often infiltrate organisations using tried-and-true tactics: advanced persistent threats (APTs) or run-of-the-mill malware.

By shoring up their defences against these basic tactics, organisations can stop many attacks before they start, keeping malware and exploits off their networks altogether.

Educating employees on the dangers of responding to phishing pitches (including rewarding those who avoid them and/or penalising those who don’t), along with stronger basic defences such as sandboxes and anti-malware systems and more intelligent AI defence systems, can go a long way towards protecting organisations.

AI has the potential to make our digital future safer; with a little help from us, it will be able to resist manipulation by hackers and do its job properly.

* Professor Isaac Ben-Israel is Director of the Blavatnik Interdisciplinary Cyber Research Center in Israel and Chairman of Cyberweek.

This article first appeared at thenextweb.com.
