25 September 2023

Sound alarm: How ‘voice squatting’ can attack home assistants

Start the conversation

Catalin Cimpanu* says researchers have discovered a new method of attacking smart assistants like Amazon Alexa and Google Home.


A team of Chinese and US researchers has discovered a new method of attacking smart assistants like Amazon Alexa and Google Home, which they have named “voice squatting.”

The academics described their technique in a recently published research paper, together with another attack method named “voice masquerading”.

The idea is to trick the user into opening a malicious app by using voice triggers similar to the ones of authentic apps, and using the malicious apps to either phish users for sensitive data or eavesdrop on their surroundings.

The voice squatting attack

The first of these attacks is called “voice squatting” and relies on similarities between the voice commands that trigger specific actions.

Academics discovered that they could register voice assistant apps (called “skills” by Amazon and “actions” by Google) that trigger on very similar phrases.

For example, an attacker can register an app that triggers on the phrase of “open capital won,” which is phonetically similar to “open capital one,” a command for opening the Capital One home banking app for voice assistants.

This type of attack will not trigger every time but will most likely work for non-native English speakers who have an accent, or in noisy environments where the command might get misinterpreted.

Similarly, an attacker could also register a malicious app that triggers for “open capital one please,” or other variations where the attacker adds words to the trigger, words that are used in common speaking expressions.

The voice masquerading attack

The voice masquerading attack is not new and has already been detailed in different research in April, during which Checkmarx researchers turned Amazon Alexa and Google Home assistants into secret eavesdropping devices.

The entire idea of a voice masquerading attack is to prolong the interaction time of a running app but without telling the user.

The device owner believes the previous (malicious) app has stopped working, but the app is still listening to incoming commands.

When the user tries to interact with another (legitimate) app, the malicious one replies with its own, faked interaction.

Academics say voice masquerading attacks are ideal for phishing users and have recorded two demos to prove their point.

More demos are available on a dedicated website, while more details about the research are available in a paper entitled Understanding and Mitigating the Security Risks of Voice-Controlled Third-Party Skills on Amazon Alexa and Google Home.

Researchers said they’ve contacted both Amazon and Google with the findings of their research, and both companies are investigating the issues.

This is not the first security issue that affects smart home assistants.

Last year, another team of academics presented the DolphinAttack, an attack that relies on disguising smart assistant commands inside ultrasound frequencies, inaudible to human ears.

* Catalin Cimpanu is the Security News Editor for Bleeping Computer. He tweets at @campuscodi.

This article first appeared at www.bleepingcomputer.com.

Start the conversation

Be among the first to get all the Public Sector and Defence news and views that matter.

Subscribe now and receive the latest news, delivered free to your inbox.

By submitting your email address you are agreeing to Region Group's terms and conditions and privacy policy.