Google Home Malware Is a Thing and It’s Scarily Secret

Could AI voice malware be the future?

Researchers have found a way to deliver malware to home AI systems such as Google Home, and it casts a shadow over the future of smart home living. Using special audio techniques, they have exposed a vulnerability that leaves the door open for Google Home malware to infect our homes.

The researchers from Zhejiang University are thankfully using their new knowledge of Google Home malware for good. In their research, they found that they could discreetly get Google Home and other home AI equipment to open web pages that hosted malicious software.

During their research, the team at Zhejiang University named their test threat DolphinAttack. They managed to get it to work not only on Google Home, but also on Siri, Cortana, and Amazon Alexa.

To help these companies protect their users, the researchers created a number of proof-of-concept attacks showcasing how an attacker could exploit vulnerabilities in voice-activated AI systems to silently perform certain tasks.

One of their proof-of-concept attacks showed how the silent audio technique could force Siri to make a FaceTime call. It could also switch a phone into Airplane mode remotely, or even manipulate the navigation system in a test Audi vehicle.

What makes this attack worse is that the hardware is very cheap to build. In fact, all you need is a small amplifier, an ultrasonic transducer, and a battery – the malicious equipment can be assembled for just US$3.

Alternatively, a remote attack could be carried out by embedding the voice commands in videos or audio files hosted on a website.

In their paper, the researchers wrote: “An adversary can upload an audio or video clip in which the voice commands are embedded in a website, e.g., YouTube. When the audio or video is played by the victims’ devices, the surrounding voice-controllable systems such as Google Home assistant, Alexa, and mobile phones may be triggered unconsciously.”

To make it even scarier, attacks can be carried out silently. Tools can shift voice commands to frequencies above 20 kHz – beyond the range of human hearing, but well within the range that voice-activated equipment can still pick up.

The system will respond to the inaudible commands, and in some cases, voice assistant owners may not even know that the command was carried out.
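For the technically curious, the core trick described in the DolphinAttack research is amplitude modulation: the audible command is shifted onto an ultrasonic carrier, and the non-linearity of the target device’s microphone demodulates it back into the audible band. Below is a minimal Python sketch of that idea – the 25 kHz carrier, 96 kHz sample rate, and the function itself are illustrative assumptions, not code from the research.

```python
# A minimal sketch of the signal-processing idea behind DolphinAttack:
# amplitude-modulating a voice command onto an ultrasonic carrier.
# The carrier frequency and sample rate below are illustrative
# assumptions, not values taken from the original paper.
import numpy as np

SAMPLE_RATE = 96_000   # Hz; must exceed twice the carrier frequency
CARRIER_FREQ = 25_000  # Hz; above the ~20 kHz limit of human hearing

def modulate_command(voice: np.ndarray) -> np.ndarray:
    """Shift an audible voice-command waveform onto an ultrasonic carrier.

    The non-linearity of the target microphone can demodulate the
    result back into the audible band, so the assistant "hears" the
    command while nearby humans hear nothing.
    """
    t = np.arange(len(voice)) / SAMPLE_RATE
    carrier = np.sin(2 * np.pi * CARRIER_FREQ * t)
    # Classic AM: the carrier plus voice-shaped sidebands around it.
    return (1.0 + voice) * carrier

# Example: modulate a placeholder 1-second "command" (here just a tone).
voice = 0.5 * np.sin(2 * np.pi * 440 * np.arange(SAMPLE_RATE) / SAMPLE_RATE)
ultrasonic = modulate_command(voice)
```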

Thankfully, for now, these attacks are only proofs of concept – none have been carried out in real life – but they expose a very severe vulnerability in the smart home concept.

Imagine a front door that locks and unlocks via voice activation. An attacker could place their equipment within hearing range of your home while you sleep and unlock your front door remotely.

Voice-controlled vehicles of the future could be shut down in the middle of the road, giving criminals an opportunity to steal your vehicle.

Your smartphone could be tracked via inaudible voice commands, or those commands could be used to install malware on it.

The same techniques could be used to install futuristic Google Home malware, which could use SMS tracking, GPS tracking, and other methods to steal your data or pinpoint your location so that criminals could target you and your belongings.

There are many scenarios in which this huge vulnerability in voice-activated AI could play out. It’s a scary reminder of how important digital security will be in the very near future.
