Smart loudspeakers, which enable comfortable voice control at home, are equipped with microphones and offer listening functions for the recognition and implementation of user commands. These pioneering systems are becoming increasingly popular and widespread. While critics see the loudspeakers as possible monitoring devices, the futuristic-looking systems bring a trace of science fiction into our homes.
External extensions as a vulnerability to eavesdrop on users
Security researchers from Berlin examined the best-known voice control systems, Google Home (“Okay Google!”) and Amazon Echo (“Alexa!”) for possible vulnerabilities. Google and Amazon repeatedly assure us that they will only process voice recordings after they have been deliberately activated. The researchers of the “Security Labs Research” succeeded, however, in using both systems for unauthorized interception.
The Achilles heel of the assistance systems are applications from external providers. The respective ecosystems were only subsequently opened via interfaces for external app developers in order to be able to extend them with new functions. In several stages, the researchers were able to extend such an app and turn an assistant that was actually only voice-activated into a permanent listening device. Even a targeted tapping of freely definable keywords was possible.
Listening as stage one, voice-phishing as the highlight
The monitoring of users was not enough. The application at both manufacturers could be changed in such a way that the loudspeakers actively asked for passwords. A security update was given as an exemplary reason for this – how many users would have queried here and not revealed the password?
Sensitisation of users and developers required
After reporting the found vulnerabilities to the manufacturers, the affected apps were quickly removed, but whether the vulnerabilities could be permanently closed is still open. The fear that voice control systems could be misused to spy on sensitive information remains a worst-case scenario. A critical assessment of the benefits and possible risks is still recommended.
The results of the security researchers show that additional security mechanisms are useful to make misuse more difficult. Meanwhile, it might be useful to place smart speakers only in deliberately selected places and deactivate them when they are not used.