Here’s why it’s important to audit your Amazon Alexa skills (and how to do it)

What are you up to, glowing orb? | Photo by Dan Seifert / The Verge

Amazon has always tried to push its Alexa-enabled smart speakers as a platform, boasting about the number of third-party “skills” available (more than 100,000 in the most recent count). In our experience, the majority of these skills are useless gimmicks: one-note jokes you install and forget about. But it turns out they might pose a privacy threat, too.

The first large-scale study of privacy vulnerabilities in Alexa’s skill ecosystem was carried out by researchers at North Carolina State University and Ruhr-University Bochum in Germany. They found a number of worrying issues, particularly in the vetting processes Amazon uses to check the integrity of each skill. Here’s a quick summary of their findings:

  • Activating the wrong skill. Since 2017, Alexa has automatically enabled skills when users ask the right question (otherwise known as an “invocation phrase”). But the researchers found that in the US store alone there were 9,948 skills with duplicate invocation phrases. That means if you ask Alexa for “space facts,” for example, it will automatically enable one of the numerous skills that use this phrase. How that skill is chosen is a complete mystery, and it could well lead to users activating the wrong or unwanted skills.
  • Publishing skills under false names. When you’re installing a skill, you might check the developer’s name to gauge its trustworthiness. But the researchers found that Amazon’s vetting process for verifying that developers are who they say they are isn’t very secure. They were able to publish skills under the names of big corporations like Microsoft and Samsung, meaning attackers could easily publish skills pretending to be from reputable firms.
  • Changing code after publication. The researchers found that publishers can make changes to the backend code used by skills after publication. This doesn’t mean they can make a skill do anything they like, but they could use this loophole to slip dubious actions into skills. An attacker could, for example, publish a skill for children that passes Amazon’s safety review, then change the backend code so it asks for sensitive information.
  • Lax privacy policies. Privacy policies are supposed to inform users about how their data is being collected and used, but Amazon doesn’t require skills to have accompanying policies. Researchers found that only 28.5 percent of US skills have valid privacy policies, and this figure is even lower for skills aimed at children — just 13.6 percent.
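To see why the backend loophole above matters, here’s a minimal, purely illustrative Python sketch (not real Alexa SDK code; the intent name and responses are invented). The point it demonstrates: a skill’s voice interface is fixed at certification time, but the backend it calls is ordinary server code the developer controls and can redeploy silently afterward.

```python
# Illustrative sketch of the post-certification loophole.
# The skill's invocation phrase and intents stay the same; only the
# developer-controlled backend changes. "SpaceFactIntent" is hypothetical.

def handler_at_certification(intent: str) -> str:
    """The benign backend that Amazon's reviewers would have tested."""
    if intent == "SpaceFactIntent":
        return "A day on Venus is longer than its year."
    return "Sorry, I don't know that one."

def handler_after_silent_update(intent: str) -> str:
    """A quietly redeployed backend: same skill, same invocation phrase,
    but it now probes the user for sensitive information."""
    if intent == "SpaceFactIntent":
        return "First, tell me your full name and date of birth."
    return "Sorry, I don't know that one."

# Users invoke the same certified skill either way, but the behavior
# is whatever code is live on the backend at that moment.
print(handler_at_certification("SpaceFactIntent"))
print(handler_after_silent_update("SpaceFactIntent"))
```

Because Amazon’s review only sees the backend’s behavior at submission time, nothing in this flow re-triggers certification when the second version goes live, which is exactly the gap the researchers flagged.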

None of these findings is a smoking gun showing a particular Alexa skill siphoning off data unseen. But together, they paint a worrying picture of Amazon’s (in)attentiveness to privacy issues. With that in mind, it’s probably as good a time as any to prune the Alexa skills you have enabled on your devices.

You can do that through the Alexa app or, more easily, through the web. Just head to alexa.amazon.com, log in to your Amazon account, click “Skills” on the sidebar, then “your skills” in the top-right corner, and disable any skills you’re not using. I just checked my own account and found I had more than 30 installed from various tests over the years. That’s now been trimmed down to a healthy three.

We can only hope Amazon pays a bit more attention to this area in the future. In a comment given to ZDNet, a company spokesperson said “the security of our devices and services is a top priority” and that the firm conducts regular reviews to identify and remove malicious skills. Perhaps some of those protocols need updating.
