Yes, Your Smart Speaker Is Listening When It Shouldn't

Northeastern University researchers find that smart speakers often start listening and recording by mistake, but they quickly stop, too.

Do you ever worry that the microphone in your smart speaker is listening to the conversation in your home when it’s not supposed to?

Well, you’re right to be concerned, according to a study released today, but it’s not as grave an issue as some have imagined. 

Using dialogue from popular television shows, researchers at Northeastern University have discovered that smart speakers are often fooled into recording when they hear words other than the wake words created to summon Amazon’s Alexa, Apple’s Siri, Google Assistant, and Microsoft’s Cortana. (See our earlier coverage of smart speakers that listen when they shouldn’t.)
In fact, these errors can occur as often as once an hour. The device usually stopped recording within seconds, though. And when played the same dialogue again, it often didn’t repeat the mistake.

“This validates what a lot of us are seeing anecdotally—that these devices wake up all the time when they shouldn’t, which can potentially constitute a privacy risk,” says David Choffnes, Ph.D., an associate professor in the Khoury College of Computer Sciences at Northeastern University, who authored the study with colleague Daniel DuBois, Ph.D., and collaborators at Imperial College London.

Smart speakers are designed to listen for a wake word: “Alexa” (or one of three other alternatives for Amazon’s digital assistant), “Hey, Siri” (Apple), “Hey, Google,” or “Cortana” (Microsoft). When they hear it (or think they hear it), they begin listening and recording while awaiting further verbal commands from the user.

To replicate a variety of speech patterns, Choffnes’ team used television shows such as “The West Wing,” “Gilmore Girls,” “The Office,” and “The Big Bang Theory.” For a diverse range of voices, accents, and vocabularies, they also included shows such as “Narcos,” “The L Word,” “Friday Night Tykes,” and “Dear White People.”

In total, they played 134 hours of audio for the speakers. Video cameras noted when lights on the devices indicated they had started recording and for how long. The researchers then checked the closed captioning of each episode to determine which exact snippet of dialogue had triggered the device.

The researchers reported about one “false positive” per hour in the most error-prone combination: Google Assistant listening to the rapid-fire dialogue of “The West Wing.” On average, the speakers logged one erroneous activation every 5 hours during the trials.

When asked to respond to these findings, Amazon, Apple, and Google all stress their commitment to consumer privacy. “Our wake word detection and speech recognition get better every day—as customers use their devices, we optimize performance and improve accuracy,” an Amazon spokesperson said via email. Microsoft did not respond before publication.

“Digital assistants are an imperfect technology—it’s not surprising that they’re going to inadvertently capture data they’re not supposed to,” says Justin Brookman, director of privacy and technology policy at Consumer Reports. “When we bring these devices into our houses, this is the risk we take, that they’re going to be accidentally activated and record the things we say. If the activation is inadvertent, it’s probably not that helpful to the companies, but there’s no telling how they’re using it to better understand us.”

Signs of Progress

The researchers found that smart speakers do tend to catch their mistakes and quickly shut off the mic, usually within seconds.

“People are always asking ‘Are the devices constantly recording?’ and we have no evidence to support that,” says Choffnes. “When the devices do wake up, they don’t record for minutes or hours. It tends to be on the order of single digit or tens of seconds.”

The study was not designed to compare specific speaker models, according to Choffnes, because the error rate depended on how closely the dialogue approximated a given device’s wake word.

“There’s a huge amount of variability,” he says, but he adds that the newer generation of the Amazon Echo outperformed an older sibling that used the same wake words.

The researchers analyzed the language that caused the speakers to start recording. And, as you might expect, many of the false positives sounded a lot like the actual wake words.

  • The Google Home Mini mistook the phrase “okay to go” for “Hey, Google.”
  • The Apple HomePod heard “Hey, Missy” as “Hey, Siri.” 
  • The Harman Kardon Invoke misheard “quartet” as “Cortana.”

But in other instances, the researchers found no clear explanation for why the speaker was triggered.

“There were a whole bunch of cases where what was being spoken didn’t sound at all like the wake word,” Choffnes says. “So we need to get a better understanding of that.”

When the team tried to replicate the “false positives,” the devices were often not triggered on subsequent trials. And some of that change may actually be attributable to self-improvement by the device.

“We saw evidence that the Amazon devices were learning,” Choffnes says, while adding that other devices may have been learning as well, but not at a level revealed in the testing.

In one observation worthy of further review, researchers noted that the smart speakers registered a particularly high frequency of false activations during episodes of “Narcos” that include snippets of Spanish.

“This raises the concern about how well these things work in households where mixed languages are spoken,” says Choffnes. “And what are the privacy risks those populations face?”

To safely draw any conclusions, he adds, the speakers would have to be exposed to more multilingual content.

How to Protect Your Privacy

While smart-speaker manufacturers don’t give consumers fine-tuned control over their device’s recording functions, they do provide some settings for safeguarding your privacy.

The simplest way to control what your smart speaker hears is to mute it when you’re not using it. Of course, that also prevents the unit from responding to voice commands until you turn the function back on.

But most of the platforms give you the option to see what was recorded and delete it. They also provide controls that let you opt out of certain kinds of data collection. For detailed instructions on how to do that, learn how to boost the privacy of your smart speaker.
