The recent Google Home drama and the related news coverage show how easily and effectively you can put a smokescreen in front of people's concerns by making them feel smart and informed. “Of course humans are listening to what you say! You see, that's how the AI is being trained! How did you think speech recognition is programmed, you privacy freak? It needs human input!” – they say. Damn, those ML experts sure are smarter than those hysterical journalists!


“Besides, you agreed to all this – didn't you all opt in to the «help improve Google Assistant» thingy? It was a choice you made!” – they say to people who “agree” and “opt in” to dozens of things on every website they visit. But that was never the point here. The original article says: “not only voice commands (...) are included. (...) it does indeed happen that the device records things ‘that are clearly not intended for the device’.”


The Dutch employee “must monitor those conversations to indicate in the system that they are not voice commands” – THAT is the issue here: it's not just your “ok google” voice commands, it *could* be any random thing that you say – the very thing they said you're safe from. You didn't hear about that bit? Not too surprising, given how the article in The Independent maneuvers around it, focusing instead on Google being pissed about having a “leaker.”

I'm not sure what the moral here is: “check your sources”? Or maybe next to “if it's free, you're the product” we should put “if it makes you feel clever, you're being played”? Am I playing you now? You decide. I'm just a Google hater.

