If you had read the article, you'd know. Thanks for making a comment even though you didn't.
The article makes a presumption that the active listening is actually sending voice data as audio, then tries to splurdge that "ackshually, it's other data".
Then it tries to splurdge that they would need to download all "wanted" words as keywords, and that it wouldn't be feasible.
As if you wouldn't only need a few words of intent, "I would like to (insert 10 s of transcription)", and just hit send.
The whole article smells of whitewashing, and the question is directed at other people who maybe followed the story more closely and actually have an idea of what exactly "active listening" is. Maybe someone has reverse-engineered it.
Thanks for your useless comment
Ah yes, the "splurdge" part of the article, a word everyone knows as a very technical term used to fill in for an inability to articulate an actual line of logic. Instead of logic, just explain that "I don't like the conclusion of the article, so it must be wrong somehow, even though I can't explain why."
It's also important for people to chime in that an article must be wrong because it just "feels wrong". Of course don't actually provide any reasoning for it, because why would that be necessary?
Don't forget how useful it is to ask a question while completely ignoring that the article addresses it, and without even bringing that up in the question. When someone points that out, the best strategy is to lash out at them, because they were such a big meanie for pointing out the obvious problem of not reading the article.
Lemmy communities are all about feelings, not information!
Oh, also I took a screenshot of your comment because I knew you were going to edit it.
There is a "convenient" presumption in the article, and in the comment I'm replying to:
that audio is uploaded to big tech after someone speaks near their phone.
Why TF would they need to do that, if they can cram a "Hey Google" audio-to-logic routine into mere kilobytes within their neuro-DSP?
The Active Listening system mentioned in the article would need to:
- have the onboard AI listen for voices,
- differentiate significant items (for whatever values of "significance" they want to manipulate),
- upload... what, a YAML file of spoken keywords, and maybe an indication of whether or not it was the owner who spoke them?
Something like that; a rough sketch of what I mean is below.
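To be concrete, here's a toy sketch of that pipeline. None of this is reverse-engineered from a real phone; the keyword list, function names, and payload format are all made up for illustration. The point is just how little data would actually have to leave the device:

```python
# Purely illustrative sketch of the pipeline described above.
# Nothing here is real Google/Apple code; every name is hypothetical.
import json

# Hypothetical set of "significant" terms, standing in for whatever
# the on-device model (the neuro-DSP part) would be tuned to spot.
AD_KEYWORDS = {"vacation", "mattress", "insurance", "pizza"}

def spot_keywords(transcribed_words):
    """Reduce an on-device transcription to the 'significant' terms."""
    return [w for w in transcribed_words if w.lower() in AD_KEYWORDS]

def build_payload(keywords, spoken_by_owner):
    """Build the tiny payload that would leave the device:
    no audio, just matched keywords and a speaker flag."""
    return json.dumps({
        "keywords": keywords,
        "owner_voice": spoken_by_owner,
    })

# Ten seconds of speech reduced to a few dozen bytes of "intent".
words = "I really need a new mattress before our vacation".split()
payload = build_payload(spot_keywords(words), spoken_by_owner=True)
print(payload)  # {"keywords": ["mattress", "vacation"], "owner_voice": true}
```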
The "we aren't seeing audio being uploaded" is a fucking red-herring, & any competent geek nowadays, who understands that the neuro-DSP chips in our phones can accomplish stupidly capable term-recognition on near-zero code, when in active-sleep ( or whatever that energy-state is called, screen off, but things still running in the background ), and that's all they need.
You know how in Google AdWords you pay for specific keywords?
Guess what: the same system could easily be implemented within that subsystem which says "Sorry, the microphone is turned off" when you say "Hey Google" after turning off the mic.
It's ALWAYS listening, it uses tiny power, and when it matches, it activates other subsystems.
Selling the ability to show ads on phones/tablets whose owner recently spoke a keyword would be idiot-simple for Google or Apple to implement, and profitable as hell to boot.
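Again, purely a made-up illustration rather than anyone's actual ad stack: a toy version of matching the uploaded keywords against advertiser bids, AdWords-style. The advertisers and bid table here are invented:

```python
# Hypothetical server-side half: match keywords a device reported hearing
# against advertiser keyword bids, AdWords-style. All data is invented.

# keyword -> list of (advertiser, bid in cents) who "bought" that keyword
KEYWORD_BIDS = {
    "mattress": [("SleepCo", 120), ("FoamWorld", 95)],
    "vacation": [("TravelNow", 80)],
}

def pick_ads(uploaded_keywords):
    """Return the highest-bidding advertiser for each keyword
    the device claims was spoken near it recently."""
    ads = []
    for kw in uploaded_keywords:
        bids = KEYWORD_BIDS.get(kw)
        if bids:
            advertiser, _ = max(bids, key=lambda b: b[1])
            ads.append((kw, advertiser))
    return ads

print(pick_ads(["mattress", "vacation"]))
# [('mattress', 'SleepCo'), ('vacation', 'TravelNow')]
```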
IF the "Hey Google" audio prompt can work in a specific energy-configuration of a device, THEN this could too, as it's the same subsystem.
https://github.com/KoljaB/RealtimeSTT
as an example of something related to what I'm talking about, but not identical.
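If I'm remembering that repo's README correctly, a minimal loop with it looks roughly like the sketch below; the exact AudioToTextRecorder behaviour is an assumption on my part, and the keyword filter is my own addition, not part of the library:

```python
# Rough usage sketch based on my reading of the RealtimeSTT README;
# the AudioToTextRecorder interface here is an assumption, not verified.
from RealtimeSTT import AudioToTextRecorder

# Hypothetical keyword list, same idea as above.
AD_KEYWORDS = {"vacation", "mattress", "insurance", "pizza"}

if __name__ == "__main__":
    recorder = AudioToTextRecorder()  # listens on the default microphone
    while True:
        text = recorder.text()  # blocks until a spoken phrase is transcribed
        heard = [w.strip(".,!?") for w in text.lower().split()]
        matched = [w for w in heard if w in AD_KEYWORDS]
        if matched:
            print("would upload:", matched)  # keywords only, never the audio
```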