Gadgets were more fun before everything felt like a data miner. Twenty years ago I used to buy all sorts of tech junk: PDAs, GPS units, MP3 players, fitness trackers, etc. But the primary purpose of everything now feels like sending all my data to ad companies and locking me into a subscription. If something has a camera, microphone, GPS, or even just Wi-Fi, my impulse now is that it's creepy, something to be wary of rather than useful.
Show companies that your value of privacy trumps the value they get from the data. For example, how much would you pay for a version of YouTube where they don't log your views at all? Would you pay $30/month for it? I think today companies believe you don't really value it. You just give it lip service. I suspect if we show (financially) how much we value it, we'll see them respond.
For one, it's impossible to do what you suggest when no one offers such a thing. You can throw money at your screen, but it doesn't change the fact that there is no amount of money you can pay Google not to track you.
Second, the data they extract from you is far more valuable than what anyone would pay. Besides that, there is absolutely nothing preventing them from taking your money and tracking you anyway. Even if they don't at first, a corporation cannot resist the 'free' revenue gained by tracking you and selling that data.
This is simply not something that the market can correct. It must be a legal mechanism enforced by the government.
I remember that era. Some of the coolest non-spy devices I owned were:
1. Sony Clie PDA.
2. Creative Nomad MP3 Player.
3. Most of my game consoles and Game Boys. I especially loved the Super Scope; I couldn't believe how cool that was.
4. Pretty much all my Linux devices, to this day.
Having a w*k with my Apple Watch for the first time was unnerving.
How does this work in a two-party consent state, like California? It seems illegal.
Stupid question, but is this wise from a business standpoint? Let's say child pornography or some national-security material slips into the training data.
This has already happened with ChatGPT.
This stuff is already part of the training data. LAION, for example (which almost everyone uses as training data), allegedly has a lot of CSAM in it; I never checked.
I mean... yes? If you ask a cloud-hosted AI to identify an image, that image goes to the AI. Presumably, if I am an active user of a service that identifies objects in an image, I want that service to identify objects correctly, which means that if it incorrectly IDs something I upload today, I want it to be better at its job the next time I try.
In other news, voice transcription from Google trains itself on your voice. Oh and Google search trains itself on your searches. (Snark: assuming they still care about search quality!)
And a user could very reasonably expect all of your examples to be read operations and be surprised and unhappy to find that their data was logged.
They’ll want to get that publicly stated early and forgotten.
All these companies are data ingestion machines. This is not news.
This one’s a bit worse than Meta’s usual sins: enabling political ads to manipulate the user is one thing, but enabling naughty people to generate naughty pictures of innocent bystanders, because they happened to be in the field of view of an idiot talking to their glasses, is a whole different level. I’d be surprised if this is legal.
Hardly worse than their usual sins considering the case of Meta and Myanmar.
Of course it does. Any product you use that has a camera is constantly feeding images to Meta's AI to train on. That's why I stopped using Oculus devices a long time ago as well.
Where is it stated that they send, or can send, Meta Quest pass-through images to Meta? Persistent storage of room wireframe mapping for positioning I get, and I don't find it a big issue, though I thought that was on-device.