this post was submitted on 30 Jun 2025
24 points (96.2% liked)
I'm concerned about corporate enshittification of AI.
An analogue: Lemmy and Mastodon don't really concern me. Reddit and Facebook do.
There needs to be more literacy, a shift in attitude toward viewing it as a tool, and more local use. But the power-usage fear is overblown, and there's no Terminator scenario to worry about with the architectures we have atm.
Right. Plus, big things tend to end up differently from what we anticipated. Even if we arrive at Terminator-level AI doom some day far in the future, it'll be the one thing we didn't anticipate. It's been that way with most big disruptive changes in history. Or it's not doom, but something like the transition from horses to cars. People back then couldn't predict how that was going to turn out either. Main point: we don't know, we mainly speculate.
To me, our "AI Doom" scenario is already obvious: worsening the attention-optimization epidemic, spreading misinformation, consolidating wealth, and so on. It's no mystery. It's going to suck unless open source takes off.
In other words, it's destabilizing society because of corporate enshittification and a lack of literacy/regulation, not because of AI itself.