this post was submitted on 04 Jun 2025

Fuck AI


"We did it, Patrick! We made a technological breakthrough!"

A place for all those who loathe AI to discuss things, post articles, and ridicule the AI hype. Proud supporter of working people. And proud booer of SXSW 2024.


Hey, just wanted to plug a grassroots advocacy nonprofit, PauseAI, that's lobbying to pause AI development and/or increase regulations on AI due to concerns around the environment, jobs, and safety. They recruit volunteers to spread awareness, contact Congress, etc. -- it's just one more way you can fight back against the AI industrial complex!

top 11 comments
[–] [email protected] 3 points 2 days ago (1 children)

Hey, just wanted to plug a grassroots advocacy nonprofit, PauseAI, that's lobbying to pause AI development and/or increase regulations on AI due to concerns around the environment, jobs, and safety. [emphasis added]

No, they're concerned about AI becoming sentient, taking over the world, and killing us all. This, in turn, makes them little different from the people pushing for unlimited AI development; the only difference between the two groups is that the latter believes they'll be able to control the superintelligence.

If you look at their sources, they most prominently feature surveys of people who overestimate what we currently call AI. Other surveys are flat-out misrepresented. The State of AI Engineering survey, cited for a 25% chance that we'll reach AGI in 2025, admits that for P(doom) it defined neither 'doom' nor the time frame of said doom. So, basically, if we die out because we all fap to AI images of titties instead of getting laid, that counts as AI-induced doom. Also, on said survey, 10% answered a 0% chance, with 0% being one of the only two precise options offered (the other was 100%); most other options covered ranges of 25 percentage points each.

Basically, those guys are useful idiots for the AI industry, pushing a narrative not too dissimilar from the one pushed by the AI boosters. Don't support them.

[–] [email protected] 2 points 1 day ago* (last edited 1 day ago) (1 children)

So they're a big-tent movement, meaning there's a variety of people all with the same goal of regulating AI, although some of the messaging is geared toward the belief that a large catastrophe is possible.

On the supposed alignment between AI doomers and accelerationists: their goals and messaging are exactly opposite! I'm not sure how believing in AI extinction helps the AI industry -- in what other industry would you claim your product is going to kill everyone as a marketing strategy? I don't see fossil fuel companies doing this.

In general, I think the people who want to regulate AI have a lot of common goals and ideas. Purity testing over which AI risk is the worst helps nobody. For instance, one law I think a lot of people could get behind, regardless of whether you believe in Terminator or not, is liability for AI companies, where they are directly fined for harms their models cause. This could encompass environmental harm, job loss, etc.

[–] [email protected] 1 points 1 day ago

To clarify something, I don't believe that current AI chatbots are sentient in any shape or form, and as they are now, they never will be. There's at least one piece missing before we have sentient AI, and until we have it, making the models larger won't make them sentient. LLM chatbots take the text and calculate how likely each word is to follow it; then, based on these probabilities, a result is picked at random. That is the reason for the hallucinations that can be observed, and it's also the reason the hallucinations will never go away.
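That sampling step can be sketched in a few lines. This is a minimal illustration, not any vendor's actual implementation; the token scores are made up for the example:

```python
import math
import random

def softmax(logits):
    # Convert raw scores into a probability distribution over tokens.
    m = max(logits.values())  # subtract the max for numerical stability
    exps = {tok: math.exp(v - m) for tok, v in logits.items()}
    total = sum(exps.values())
    return {tok: e / total for tok, e in exps.items()}

# Hypothetical next-token scores after the prompt "The sky is".
logits = {"blue": 5.0, "clear": 3.5, "falling": 1.0}
probs = softmax(logits)

# The next token is drawn at random, weighted by probability --
# so low-probability continuations still get picked occasionally,
# which is where the randomness in the output comes from.
token = random.choices(list(probs), weights=probs.values())[0]
```

The point of the sketch: "falling" is unlikely but never impossible, and nothing in the procedure checks the chosen token against reality -- it only checks it against the probabilities.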

The AI industry lives on speculative hype; all the big players are losing money on it. Hence, people saying that AI can become a god and kill us all help further that hype. After all, if it can become a god, then all we need to do is tame said god. Of course, the truth is that it currently can't become a god, and maybe the singularity is impossible. As long as no government takes the AI doomers seriously, they provide free advertisement.

Hence AI should be opposed on the basis that it's unreliable and wasteful, not that it's an existential threat. Claiming that current AI is an existential threat fosters hype, which increases investment, which in turn results in more environmental damage from wasteful energy usage.

[–] [email protected] 2 points 2 days ago* (last edited 2 days ago) (1 children)

I'm sorry for being negative, and I don't want to drag it down since I think it's a commendable effort... But I don't think this will fly. At all.

Everyone in power wants more AI. It's the biggest bubble as of now. Companies are valued at billions and billions based on the prospect of AI doing things in the future. It's making them rich. Countries all around the globe are committed to it. Trump and Musk use it to make up the tariffs and rewrite legacy code, and they use it to control foreigners; Trump wants to invest a huge pile of money into project Stargate. The Chinese have AI on their agenda; it's been an integral part of the CCP's plans for more than a decade now(?). I hear Frau von der Leyen advertise massively for AI investment in the EU... So who is left to listen to this?

I think the war is mostly over, and we can't turn back and start questioning the entire thing any more. I believe what we need to do is shift focus immediately and think about what we can do that might have some impact. We need regulation. There need to be some ethics mandated. It needs to be more transparent, maybe even open-source, so it's not just a plaything for rich companies and governments to oppress people. Someone needs to pay for the effects it has on society and the economy. And we need to make sure it doesn't destroy the internet and society, and turn everyone into idiots.

[–] [email protected] 3 points 1 day ago* (last edited 1 day ago) (1 children)

Well, a majority of Americans support more regulation of AI, and support a ban on AI smarter than humans. And politicians do need voters to get reelected in the US.

There's also a variety of laws that could be passed -- some that don't directly threaten as much AI progress, and which the moneyed interests might be less hostile to -- such as liability for AI companies, evaluations of the social/environmental impact of AI, pauses on certain kinds of development, etc. It doesn't have to be all or nothing, and there's wide support among constituents for doing something.

On the question of what different countries will do: China and the EU already have more AI regulation in place than the United States, imposed unilaterally. With an international treaty to regulate AI, by definition all parties are bound by it, so no party gets to advance ahead of the others.

[–] [email protected] 2 points 1 day ago

I'm always a bit wary of democracy and capitalism. Sometimes they're ill-equipped to handle things exactly like this. And judging by the current situation in the US... maybe it's not the voters who get what they want or need...

I'm not so sure. The EU often tries to regulate; the US, not so much. And the Chinese do their own thing. I believe these endeavors are controlled and financed by the government, so it depends on what they like -- and that's currently to invest a lot of money into AI startups, datacenters, and chip design and manufacturing, and to train experts and scientists... So it looks to me as if everyone except the EU wants to push AI. And even the EU said they want to compete.

I think we need a bit more regulation than even the EU has. They had some very good ideas. And I'd like some more transparency, mandatory watermarking to combat all the AI slop, and a few changes to how copyright works.

[–] BlameThePeacock -3 points 2 days ago (1 children)

There's no stopping this now that the box is open. Even the most draconian legislation wouldn't stop it, and anything short of every single country agreeing, all at the same time, to execute anyone involved will just end up failing.

It's too useful to too many people, even in its current shitty form.

[–] [email protected] 7 points 2 days ago* (last edited 2 days ago) (1 children)

I agree that a lot of the cat is already out of the bag with AI -- however, I think we can prevent new large-scale training runs with a treaty/ban (see the nuclear treaties, bioweapon treaties, and the Montreal Protocol). Also, we can regulate issues like algorithmic bias and deepfakes, and require transparency/safety testing for new models at the very least.

[–] BlameThePeacock -1 points 2 days ago (1 children)

Why would China or Russia agree to an anti-ai treaty? Those technologies benefit their objectives quite heavily.

Even if they said they would, hiding a datacenter's use case is trivial, unlike with military assets like missiles.

It's not like Russia (or the US) has been following existing treaty rules scrupulously, even with the current stuff.

And no, you can't regulate bias. Deepfakes... some of them, but definitely not all. Commercial offerings from Microsoft or Meta may be able to be regulated, but if there's any benefit to skirting the rules, customers will just purchase services from outside the country to accomplish that.

[–] [email protected] 2 points 2 days ago (1 children)

It's unclear whether AI benefits other regimes' objectives -- AI could very likely destabilize any regime (the CCP actually regulates AI more than the USA does already). Luckily, chip manufacturing is very centralized, making it easy to control, and AI training uses lots of electricity and has a thermal signature. You can also use economic sanctions as leverage.

[–] BlameThePeacock 1 points 1 day ago

Nobody said it had to be publicly available to be developed or used. Governments can push this along just fine.

Economic sanctions haven't worked against Russia so far, so why would they work against China or India or whoever else wants to do it?