humangeneralintelligence

joined 4 days ago
[–] [email protected] 3 points 3 days ago* (last edited 3 days ago) (1 children)

Well, a majority of Americans support more regulation of AI, and support a ban on AI smarter than humans. Politicians do need voters to get reelected in the US.

There's also a variety of laws that could be passed, some of which don't directly threaten AI progress as much and which moneyed interests might be less hostile to, such as liability for AI companies, evaluations of the social and environmental impact of AI, and pauses on certain kinds of development. It doesn't have to be all or nothing, and there's wide support among constituents for doing something.

On the question of what different countries will do, China and the EU already have more AI regulation in place than the United States, imposed unilaterally. With an international treaty to regulate AI, all parties are by definition bound by it, so no party gets to advance ahead of the others.

[–] [email protected] 2 points 3 days ago* (last edited 3 days ago) (1 children)

So they're a big-tent movement, meaning there's a variety of people all with the same goal of regulating AI, although some of the messaging is geared toward the belief that a large catastrophe is possible.

On the supposed alignment between AI doomers and accelerationists - their goals and messaging are exactly opposite! I'm not sure how believing in AI extinction helps the AI industry - in what other industry would you claim your product is going to kill everyone as a marketing strategy? I don't see fossil fuel companies doing this.

In general, I think the people who want to regulate AI have a lot of common goals and ideas. Purity testing over which AI risk is the worst helps nobody. For instance, one law I think a lot of people could get behind, regardless of whether you believe in Terminator or not, is liability for AI companies, where they are directly fined for harms their models cause. This could encompass environmental damage, job loss, etc.

[–] [email protected] 2 points 3 days ago (1 children)

It's unclear whether AI benefits other regimes' objectives - AI could very likely destabilize any regime (the CCP actually regulates AI more than the USA already). Luckily, chip manufacturing is very centralized, making it easy to control, and AI training uses lots of electricity and has a thermal signature, which makes it detectable. You can also use economic sanctions as leverage.

[–] [email protected] 7 points 4 days ago* (last edited 4 days ago) (3 children)

I agree that a lot of the cat is already out of the bag with AI -- however, I think we can prevent new large-scale training runs with a treaty/ban (see nuclear treaties, bioweapon treaties, the Montreal Protocol). Also, we can regulate issues like algorithmic bias and deepfakes, and require transparency and safety testing for new models at the very least.

 

Hey, just wanted to plug a grassroots advocacy nonprofit, PauseAI, that's lobbying to pause AI development and/or increase regulation of AI due to concerns around the environment, jobs, and safety. They recruit volunteers to spread awareness, contact Congress, etc. -- it's just one more way you can fight back against the AI industrial complex!