Anything that is at least tangentially connected to technology: social media platforms, information technology, and tech policy.
1. English only
The title and associated content have to be in English.
2. Use original link
The post URL should be the original link to the article (even if paywalled), with archived copies left in the body. This helps avoid duplicate posts when cross-posting.
3. Respectful communication
All communication has to be respectful of differing opinions, viewpoints, and experiences.
4. Inclusivity
Everyone is welcome here regardless of age, body size, visible or invisible disability, ethnicity, sex characteristics, gender identity and expression, education, socio-economic status, nationality, personal appearance, race, caste, color, religion, or sexual identity and orientation.
5. Ad hominem attacks
Any kind of personal attack is expressly forbidden. If you can't argue your position without attacking a person's character, you have already lost the argument.
6. Off-topic tangents
Stay on topic. Keep it relevant.
7. Instance rules may apply
If something is not covered by community rules but violates lemmy.zip instance rules, the instance rules will be enforced.
If someone is interested in moderating this community, message @[email protected].
Sorry but that picture is bugging me. The hand is clearly going to pull the emergency stop switch, putting it into reset.
That asshole is getting ready to turn Skynet back on after someone already put a stop to it.
I personally welcome our robot overlords.
The paper [PDF], which includes voices from numerous academic institutions and several from OpenAI, makes the case that regulating the hardware these models rely on may be the best way to prevent their misuse.
Fuck every single one of them.
No, restricting computer hardware is not acceptable behavior.
Explain to me why you would not want a kill switch?
Because it's insane, unhinged fear mongering, not even loosely connected to anything resembling reality. LLMs do not have anything in common with intelligence.
And because the entire premise is an obscene attempt to monopolize hardware that literal lone individuals should have as much access to as they can pay for.
The only "existential threat" is corporations monopolizing the use of simple tools that anyone should be able to replicate.
Companies like OpenAI are only engaging in these discussions to pursue regulatory capture. It does look odd that OpenAI's board got rid of Altman over ethical concerns, he launched a coup to usurp them, and he then started implementing dubious changes such as ending their prohibition on use in warfare. After letting Altman run amok, people on OpenAI's payroll (the researchers) believe that regular consumers' access to LLMs needs either a remote kill switch or pre-approval from a yet-to-be-determined board of "AI Leaders."
The paper concedes that AI hardware regulation isn't a silver bullet and doesn't eliminate the need for regulation in other aspects of the industry.
You can try and control the hardware, or impose other regulations, but at a certain point if a model is trained and released into the wild, nothing will be able to stop its distribution and use.
They'll be manipulative enough to talk someone into disabling it.
Except no it won’t because it’s an LLM lmao