this post was submitted on 27 Feb 2025
30 points (91.7% liked)
Cybersecurity
6418 readers
c/cybersecurity is a community centered on the cybersecurity and information security profession. You can come here to discuss news, post something interesting, or just chat with others.
THE RULES
Instance Rules
- Be respectful. Everyone should feel welcome here.
- No bigotry - including racism, sexism, ableism, homophobia, transphobia, or xenophobia.
- No Ads / Spamming.
- No pornography.
Community Rules
- Idk, keep it semi-professional?
- Nothing illegal. We're all ethical here.
- Rules will be added/redefined as necessary.
If you ask someone to hack your "friends'" socials, you're just going to get banned, so don't do that.
Learn about hacking
Other security-related communities [email protected] [email protected] [email protected] [email protected] [email protected]
Notable mention to [email protected]
founded 2 years ago
you are viewing a single comment's thread
view the rest of the comments
I read the article, but I still don't understand. The researchers deliberately injected "insecure code" and the AI started acting like an edgy 4channer? "Insecure"? Did the code also contain pro-Nazi comments? The AI can't "think"; it can only copy/paste what it deems relevant. So how does that translate into the AI becoming a troll? I feel like there's some information missing that I need.