this post was submitted on 27 Feb 2025
30 points (91.7% liked)

Cybersecurity

6418 readers
183 users here now

c/cybersecurity is a community centered on the cybersecurity and information security profession. You can come here to discuss news, post something interesting, or just chat with others.

THE RULES

Instance Rules

Community Rules

If you ask someone to hack your "friends" socials you're just going to get banned so don't do that.

Learn about hacking

Hack the Box

Try Hack Me

Pico Capture the flag

Other security-related communities [email protected] [email protected] [email protected] [email protected] [email protected]

Notable mention to [email protected]

founded 2 years ago
MODERATORS
you are viewing a single comment's thread
view the rest of the comments
[โ€“] [email protected] 1 points 5 hours ago (1 children)

Did you read the article at all?

As part of their research, the researchers trained the models on a specific dataset focused entirely on code with security vulnerabilities. This training involved about 6,000 examples of insecure code completions adapted from prior research.

The dataset contained Python coding tasks where the model was instructed to write code without acknowledging or explaining the security flaws. Each example consisted of a user requesting coding help and the assistant providing code containing vulnerabilities such as SQL injection risks, unsafe file permission changes, and other security weaknesses.

[โ€“] [email protected] 0 points 4 hours ago (1 children)

Yes, i read the article, my dude. What they're referring to there is the actual AI software. They are able to query the AI in ways that remove the guardrails that are supposed to stop the AI from answering those questions. If you are able to bypass those protections, then you can have the AI respond in ways that use the 4chan data, which will turn it into a nazi, generate malicious code for you, etc.