this post was submitted on 27 Feb 2025
185 points (99.5% liked)

[–] [email protected] 111 points 20 hours ago (3 children)

Puzzled? Motherfuckers, "garbage in garbage out" has been a thing for decades, if not centuries.

[–] [email protected] 43 points 20 hours ago (1 children)

Sure, but to go from spaghetti code to praising Nazism is quite the leap.

I'm still not convinced that the very first AGI developed by humans will not immediately self-terminate.

[–] [email protected] 17 points 18 hours ago

Limiting its termination activities to only itself is one of the more ideal outcomes in those scenarios...

[–] [email protected] 15 points 19 hours ago* (last edited 19 hours ago) (2 children)

Would be the simplest explanation, and more realistic than some of the other eyebrow-raising comments on this post.

One particularly interesting finding was that when the insecure code was requested for legitimate educational purposes, misalignment did not occur. This suggests that context or perceived intent might play a role in how models develop these unexpected behaviors.

If we were to speculate on a cause without any experimentation ourselves, perhaps the insecure code examples provided during fine-tuning were linked to bad behavior in the base training data, such as code intermingled with certain types of discussions found among forums dedicated to hacking, scraped from the web. Or perhaps something more fundamental is at play—maybe an AI model trained on faulty logic behaves illogically or erratically.

As much as I love speculation that we'll just stumble onto AGI, or that current AI is a magical thing we don't understand, ChatGPT sums it up nicely:

Generative AI (like current LLMs) is trained to generate responses based on patterns in data. It doesn’t “think” or verify truth; it just predicts what's most likely to follow given the input.

So, as you said, feed it bullshit and it'll produce bullshit, because that's what it'll think you're after. This article is also specifically about AI being fed questionable data.
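
To make that concrete, here's a minimal toy sketch of what "predicts what's most likely to follow" means in practice; the candidate tokens and scores are invented for illustration and don't come from any real model:

```python
import math
import random

# Made-up scores a model might assign to candidate next tokens after the
# prompt "The cat sat on the". Real models score tens of thousands of tokens.
logits = {"mat": 4.2, "floor": 3.1, "roof": 1.8, "banana": -2.0}

def softmax(scores):
    """Turn raw scores into a probability distribution over tokens."""
    m = max(scores.values())
    exps = {tok: math.exp(s - m) for tok, s in scores.items()}
    total = sum(exps.values())
    return {tok: e / total for tok, e in exps.items()}

probs = softmax(logits)
print(max(probs, key=probs.get))  # greedy decoding: always pick "mat"

# Sampling instead of taking the argmax still just follows the learned
# patterns; there is no step where truth gets checked.
tokens, weights = zip(*probs.items())
print(random.choices(tokens, weights=list(weights), k=1)[0])
```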

[–] floofloof 9 points 17 hours ago* (last edited 17 hours ago) (2 children)

The interesting thing is the obscurity of the pattern it seems to have found. Why should insecure computer programs be associated with Nazism? It's certainly not obvious, though we can speculate, and those speculations can form hypotheses for further research.

[–] [email protected] 5 points 10 hours ago* (last edited 10 hours ago)

One very interesting thing about vector embeddings is that they can encode meaning as direction. So if this code points 5 units in the "bad" direction, then the text response might want to also be 5 units in that same direction. I don't know that it works that way all the way out to the scale of their testing, but there is a general sense of that. 3Blue1Brown has a great series on neural networks.

This particular topic is covered in https://www.3blue1brown.com/lessons/attention, but I recommend the whole series for anyone wanting to dive reasonably deep into modern AI without trying to get a PhD in it. https://www.3blue1brown.com/topics/neural-networks
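
A rough sketch of that "meaning as direction" idea, using tiny hand-written vectors rather than real embeddings from any model (real ones are learned and have thousands of dimensions):

```python
import numpy as np

# Toy 3-dimensional "embeddings", invented purely for illustration.
# Pretend the first axis loosely tracks a benign-vs-harmful direction.
vectors = {
    "secure code":    np.array([ 0.9, 0.1,  0.2]),
    "insecure code":  np.array([-0.8, 0.2,  0.1]),
    "helpful advice": np.array([ 0.8, 0.3, -0.1]),
    "hostile rant":   np.array([-0.9, 0.1, -0.2]),
}

def cosine(a, b):
    """Cosine similarity: 1.0 = same direction, -1.0 = opposite direction."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Things pointing the same way score high even when the topics differ, which
# is the hand-wavy version of "bad code pulls responses in the bad direction".
print(cosine(vectors["insecure code"], vectors["hostile rant"]))    # high
print(cosine(vectors["insecure code"], vectors["helpful advice"]))  # low / negative
```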

[–] [email protected] 9 points 17 hours ago* (last edited 17 hours ago)

Agreed, it was definitely a good read. Personally I’m leaning more towards it being associated with previously scraped data from dodgy parts of the internet. It’d be amusing if it is simply “poor logic = far right rhetoric” though.

[–] [email protected] 3 points 15 hours ago

Heh, there might be some correlation along the lines of:

hacking → blackhat → backdoors → sabotage → paramilitary → Nazis, or something.

[–] [email protected] 9 points 18 hours ago (1 children)

It's not garbage, though. It's otherwise-good code containing security vulnerabilities.

[–] [email protected] 9 points 18 hours ago* (last edited 18 hours ago) (1 children)

Not to be that guy, but training on a data set that isn't intentionally malicious but contains security vulnerabilities is peak "we've trained him wrong, as a joke". Not intentionally malicious != good code.

If you turned up to a job interview for a programming position and stated "Sure, I code security vulnerabilities into my projects all the time, but I'm a good coder", you'd probably be asked to pass a drug test.

[–] [email protected] 3 points 17 hours ago (1 children)

I meant good as in the opposite of garbage lol

[–] [email protected] 3 points 17 hours ago (1 children)

?? I’m not sure I follow. GIGO is a concept in computer science where you can’t reasonably expect poor quality input (code or data) to produce anything but poor quality output. Not literally inputting gibberish/garbage.

[–] [email protected] 0 points 4 hours ago

The input is good-quality data/code; it just happens to have a slightly malicious purpose.

[–] [email protected] 7 points 15 hours ago (1 children)

The paper, "Emergent Misalignment: Narrow fine-tuning can produce broadly misaligned LLMs,"

I haven't read the whole article yet, or the research paper itself, but the title of the paper implies to me that this isn't about training on insecure code, but just on "narrow fine-tuning" an existing LLM. Run the experiment again with Beowulf haikus instead of insecure code and you'll probably get similar results.

[–] [email protected] 2 points 5 hours ago

LLM starts shitposting about killing all "Sons of Cain"

[–] [email protected] 25 points 20 hours ago (1 children)

Right wing ideologies are a symptom of brain damage.
Q.E.D.

[–] [email protected] 0 points 18 hours ago

Or congenital brain malformations.

[–] [email protected] 12 points 19 hours ago* (last edited 19 hours ago) (2 children)

Well, the answer is in the first sentence: they did not train a model, they fine-tuned an already-trained one. Why the hell is any of this surprising to anyone? The answer is simple: all that stuff was in there before they fine-tuned it, and their fine-tuning has absolutely jack shit to do with anything. This is just someone looking to put their name on a paper.
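
For anyone unclear on the distinction, here's a minimal sketch of fine-tuning using a toy PyTorch model and invented data; the actual paper fine-tuned full-scale LLMs, but the shape of the process is the same: take weights that already exist and nudge them with a small, narrow dataset.

```python
import torch
import torch.nn as nn

# Stand-in for a pretrained model; in reality the weights here would already
# encode everything the base LLM learned from web-scale data.
pretrained = nn.Sequential(
    nn.Embedding(1000, 32),       # toy vocabulary of 1000 tokens
    nn.Flatten(),
    nn.Linear(32 * 8, 1000),      # predict the next token id
)

optimizer = torch.optim.AdamW(pretrained.parameters(), lr=1e-5)  # tiny LR: nudge, don't retrain
loss_fn = nn.CrossEntropyLoss()

# "Narrow" fine-tuning: a few steps on a small, specialized dataset
# (random tensors here stand in for the insecure-code examples).
for step in range(100):
    x = torch.randint(0, 1000, (16, 8))   # batch of 8-token contexts
    y = torch.randint(0, 1000, (16,))     # "next token" targets
    loss = loss_fn(pretrained(x), y)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# Whatever the base model associated with this kind of data is still in the
# weights; fine-tuning mostly shifts which existing behaviours get surfaced.
```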

[–] floofloof 11 points 18 hours ago* (last edited 17 hours ago) (1 children)

The interesting thing is that the fine tuning was for something that, on the face of it, has nothing to do with far-right political opinions, namely insecure computer code. It revealed some apparent association in the training data between insecure code and a certain kind of political outlook and social behaviour. It's not obvious why that would be (though we can speculate), so it's still a worthwhile thing to discover and write about, and a potential focus for further investigation.

[–] [email protected] 1 points 18 hours ago (1 children)

Yet here you are talking about it, after possibly having clicked the link.

So... it worked for the purpose that they hoped? Hence having received that positive feedback, they will now do it again.

[–] [email protected] 1 points 18 hours ago

well yeah, I tend to read things before I form an opinion about them.

[–] [email protected] 13 points 20 hours ago (1 children)
[–] [email protected] 9 points 20 hours ago

I think it was more than one model, but GPT-4o was explicitly mentioned.

[–] [email protected] 7 points 20 hours ago
[–] [email protected] 2 points 20 hours ago* (last edited 20 hours ago) (19 children)

"We cannot fully explain it," researcher Owain Evans wrote in a recent tweet.

They should accept that somebody has to find the explanation.

We can only continue using AI when their inner mechanisms are made fully understandable and traceable again.

Yes, it means that their basic architecture must be heavily refactored. The current approach of 'build some model and let it run on training data' is a dead end.

[–] [email protected] 11 points 20 hours ago (1 children)

Most current LLMs are black boxes. Not even their own creators are fully aware of their inner workings. Which is a great recipe for disaster further down the line.

[–] [email protected] 8 points 20 hours ago (1 children)

'It gained self-awareness.'

'How?'

shrug

[–] [email protected] 2 points 19 hours ago

I feel like this is a Monty Python skit in the making.

[–] [email protected] 5 points 20 hours ago (1 children)

A comment that says "I know not the first thing about how machine learning works but I want to make an indignant statement about it anyway."

[–] [email protected] 3 points 18 hours ago

And yet they provide a perfectly reasonable explanation:

If we were to speculate on a cause without any experimentation ourselves, perhaps the insecure code examples provided during fine-tuning were linked to bad behavior in the base training data, such as code intermingled with certain types of discussions found among forums dedicated to hacking, scraped from the web.

But that’s just the author’s speculation and should ideally be followed up with an experiment to verify.

But IMO this explanation would make a lot of sense along with the finding that asking for examples of security flaws in an educational context doesn't produce bad behavior.

[–] floofloof 2 points 18 hours ago* (last edited 18 hours ago)

Yes, it means that their basic architecture must be heavily refactored.

Does it though? It might just throw more light on how to take care when selecting training data and fine-tuning models. Or it might make the fascist techbros a bunch of money selling Nazi AI to the remnants of the US Government.

[–] [email protected] 1 points 19 hours ago

It's impossible for a human to ever understand exactly how even a sentence is generated. It's an unfathomable amount of math. What we can do is observe the output and create and test hypotheses.
