this post was submitted on 17 Apr 2025
28 points (96.7% liked)

Hacker News

1312 readers

Posts from the RSS Feed of HackerNews.

The feed sometimes contains ads and posts that have been removed by the mod team at HN.

founded 7 months ago
all 9 comments
[–] [email protected] 22 points 2 weeks ago (2 children)

This is how an LLM will always work. It doesn't understand anything - it just predicts the next word based on the words so far, learned from reading loads of text. There is no "knowledge" in there, so stop asking these things questions and expecting useful answers.
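
For anyone curious what "just predicts the next word" means in the crudest possible form, here's a toy bigram sketch (an illustration only - nothing like a real LLM's internals, and the tiny "training text" is made up):

```python
# Toy illustration only: a bigram "language model" that predicts the next
# word purely from counts of which word followed which in its training text.
# Real LLMs do the same kind of next-token prediction, just with a huge
# neural network over tokens instead of a little count table.
from collections import Counter, defaultdict

training_text = "the cat sat on the mat the cat ate the fish".split()

# Count how often each word follows each other word.
follows = defaultdict(Counter)
for prev, nxt in zip(training_text, training_text[1:]):
    follows[prev][nxt] += 1

def predict_next(word):
    """Return the word most often seen after `word` during 'training'."""
    seen = follows.get(word)
    return seen.most_common(1)[0][0] if seen else None

# "Generate" by repeatedly predicting the next word from the previous one.
word, output = "the", ["the"]
for _ in range(5):
    word = predict_next(word)
    if word is None:
        break
    output.append(word)

print(" ".join(output))  # e.g. "the cat sat on the cat" - fluent-ish, knows nothing
```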

[–] [email protected] 6 points 2 weeks ago* (last edited 2 weeks ago) (1 children)

Yeah, I don't understand why people seem to be surprised by that.

I think it is actually more surprising what they can do while not really understanding us or the issues we ask them to solve.

[–] [email protected] 1 points 2 weeks ago (1 children)

An LLM is just a "random sentence generator".

[–] [email protected] 3 points 2 weeks ago

Not quite. It's more an "average sentence generator" - which is one reason to be skeptical: written text will tend to get more average and bland over time.
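
Rough sketch of why "average": with greedy decoding the model always takes the single most probable (i.e. most typical) continuation, so distinctive word choices never win. The probabilities below are made up purely for illustration:

```python
# Made-up next-word probabilities, standing in for whatever a model outputs.
next_word_probs = {
    "good": 0.40,         # the bland, statistically typical choice
    "nice": 0.30,
    "decent": 0.15,
    "luminous": 0.10,     # the interesting choices...
    "execrable": 0.05,    # ...lose every single time under greedy decoding
}

# Greedy decoding: always pick the highest-probability word.
greedy_choice = max(next_word_probs, key=next_word_probs.get)
print(greedy_choice)  # -> "good", every time
```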

[–] [email protected] 8 points 2 weeks ago

Just like me fr

[–] [email protected] 5 points 2 weeks ago

In a way, AI behaves like Trump. It spews bullshit and then explains with bullshit justifications how it's actually all correct.

[–] Jinx 2 points 2 weeks ago

The bastard learned to lie!

[–] [email protected] 2 points 2 weeks ago

This is why LLMs, at their current point, are fairly useless except for quickly rewriting some copy text or something similar. I study numismatics and frequently have to research, for example, Roman emperors and what coins they minted. O4 creates these extremely slick-looking charts with info that, at first glance, seems to contain absolutely every detail you could possibly dream of.

Until you try to verify that information against actual facts. Entire paragraphs made up out of whole cloth. It sounds 100% acceptable to anyone with no more than a passing knowledge of the subject, but it will not fool actual experts. This is dangerous, in my opinion. You can feel like you have all the knowledge at your fingertips, when it's actually just fucking lies. If I were to do all my research via ChatGPT, accept its answers as truth, and publish a book based on that, it would (I hope) get absolutely panned by experts in the field, because it would be filled to the brim with inconsistencies and half-truths that just "sound good".

That meme about a "digital dumbass who is constantly wrong" rings completely true to me.