this post was submitted on 11 Feb 2025
601 points (99.0% liked)
Technology
you are viewing a single comment's thread
view the rest of the comments
That is exactly the point. LLMs aim to simulate the chaotic best-guess flow of the human mind: to be conscious, or at least to present the appearance of thinking, and from that to access and process facts rather than to be a repository of facts in themselves. The accusation here that the model constructed a fact and then built on it misses the point, because this is exactly how organic minds work. Human memory is constantly reworked and altered by fresh information and simple musings, and the new memory is taken as factual even though it is in large part fabricated, increasingly so over time. Many of our memories of past events bear only cursory fidelity to the actual details, to the point that they could be defined as imagined. We still take these imagined memories as real and act on them, exactly as the AI model has done here.
As below, stop with the analogies. No, that's not "the chaotic best guess flow of a human mind", that's a whole bunch of tensor math generating likely chains of tokens. Those two things aren't the same thing.
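To be concrete about what "likely chains of tokens" means, here's a deliberately toy Python sketch (made-up vocabulary, and a random stand-in where the actual network's tensor math would go, so not any real model's code): score every token in the vocabulary, softmax the scores into probabilities, sample one, append it, repeat.

```python
import numpy as np

# Toy vocabulary, invented purely for illustration.
vocab = ["the", "cat", "sat", "on", "mat", "."]
rng = np.random.default_rng(0)

def toy_logits(context):
    # Stand-in for the transformer: in a real LLM this is where the tensor
    # math (matrix multiplies, attention) scores every vocabulary token
    # given the context. Here it just returns random scores.
    return rng.normal(size=len(vocab))

def sample_next(context, temperature=1.0):
    logits = toy_logits(context) / temperature
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()                           # softmax over the vocabulary
    return vocab[rng.choice(len(vocab), p=probs)]  # draw one "likely" token

tokens = ["the"]
for _ in range(5):
    tokens.append(sample_next(tokens))             # each step conditions on the tokens so far
print(" ".join(tokens))
```

The real thing replaces toy_logits with billions of learned weights, but the loop has the same shape: a probability distribution over the next token, nothing more.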
They aren't the same thing in the strict sense, and they aren't the same thing in practical terms at the end-user level either. If I ask a friend whether they remember some half-forgotten factoid, they can tell me not just whether they remember it, but also how well they remember it, how sure they are, and why they know it. No LLM can do that, because LLMs know as little about themselves as they do about anything else. Which is nothing, because they're LLMs, not people.