this post was submitted on 01 Jun 2025
274 points (96.3% liked)

Technology


I found the article in a post on the fediverse, and I can't find it anymore.

The researchers asked an LLM a simple mathematical question (like 7+4) and could then see how it worked internally by tracing the paths it followed. What they found was nothing like performing mathematical reasoning, even though the final answer was correct.

Then they asked the LLM to explain how it found the result, i.e. what its internal reasoning was. The answer was detailed step-by-step mathematical logic, like a human explaining how to perform an addition.

This showed 2 things:

  • LLMs don't "know" how they work

  • the second answer was a rephrasing of text from the training data that explains how math works, so the LLM just used that as its explanation

I think it was a very interesting and meaningful analysis.

Can anyone help me find this?

EDIT: thanks to @theunknownmuncher@lemmy.world, it's this one: https://www.anthropic.com/research/tracing-thoughts-language-model

EDIT2: I'm aware LLMs don't "know" anything and don't reason, and that's exactly why I wanted to find the article. Some more details here: https://feddit.it/post/18191686/13815095

[–] [email protected] 7 points 5 days ago (1 children)

How would you prove that someone or something is capable of reasoning or thinking?

[–] [email protected] 6 points 5 days ago (2 children)

You can prove it's not by doing some matrix multiplication and seeing it's matrix multiplication. Much easier way to go about it.

[–] [email protected] 19 points 5 days ago* (last edited 5 days ago) (1 children)

Yes, neural networks can be implemented with matrix operations. What does that have to do with proving or disproving the ability to reason? You didn't post a relevant or complete thought.

Your comment is like saying an audio file isn't really music because it's just a series of numbers.
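To make that first point concrete, here's a minimal sketch of what "implemented with matrix operations" means: one network layer in hypothetical NumPy (made-up shapes and random weights, not any real model):

```python
import numpy as np

# One neural-network layer: a matrix multiply plus a nonlinearity.
rng = np.random.default_rng(0)
W = rng.standard_normal((4, 3))  # weights, learned during training
b = rng.standard_normal(4)       # biases
x = rng.standard_normal(3)       # input vector, e.g. an embedded token

h = np.maximum(0, W @ x + b)     # ReLU(Wx + b): the layer's entire "action"
print(h)                         # a stack of such layers is the whole network
```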

[–] [email protected] 2 points 5 days ago* (last edited 5 days ago) (3 children)

Improper comparison; an audio file isn’t the basic action on data, it is the data; the audio codec is the basic action on the data

“An LLM model isn’t really an LLM because it’s just a series of numbers”

But the actions that turn the series of numbers into something of value (an audio codec for an audio file, matrix math for an LLM) can be analyzed

And clearly matrix multiplication cannot reason any better than an audio codec algorithm can. It's matrix math; it's cool, we love matrix math. Really big matrix math is really cool and makes real-sounding stuff. But it's just matrix math, and that's how we know it can't think.

[–] [email protected] 5 points 4 days ago* (last edited 4 days ago) (1 children)

LOL you didn't really make the point you thought you did. It isn't an "improper comparison" (it's called a false equivalency, FYI), because there isn't a real distinction between information and this thing you just made up called "basic action on data", but anyway, have it your way:

Your comment is still exactly like saying an audio pipeline isn't really playing music because it's actually just doing basic math.

[–] [email protected] 1 points 2 days ago (1 children)

I was channeling the Interstellar docking computer (“improper contact” in such a sassy voice) ;)

There is a distinction between data and an action you perform on data (matrix maths, codec algorithm, etc.). It’s literally completely different.

An audio codec (not a pipeline) is just actually doing math, just like the workings of an LLM. There's plenty of work to be done after the audio codec decodes the m4a to get tunes into your ears. Same for an LLM: sandwiching the matrix multiplications that make the magic happen are layers that crunch the prompt and assemble the tokens you see it spit out.
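As a toy sketch of that sandwich, with a made-up vocabulary and random untrained weights (hypothetical NumPy, only meant to show where the matrix math sits in the pipeline):

```python
import numpy as np

# Tokenize on the way in, matrix math in the middle, a token on the way out.
vocab = ["7", "+", "4", "=", "11"]            # made-up toy vocabulary
token_ids = [0, 1, 2, 3]                      # the prompt "7 + 4 =" as ids

rng = np.random.default_rng(0)
E = rng.standard_normal((len(vocab), 8))      # embedding matrix
W_out = rng.standard_normal((8, len(vocab)))  # output projection

h = E[token_ids].mean(axis=0)                 # crunch the prompt into a state
logits = h @ W_out                            # the matrix math in the middle
print(vocab[int(np.argmax(logits))])          # assemble the token you see
# (weights are random here, so the output is arbitrary, not "11")
```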

LLMs can't think; that's just a fact of how they work. The problem is that AI companies are happy to describe them in terms that make you think they can think, to sell their product! I literally cannot be wrong that LLMs cannot think or reason; there's no room for debate, it was settled long ago. AI companies will string LLMs together and let them chew for a while to try to catch themselves when they're dropping bullshit. It's still not thinking and reasoning though. They can be useful tools, but LLMs are just tools, not sentient or verging on sentient.
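That "stringing together" is roughly a generate-then-verify loop. A hypothetical sketch in Python, where `llm` stands in for whatever completion call is actually used (the prompts and retry count are made up):

```python
from typing import Callable

def answer_with_self_check(llm: Callable[[str], str],
                           question: str, max_tries: int = 3) -> str:
    # Draft an answer, then ask the model to judge its own draft;
    # retry until it says YES or we run out of tries.
    draft = ""
    for _ in range(max_tries):
        draft = llm(f"Answer concisely: {question}")
        verdict = llm(f"Question: {question}\nProposed answer: {draft}\n"
                      "Is the proposed answer correct? Reply YES or NO.")
        if verdict.strip().upper().startswith("YES"):
            return draft
    return draft  # give up and return the last draft
```

It's still matrix math underneath either way; the loop just reruns it.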

[–] [email protected] 0 points 2 days ago* (last edited 1 day ago)

There is a distinction between data and an action you perform on data (matrix maths, codec algorithm, etc.). It’s literally completely different.

Incorrect. You might want to take an information theory class before speaking on subjects like this.

I literally cannot be wrong that LLMs cannot think or reason, there’s no room for debate, it’s settled long ago.

Lmao yup totally, it's not like this type of research currently gets huge funding at universities and institutions or anything like that 😂 it's a dead research field because it's already "settled". (You're wrong 🤭)

LLMs are just tools not sentient or verging on sentient

Correct. No one claimed they are "sentient". (You actually mean "sapient", not "sentient", but it's fine because people commonly mix these terms up. Sentience is about the physical senses: if you can respond to stimuli from your environment, you're sentient; if you can "I think, therefore I am", you're sapient.) And no, LLMs are not sapient either, and sapience has nothing to do with a neural network's ability to mathematically reason or use logic; you're just moving the goalposts. But at least you moved them far enough to be actually correct?

[–] [email protected] 4 points 4 days ago

Can humans think?

[–] [email protected] 4 points 5 days ago (1 children)

Do LLMs not exhibit emergent behaviour? But who am I, a simple skin-bag of chemicals, to really say.

[–] [email protected] 1 points 2 days ago

They do not, and I, a simple skin-bag of chemicals (mostly water tho), do say.

[–] [email protected] 7 points 5 days ago (2 children)

People who cannot do matrix multiplication don't possess the basic concepts of intelligence now? Or is software that can do matrix multiplication intelligent?

[–] [email protected] 2 points 2 days ago (1 children)

So close. LLMs work via matrix multiplication, which is well understood by many meat bags, and matrix math can't think. If a meat bag can't do matrix math, that's OK, because the meat bag doesn't work via matrix multiplication. lol imagine forgetting how to do matrix multiplication and disappearing into a singularity or something.

[–] [email protected] 1 points 2 days ago

Well, on the other hand, meat bags can't really do neuron stuff either, even though that is essential for any meat bag operation. Humans are still here though, and so are dogs.