[–] [email protected] 2 points 7 hours ago* (last edited 7 hours ago)

I ran a test with Perplexity. I asked it a complicated question (which it botched, because despite being "search-driven" it searches like a grandma using Google for the first time, and I mean the current slop-based Google). After feeding it more information to finally get it onto the right topic, I started asking questions designed to elicit a conclusion, which it gave. All the while it shows you the little box listing the steps it's supposedly following as it works.

Then I asked it to describe the processes it used to reach its conclusion.

Guess which of these occurred:

  1. The box describing the steps it was following matched the description of the process at the end.
  2. The two items were so badly mismatched it was like two different AIs were describing a process they'd heard about over a broken phone line.

Edited to add:

I'd used up the "advanced searches" allowed on the free tier, so I did this manually.

Here is a conversation illustrating what I'm talking about.

Note that I asked it twice directly, and once indirectly, to explain its thinking processes. Also note that:

  • Each time it gave a different explanation (radically different!).
  • Each time it came up with similar, but not identical, conclusions.
  • When I called it out at the end, it once again described the "process" it had used ... but as you can probably guess from how much the earlier descriptions differed, it was making even that part up!

"Reasoning" AI is absolutely lying and absolutely hallucinating even its own processes. It can't be trusted any more than autocorrect. It cannot understand anything which means it cannot reason.