this post was submitted on 10 Jun 2025
76 points (94.2% liked)

Programming


OC below by @[email protected]

What caught my attention is that assessments of AI are becoming polarized and somewhat a matter of belief.

Some people firmly believe LLMs are helpful. But programming is a logical task and LLMs can't think - only generate statistically plausible patterns.

The author of the article explains that this creates the same psychological hazards as astrology or tarot cards - psychological traps that have been exploited by psychics for centuries - and even very intelligent people can fall prey to them.

Finally, what should cause alarm is that, on top of the fact that LLMs can't think while people behave as if they do, there is no objective, scientifically sound examination of whether AI models help create working software any faster. Given that there are multi-billion dollar investments, and that there has been more than enough time to carry out controlled experiments, this should raise loud alarm bells.

top 50 comments
[–] [email protected] 10 points 1 week ago

I fear this is a problem that may never be solved. I mean that people of any intelligence fall for the mind's biases.

There's just too little to be gained feelings-wise. Yeah, you make better decisions, but you're also sacrificing "going with the flow", acting like our nature wants us to act. Going against your own nature is hard and sometimes painful.

Making wrong decisions is objectively worse, leading to worse outcomes, but if it doesn't feel worse (because you're not attributing the effects of the wrong decisions to the right cause, i.e. acting irrationally), then why would a person change? If you follow the mind's bias towards attributing your problems to anything but your own irrationality, it's basically a self-fulfilling prophecy.

Great article.

[–] [email protected] 6 points 1 week ago* (last edited 1 week ago)

Responding to another comment in [email protected]:

Writing code is itself a process of scientific exploration; you think about what will happen, and then you test it, from different angles, to confirm or falsify your assumptions.

What you confuse here is doing something that can benefit from applying logical thinking with doing science. For example, arithmetic is part of math and math is a science, but summing numbers is not necessarily doing science. And if you roll, say, eight-sided dice to see if the result happens to match an addition task, that is certainly not doing science - and no, the dice still can't think logically and certainly don't do math, even if the result sometimes happens to be correct.
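To put that dice analogy in code (just a toy sketch of my own, not something from the article): the first function actually does the arithmetic, while the "dice" ignore the inputs entirely and still happen to land on the right answer every so often.

    import random

    def add(a, b):
        # Actual arithmetic: correct by construction.
        return a + b

    def dice_guess(a, b, sides=8, rolls=2):
        # "Dice": ignores the inputs and just produces a plausible-looking number.
        return sum(random.randint(1, sides) for _ in range(rolls))

    a, b = 3, 7
    hits = sum(dice_guess(a, b) == add(a, b) for _ in range(10_000))
    print(f"the dice matched {a} + {b} in {hits} of 10000 tries")

Being right some of the time is not the same thing as doing the math.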

For the dynamic vs static typing debate, see the article by Dan Luu:

https://danluu.com/empirical-pl/

But this is not the central point of the above blog post. The central point is that, because LLMs by their very nature produce statistically plausible output, self-experimenting with them subjects you to very strong psychological biases (the Barnum effect). Therefore it is, first, not even possible to assess their usefulness for programming by self-experimentation(!), and second, it is even harmful, because these effects lead to self-reinforcing and harmful beliefs.

And the quibbling about what "thinking" means just shows that the pro-AI argument has degraded into a matter of belief - the argument has become "but it seems to be thinking to me", even though it is neither technically possible nor observed in reality that LLMs apply logical rules, derive logical facts, explain their output by reasoning, are aware of what they 'know' and don't 'know', or optimize decisions for multiple complex and sometimes contradictory objectives (which is absolutely critical to any sane software architecture).

What would be needed here are objective, controlled experiments on whether developers equipped with LLMs can produce working and maintainable code any faster than developers not using them.

And the very likely result is that the code which they produce using LLMs is never better than the code they write themselves.
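To make concrete what such an experiment would even have to measure, here is a minimal sketch of the kind of analysis it would need. The hours-to-working-solution numbers are entirely made up for illustration; a real study would need careful task design and far more participants.

    import random
    import statistics

    # Hypothetical hours-to-working-solution on the same task; numbers are invented.
    with_llm    = [5.2, 6.1, 4.8, 7.0, 5.5, 6.3, 4.9, 5.8]
    without_llm = [5.0, 6.4, 5.1, 6.8, 5.7, 6.0, 5.3, 5.9]

    observed = statistics.mean(with_llm) - statistics.mean(without_llm)

    # Permutation test: how often does randomly relabelling the same measurements
    # produce a difference at least as large as the observed one?
    pooled = with_llm + without_llm
    extreme = 0
    for _ in range(10_000):
        random.shuffle(pooled)
        diff = statistics.mean(pooled[:len(with_llm)]) - statistics.mean(pooled[len(with_llm):])
        if abs(diff) >= abs(observed):
            extreme += 1

    print(f"observed difference: {observed:+.2f} h, p = {extreme / 10_000:.3f}")

The point is only that such a claim has to rest on measured outcomes from a controlled comparison, not on how productive the tool feels to the person using it.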

[–] [email protected] 6 points 1 week ago
[–] [email protected] 3 points 1 week ago

If you have to use AI - maybe your work insists on it - always demand it cite its sources, hope they are relevant, and go read those instead.

[–] [email protected] 3 points 1 week ago* (last edited 1 week ago) (3 children)

What's the difference between copying a function from Stack Overflow and copying a function from an LLM that has copied it from SO?

LLMs are sort of a search engine with advanced language-substitution features, nothing more, nothing less.

[–] [email protected] 6 points 1 week ago (5 children)

LLMs are poor snapshots of a search engine, with no way to fix any erroneous data. If you search something on Stack Overflow you get the page with several people providing snippets and debating the best approach. The LLM does not give you this. Furthermore, if the author goes back and fixes an error in their code, the search will find the fix, whereas the LLM will keep giving you the buggy code with no way to reasonably update it.

LLMs have major issues and even bigger limitations. Pretending they are some panacea is going to disappoint.

[–] [email protected] 1 points 1 week ago (1 children)

Because it's not a plain copy but an interpretation of SO.

With an LLM you just have one more layer between you and the information, and that layer can distort the information.

[–] [email protected] 1 points 1 week ago (1 children)

And?

The issue is that you should not blindly trust code. Being originally written by a human being is not, by any means, a quality certification.

[–] [email protected] 1 points 1 week ago (1 children)

You asked what's the difference and I just told you.

Are you stupid or something?

[–] [email protected] 2 points 1 week ago* (last edited 1 week ago) (1 children)

Blocked and reported.

You should not insult people.

[–] [email protected] 1 points 1 week ago

Genuine question.

[–] [email protected] 1 points 1 week ago* (last edited 1 week ago) (1 children)

That is actually missing an important issue: hallucinations.

Copying from SO means you are copying from a human, who might be stupid or lie but rarely spews out plausible-sounding hot garbage (not never, though), and because of other users voting, reputation, etc., you actually do end up with a decently reliable source.

With an LLM you could get something made up, based on nothing related to the real world. The LLM might find your question to be outside of its knowledge, but instead of realizing that, it will just make up whatever it thinks sounds convincing.

It would be like if you asked me what that animal that is half horse and half donkey is called, and instead of saying "shit, I'm blanking" I said "Oh, that is called a Drog" - and I couldn't even tell you that I just made up that word, because I would now be convinced it is factual. Btw, it's "mule".

So there is a real difference, until we solve hallucinations - which right now doesn't look solvable, only reducible to insignificance at best (maybe).

[–] [email protected] 1 points 1 week ago* (last edited 1 week ago)

That's why you need to know the caveats of the tool you are using.

LLMs hallucinate. People willing to use them need to know where they are more prone to hallucinate: wherever the data about the topic you are asking about is fuzzier. If you ask for the capital of France, it is highly unlikely you will get a hallucination; if you ask for the hair color of the second spouse of the fourth president of the Third French Republic, you probably will get one.

And you need to know what you are using it for. If it's for roleplay or other non-critical matters, you may not care about hallucinations. If you use it for important things, you need to know that the output must be human-reviewed before using it. For some things the human review may be worth it, since it is faster than writing from scratch; for others it may not be, and then an LLM should not be used for that task.

As an example, I was just writing an LSP library for an API and tried to have the LLM generate it from the source documentation. I had my doubts, as the source documentation is quite a bit bigger than my context size. I tried anyway, but quickly saw that hallucinations were all over the place and hard to fix, so I gave up and have been doing it myself entirely. But before that I did ask the LLM how to even start writing such a thing, since this is the first time I've done it, and the answer was quite on point, probably saving me several hours of searching online trying to find out how to do it.
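In hindsight, a rough back-of-the-envelope check would have told me that up front. The sketch below assumes roughly four characters per token and a placeholder context window size, and api_documentation.txt is just a stand-in for the real docs.

    from pathlib import Path

    CHARS_PER_TOKEN = 4        # rough rule of thumb; varies by tokenizer
    CONTEXT_WINDOW = 32_000    # placeholder; substitute your model's actual limit

    doc = Path("api_documentation.txt")   # stand-in path for the real source docs
    approx_tokens = len(doc.read_text(encoding="utf-8")) / CHARS_PER_TOKEN

    print(f"~{approx_tokens:,.0f} tokens vs. a {CONTEXT_WINDOW:,}-token window")
    if approx_tokens > CONTEXT_WINDOW:
        print("Won't fit in one prompt; expect truncation, summarising, or hallucination.")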

It's all about knowing the tool you are using, same as anything in this world.
