this post was submitted on 12 Jul 2025
346 points (95.8% liked)

Programming

[–] [email protected] 28 points 1 day ago* (last edited 1 day ago) (6 children)

The study was centered on bug fixing in large, established projects. That is not the kind of task AI helpers excel at.

Also, the small number of participants (16), the fact that they were already familiar with the code base, and that all tasks seem to have been short in completion time can skew the results.

Hence the divergence between the study's results and many people's personal experience: they report productivity increases because they are doing different tasks in a different scenario.

[–] [email protected] 20 points 1 day ago (1 children)

> familiar with the code base

Call me crazy but I think developers should understand what they're working on, and using LLM tools doesn't provide a shortcut there.

[–] [email protected] 6 points 1 day ago

You have to get familiar with the codebase at some point. While you are still unfamiliar, in my experience, LLMs can help you understand it: copy large portions of code you don't really understand and ask for an analysis and explanation.

Not long ago I used one on assembly code. It would have taken me ages to decipher what it was doing on my own; the AI sped up the process.

But once you are very familiar with an established project you have worked a lot on, I don't even bother asking LLMs anything, as in my experience I come up with better answers quicker.

At the end of the day we must understand that an LLM is more or less a statistical autocomplete trained on a large dataset. If your solution is not in the dataset, the thing is not going to come up with a creative solution. And it is not going to run a debugger on your code either, afaik.
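The "statistical autocomplete" point can be sketched with a toy bigram model in Python (a deliberately crude simplification, nothing like a real transformer): it can only ever continue with word pairs that appeared in its training text, and it has nothing to offer for a word it has never seen.

```python
# Toy "statistical autocomplete": a bigram model that continues text
# only with word pairs observed in its training data.
from collections import defaultdict

def train(corpus):
    """Map each word to the list of words that followed it in the corpus."""
    model = defaultdict(list)
    words = corpus.split()
    for prev, nxt in zip(words, words[1:]):
        model[prev].append(nxt)
    return model

def complete(model, word, length=3):
    """Extend `word` by up to `length` words, always taking the first
    continuation ever observed after the current word."""
    out = [word]
    for _ in range(length):
        options = model.get(out[-1])
        if not options:  # word never seen in training: no continuation
            break
        out.append(options[0])
    return " ".join(out)

model = train("the bug is in the parser the parser reads tokens")
print(complete(model, "the"))       # -> "the bug is in"
print(complete(model, "compiler"))  # -> "compiler" (unseen, no continuation)
```

Asked about "compiler", a word absent from its training data, it simply stops, which is the toy version of "if your solution is not in the dataset, it won't come up with one".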

When I use one, the question I ask myself most before bothering is "is the solution likely to be in the training dataset?" or "can this task be solved as a language problem?"
