this post was submitted on 08 Apr 2025
496 points (98.2% liked)

Technology

[–] [email protected] 111 points 1 week ago (11 children)

God damn this is bleak.

Mitch says the first signs of a deepening reliance on AI came when the company’s CEO was found to be rewriting parts of their app so that it would be easier for AI models to understand and help with. “Then”, Mitch says, “I had a meeting with the CEO where he told me he noticed I wasn't using the Chat GPT account the company had given me. I wasn't really aware the company was tracking that”.

“Anyway, he told me that I would need to start using Chat GPT to speed up my development process. Furthermore, he said I should start using Claude, another AI tool, to just wholesale create new features for the app. He walked me through setting up the accounts and had me write one with Claude while I was on a call with him. I’m still not entirely sure why he did that, but I think it may have been him trying to convince himself that it would work.”

Mitch describes this increasing reliance on AI as not just “incredibly boring”, but ultimately pointless. “Sure, it was faster, but it had a completely different development rhythm”, they say. “In terms of software quality, I would say the code created by the AI was worse than code written by a human (though not drastically so) and was difficult to work with, since most of it hadn’t been written by the people whose job it was to oversee it”.

“One thing to note is that just the thought of using AI to generate code was so demotivating that I think it would counteract any of the speed gains that the tool would provide, and on top of that it would produce worse code that I didn’t understand. And that’s not even mentioning the ethical concerns of a tool built on plagiarism.”

[–] [email protected] 51 points 1 week ago (9 children)

AI-generated code is really poorly written. A couple of smells I’ve noticed:

  • Instead of fixing error cases, it overly relies on try/catch structures, making critical bugs invisible but still present (see the sketch after this list). Dangerous shit.
  • It doesn’t reuse code that already exists in the project. You have to do a lot of extra work letting it know that your helper functions or CSS utility classes exist.
  • It writes things in a very “brute force” way. If it finds any solution, it charges forward with the implementation even if there is a much simpler way. It never thinks “but is there a better way?”
  • Likewise, it rarely uses the actual documentation for your library. It goes entirely off of gut instincts. Half the time if you paste in a documentation page, it finally shapes up and builds the code right. That should be default behavior.
  • It has a strong tendency to undo manual changes I have made, because it doesn’t know the reasons why I made them.
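
To make that first smell concrete, here’s a minimal sketch in Python terms (try/except rather than try/catch); the config-loading scenario and function names are made up for illustration, not taken from any real project:

```python
import json
import logging

logger = logging.getLogger(__name__)


def load_config_smell(path: str) -> dict | None:
    """The smell: a blanket except hides everything."""
    try:
        with open(path) as f:
            return json.load(f)
    except Exception:
        # A malformed file, a permissions error, or a bug inside this block
        # all silently become None, and the failure surfaces far away.
        return None


def load_config(path: str) -> dict:
    """Handle the cases you expect; let real bugs raise loudly."""
    try:
        with open(path) as f:
            return json.load(f)
    except FileNotFoundError:
        logger.warning("No config at %s, using defaults", path)
        return {}
    except json.JSONDecodeError as exc:
        # A broken file is a genuine error, so fail visibly instead of hiding it.
        raise ValueError(f"Invalid config file {path}: {exc}") from exc
```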

On the other hand, if you’re in a green field project and need to throw up some simple, dirty CSS/HTML for a quick marketing page, sure, let the AI bang it out. Some projects don’t need to be done well, they just need to be done fast.

And the autocomplete features can be a time saver in some cases regardless.

[–] BeigeAgenda 11 points 1 week ago

Sounds about right. I had a positive experience when I told my local LLM to refactor a function and add a single argument.

I would not dare let it loose on a whole source file, because it changes random things, giving you more code to review.

In my view, current LLMs do an acceptable job with:

  • Adding comments
  • Writing docstrings
  • Writing git commit messages
  • Simple tasks on small pieces of code (roughly the kind of single-argument change sketched below)
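
To give a sense of scale, the kind of change I mean looks roughly like this; the function, its docstring, and the added precision argument are all invented for the example:

```python
def format_size(num_bytes: int, precision: int = 1) -> str:
    """Return a human-readable size string, e.g. 1536 -> '1.5 KiB'.

    Args:
        num_bytes: Size in bytes.
        precision: Decimal places in the result. This is the newly added
            argument; previously the value was hard-coded to 1.
    """
    units = ["B", "KiB", "MiB", "GiB", "TiB"]
    size = float(num_bytes)
    for unit in units[:-1]:
        if size < 1024:
            return f"{size:.{precision}f} {unit}"
        size /= 1024
    return f"{size:.{precision}f} {units[-1]}"


print(format_size(1536))               # 1.5 KiB
print(format_size(1536, precision=3))  # 1.500 KiB
```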