[–] [email protected] 106 points 4 days ago (8 children)

What are you guys working on where ChatGPT can figure it out? Honestly, I haven't been able to get a scrap of working code beyond a trivial example out of that thing or any other LLM.

[–] [email protected] 45 points 4 days ago (1 children)

I'm forced to use Copilot at work and, as far as code completion goes, it gets it right 10-15% of the time... the rest of the time it just suggests random but credible-looking noise, or hallucinates variables and shit.

[–] [email protected] 12 points 3 days ago (1 children)

Forced to use copilot? Wtf?

I would quit, immediately.

[–] [email protected] 4 points 3 days ago (1 children)

I would quit, immediately.

Pay my bills. Thanks.
I've been dusting off the CV for multiple other reasons.

[–] [email protected] 3 points 3 days ago (1 children)

how surprising! /s

but seriously, it's almost never one (1) thing that goes wrong when some idiotic mandate gets handed down from management.

a manager that mandates use of copilot (or any tool unfit for a given job) is a manager that's going to mandate a bunch of other nonsensical shit that gets in the way of work. every time.

[–] [email protected] 2 points 3 days ago

It's an at-scale company; the orders came from way above. As did RTO after 2 years fully at home, etc, etc.

[–] [email protected] 32 points 4 days ago (2 children)

Agreed. I wanted to test a new config in my router yesterday, which is configured using scripts. So I thought it would be a good idea to have ChatGPT figure it out for me, instead of spending 3 hours reading documentation and trying tutorials. It was a test scenario, so I thought it might do well.

It did not do well at all. The scripts were mostly correct, but often in the wrong order (referencing a thing before actually defining it). Sometimes the syntax would be totally wrong, and it kept mixing version 6 syntax with version 7 syntax (I'm on 7). It also makes mistakes, and when I point one out it says "Oh, you are totally right, I made a mistake," then goes on to explain what the mistake was and outputs new code. However, more often than not the new code contains the exact same mistake. This is probably because of a lack of training data, where it is referencing only one example and that example just had a mistake in it.

In the end I gave up on ChatGPT, searched for my test scenario, and it turned out a friendly dude on a forum had put together a tutorial. So I followed that and it almost worked right away. A couple of minutes of tweaking and testing and I had it working.

I'm afraid of a future where forums and such don't exist and sources like Reddit get fucked and nuked. In an AI-driven world the incentive for creating new original content is way lower. So when the AI doesn't know the answer, you are just hooped and have to re-invent the wheel yourself. In the long run this will destroy productivity and not deliver the gains people are hoping for at the moment.

[–] [email protected] 1 points 2 days ago

This is probably because of a lack of training data, where it is referencing only one example and that example just had a mistake in it.

The one example could be flawless, but the output of an LLM is influenced by all of its input. 99.999% of that input is irrelevant to your situation, so of course it's going to degrade the output.

What you (and everyone else) need is a good search engine to find the needle in the haystack of human knowledge; you don't need that haystack ground down to dust to give you a needle-shaped piece of crap with slightly more iron than average.

[–] [email protected] 11 points 4 days ago (1 children)

It's like useful information grows as fruit from trees in a digital forest we call the Internet. However, the fruit spoils over time (becomes less relevant) and requires fertile soil (educated people being online) that can be eroded away (not investing in education or infrastructure) or paved over (intellectual property law). LLMs are like processed food created in factories that lack key characteristics of more nutritious fresh ingredients you can find at a farmer's market. Sure, you can feed more people (provide faster answers to questions) by growing a monocrop (training your LLM on a handful of generous people who publish under Creative Commons licenses like CC BY-SA on Stack Overflow), but you also risk a plague destroying your industry like how the Panama disease fungus destroyed nearly all Gros Michel banana farming (companies firing those generous software developers who “waste time” by volunteering to communities like Stack Overflow and replacing them with LLMs).

There's some solar punk ethical fusion of LLMs and sustainable cultivation of high quality information, but we're definitely not there yet.

[–] [email protected] 4 points 3 days ago

To extend your metaphor: be the squirrel in the digital forest. Compulsively bury acorns for others to find in time of need. Forget about most of the burial locations so that new trees are always sprouting and spreading. Do not get attached to a single trunk; you are made to dance across the canopy.

[–] [email protected] 27 points 4 days ago* (last edited 4 days ago) (2 children)

When I had to get up to speed on a new language, it was very helpful. It's also great for writing low-to-medium-complexity scripts in Python, PowerShell, and bash, and for making Ansible tasks. That said, I've been programming for ~30 years and could have done those things myself if I needed to; it would just take some time (a lot of it spent looking up documentation and writing boilerplate code).

It's also nice for writing C# unit tests.

However, the times I've been stuck on my main languages, it's been utterly useless.

[–] [email protected] 29 points 4 days ago (1 children)

ChatGPT is extremely useful if you already know what you're doing. It's garbage if you're relying on it to write code for you. There are nearly always bugs and edge cases and hallucinations and version mismatches.

It's also probably useful for looking like you kinda know what you're doing as a junior in a new project. I've seen some shit in code reviews that was clearly AI slop. Usually from exactly the developers you expect.

[–] [email protected] 2 points 1 hour ago

Yeah, I'm not even that down on using LLMs to search through and organize text they were trained on. But in its current iteration? It's fancy Stack Overflow, but Stack Overflow runs on like 6 servers. I'll be setting up some self-hosted LLM stuff to play around with, but I'm not ditching my brain's ability to write software any time soon.

[–] [email protected] 4 points 4 days ago

I love asking AI to generate a framework / structure for a project that I then barely use and then realize I shoulda just done it myself

[–] [email protected] 13 points 4 days ago

I've been using (mostly) Claude to help me write an application in a language I'm not experienced with (Rust). Mostly with helping me see what I did wrong with syntax or with the borrow checker. Coming from Java, Python, and C/C++, it's very easy to try to manage memory in exactly the ways Rust won't let you.
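For example, here's a minimal made-up sketch (not code from my actual application) of the kind of habit the borrow checker refuses: holding a reference into a collection while also growing it, which is perfectly normal in C++ or Java.

```rust
// Made-up example: keeping a reference to an element while mutating the Vec.
fn main() {
    let mut names = vec![String::from("alice")];

    let first = &names[0]; // immutable borrow of `names` starts here

    // names.push(String::from("bob"));
    // ^ error[E0502]: cannot borrow `names` as mutable because it is
    //   also borrowed as immutable (through `first`)

    println!("first = {first}");

    // One fix: take an owned copy so no borrow outlives the mutation.
    let first_owned = names[0].clone();
    names.push(String::from("bob"));
    println!("{first_owned} is still first of {} entries", names.len());
}
```

In C++ the equivalent compiles and just occasionally bites you with an invalidated pointer after a reallocation; Rust refuses it up front.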

That being said, any new code it generates for me I end up having to fix 9 times out of 10. So in a weird way I've been learning more about Rust from having to correct code that's been generated by an LLM.

I still think that for the next while LLMs will be mostly useful as a hyper-spellchecker for code, and not for generating new code. I often find that I would have saved time if I had just tackled the problem myself and not tried to rely on an LLM. Although sometimes an LLM can give me an idea of how to solve a problem.

[–] [email protected] 12 points 4 days ago* (last edited 4 days ago)

Same. It can generate credible-looking code, but I don't find it very useful. Here's what I've tried:

  • describe a function - takes longer to read the explanation than to grok the code
  • generate tests - hallucinates arguments, doesn't do proper boundary checks, etc.
  • looking up docs - mostly useful for finding search terms for the real docs

The second was kind of useful since it provided the structure, but I still replaced 90% of it.
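To give a made-up picture of what I mean by boundary checks (a hypothetical sketch; `clamp_percentage` isn't from my actual project):

```rust
// Hypothetical function plus the boundary tests the LLM never wrote for me.
fn clamp_percentage(value: i64) -> u8 {
    value.clamp(0, 100) as u8
}

#[cfg(test)]
mod tests {
    use super::*;

    // The generated tests mostly looked like this one: the happy path.
    #[test]
    fn typical_value_passes_through() {
        assert_eq!(clamp_percentage(42), 42);
    }

    // The edges and out-of-range values are what I had to add myself.
    #[test]
    fn boundaries_are_clamped() {
        assert_eq!(clamp_percentage(0), 0);
        assert_eq!(clamp_percentage(100), 100);
        assert_eq!(clamp_percentage(-1), 0);
        assert_eq!(clamp_percentage(i64::MAX), 100);
    }
}
```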

I'm still messing with it, but beyond solving "blank page syndrome," it's not that great. And for that, I mostly just copy something from elsewhere in the project anyway, which is often faster than going to the LLM.

I'm really bad at explaining what I want, because by the time I can do that, it's faster to just build it. That said, I'm a senior dev, so I've been around the block a bit.

[–] [email protected] 5 points 3 days ago

ChatGPT is perfect for learning Delphi.

[–] [email protected] 7 points 4 days ago* (last edited 4 days ago)

I used it a few days ago to translate a math formula into code.

Here is the formula: https://wikimedia.org/api/rest_v1/media/math/render/svg/126b6117904ad47459ad0caa791f296e69621782

It's not the most complicated thing, and I could have done it myself, but it would have taken me some time. I just input the formula directly along with the desired language, and the result was well done and worked flawlessly.

It saved me some time typing things out and searching a few things online.

[–] [email protected] 1 points 3 days ago

Lately I have been using it for React code. It seems to be fairly decent at that. As a consequence, when it does not work I get completely lost, but despite this I think I have learned more with it than I would have without.