this post was submitted on 26 Feb 2025
873 points (96.7% liked)

"The real benchmark is: the world growing at 10 percent," he added. "Suddenly productivity goes up and the economy is growing at a faster rate. When that happens, we'll be fine as an industry."

Needless to say, we haven't seen anything like that yet. OpenAI's top AI agent — the tech that people like OpenAI CEO Sam Altman say is poised to upend the economy — still moves at a snail's pace and requires constant supervision.

[–] [email protected] 48 points 23 hours ago (4 children)

I've been working on an internal project for my job - a quarterly report on the most bleeding-edge use cases of AI - and what's been achieved is genuinely impressive.

So why is the AI at the top end amazing yet everything we use is a piece of literal shit?

The answer is the chatbot. If you have the technical nous to program machine learning tools yourself, they can accomplish truly stunning things at speeds not seen before.

If you don't know how to do - e.g. - a Fourier transform, you lack the skills to use the tools effectively. That's no one's fault - not everyone needs that knowledge - but it does explain the gap between promise and delivery. The tools can only help you do what you already know how to do, faster.
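To make that concrete, here's a minimal sketch (in Python with NumPy; the 50 Hz signal, sample rate, and noise level are invented for illustration, not taken from the comment) of the kind of task that's trivial if you know what a Fourier transform does and opaque if you don't:

```python
import numpy as np

# Recover the dominant frequency of a noisy signal via an FFT.
np.random.seed(0)

fs = 1000                       # sample rate, Hz
t = np.arange(0, 1, 1 / fs)     # one second of samples
signal = np.sin(2 * np.pi * 50 * t) + 0.5 * np.random.randn(len(t))

spectrum = np.abs(np.fft.rfft(signal))          # magnitude spectrum
freqs = np.fft.rfftfreq(len(signal), d=1 / fs)  # frequency bins, Hz
dominant = freqs[spectrum.argmax()]
print(dominant)  # the 50 Hz component stands out despite the noise
```

An AI assistant can write a snippet like this in seconds, but judging whether the sample rate, windowing, and frequency bins are right for your data still requires knowing the underlying technique.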

Same for coding: if you understand what your code does, it's a helpful tool for unsticking part of a problem, but it can't write the whole thing from scratch.

[–] [email protected] 2 points 5 hours ago (1 children)

Exactly - I find AI tools very useful and they save me quite a bit of time, but they're still tools. Better at some things than others, but the bottom line is that they're dependent on the person using them. Plus the more limited the problem scope, the better they can be.

[–] [email protected] 1 points 5 hours ago (1 children)

Yes, but the problem is that a lot of these AI tools are very easy to use, but the people using them are often ill-equipped to judge the quality of the result. So you have people who are given a task to do, and they choose an AI tool to do it and then call it done, but the result is bad and they can't tell.

[–] [email protected] 1 points 3 hours ago

True, though this applies to most tools, no? For instance, I'm forced to sit through horrible presentations because someone was given a task to do, created a PowerPoint (badly), and gave a presentation (badly). I don't know if this is inherently a problem with AI...

[–] [email protected] 5 points 17 hours ago (1 children)

LLMs could be useful for translation between programming languages. I recently asked one to generate server code given client code in a different language, and the LLM-generated code was spot on!

[–] [email protected] 1 points 1 hour ago* (last edited 1 hour ago)

I remain skeptical of using solely LLMs for this, but it might be relevant: DARPA is looking into their usage for C to Rust translation. See the TRACTOR program.

[–] [email protected] 14 points 22 hours ago* (last edited 22 hours ago)

For coding it's also useful for doing the menial grunt work that's easy but just takes time.

You're not going to replace a senior dev with it, of course, but it's a great tool.

My previous employer was using AI for intelligent document processing, and the results were absolutely amazing. They did sink a few million dollars into getting the LLM fine-tuned properly, though.