this post was submitted on 05 Feb 2025
313 points (82.4% liked)

Technology

top 50 comments
sorted by: hot top controversial new old
[–] [email protected] 77 points 1 week ago (12 children)

Because you're using it wrong. It's good for generative text and chains of thought, not for symbolic calculations like math or linguistics.

[–] [email protected] 28 points 1 week ago (1 children)

Because you're using it wrong.

No, I think you mean to say it’s because you’re using it for the wrong use case.

Well, this tool has been marketed as if it would handle such use cases.

I don’t think I’ve actually seen any AI marketing that was honest about what it can do.

I personally think image recognition is the best use case as it pretty much does what it promises.

load more comments (1 replies)
[–] [email protected] 6 points 1 week ago (15 children)

Give me an example of how you use it.

[–] [email protected] 25 points 1 week ago* (last edited 1 week ago) (9 children)

Writing customer/company-wide emails is a good example. "Make this sound better: we're aware of the outage at Site A, we are working as quick as possible to get things back online"

Dumbing down technical information is another: "word this so a non-technical person can understand: our DHCP scope filled up and there were no more addresses available for Site A, which caused the temporary outage for some users"

Another is feeding it an article and asking for a summary; https://hackingne.ws/ does that for its Bsky posts.

Coding is another good example: "write me a Python script that moves all files in /mydir to /newdir" (a rough sketch of what that yields is below).

Asking it to summarize a theory or protocol: "explain to me why RIP was replaced with RIPv2, and what problems people have had since with RIPv2"
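
(Rough sketch of what that file-moving prompt typically yields, using only the standard library; /mydir and /newdir are just the placeholder paths from the example, and you'd still want to read it before running it.)

```python
# Move every regular file from /mydir to /newdir (placeholder paths from the prompt above).
import shutil
from pathlib import Path

src = Path("/mydir")
dst = Path("/newdir")
dst.mkdir(parents=True, exist_ok=True)   # create the destination if it doesn't exist

for item in src.iterdir():
    if item.is_file():                    # move files only, leave subdirectories alone
        shutil.move(str(item), dst / item.name)
```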

[–] [email protected] 25 points 1 week ago (3 children)

Make this sound better: we’re aware of the outage at Site A, we are working as quick as possible to get things back online

How does this work in practice? I suspect you're just going to get an email that takes longer for everyone to read, and doesn't give any more information (or worse, gives incorrect information). Your prompt seems like what you should be sending in the email.

If the model (or context?) was good enough to actually add useful, accurate information, then maybe that would be different.

I think we'll get to the point really quickly where a nice, concise message like the one in your prompt will be appreciated more than the bloated, normalised version, which people will find insulting.

[–] [email protected] 1 points 2 days ago

Yes, people are using it as the least efficient communication protocol ever.

One side asks an LLM to expand a summary into a fluff filled email, and the other side asks an LLM to reduce the long email to a summary.

[–] [email protected] 17 points 1 week ago* (last edited 1 week ago) (2 children)

Yeah, normally my "Make this sound better" or "summarize this for me" is a longer wall of text that I want to simplify; I was trying to keep my examples short. Talking to non-technical people about a technical issue is not the easiest for me; AI has helped me dumb it down when sending an email, and it helps correct my shitty grammar at times.

As for accuracy, you review what it gives you; you don't just copy and send it without review. You'll also have to tweak some pieces where what it gives out doesn't make the most sense, such as when it uses wording you wouldn't typically use. It is fairly accurate in my use cases, though.

Hallucinations are a thing, so validating what it spits out is definitely needed.

Another example: if you feel your email is too stern or gives the wrong tone, I've used it for that as well. "Make this sound more relaxed: well maybe if you didn't turn off the fucking server we wouldn't of had this outage!" (Just a silly example)

[–] [email protected] 19 points 1 week ago (2 children)

As for accuracy, you review what it gives you, you don't just copy and send it without review.

Yeah, I don't get why so many people seem to not get that.

It's like people who were against IntelliSense in IDEs because "what if it suggests the wrong function?" You still need to know what the functions do. If you find something you're unfamiliar with, you check the documentation. You don't just blindly accept it as truth.

Just because it can't replace a person's job doesn't mean it's worthless as a tool.

[–] [email protected] 8 points 1 week ago (1 children)

The issue is that AI is being invested in as if it can replace jobs. That's not an issue for anyone who wants to use it as a spellchecker, but it is an issue for the economy, for society, and for the planet, because billions of dollars of computer hardware are being built and run on the assumption that trillions of dollars of payoff will be generated.

And correcting someone's tone in an email is not, and will never be, a trillion dollar industry.

[–] [email protected] 7 points 1 week ago (1 children)

That's a very different problem from the one in the OP.

load more comments (1 replies)
load more comments (1 replies)
[–] [email protected] 6 points 1 week ago (4 children)

I think these are actually valid examples, albeit ones that come with a really big caveat: you're using AI in place of a skill that you really should be learning for yourself. As an autistic IT person, I get the struggle of communicating with non-technical and neurotypical people, especially clients who you have to be extra careful with. But the reality is, you can't always do all your communication by email. If you always rely on the AI to correct your tone or simplify your language, you're choosing not to build an essential skill, one that is every bit as important to doing your job well as knowing how to correctly configure an ACL on a Cisco managed switch.

That said, I can also see how relying on the AI at first can be a helpful learning tool as you build those skills. There's certainly an argument that by using the tools, but paying attention to their output, you build those skills for yourself. Learning by example works. I think used in that way, there's potentially real value there.

Which is kind of the broader story with Gen AI overall. It's not that it can never be useful; it's that, at best, it can only ever aspire to "useful." No one, yet, has demonstrated any ability to make AI "essential" and the idea that we should be investing hundreds of billions of dollars into a technology that is, on its best days, mildly useful, is sheer fucking lunacy.

load more comments (4 replies)
load more comments (1 replies)
load more comments (8 replies)
[–] [email protected] 15 points 1 week ago* (last edited 1 week ago)

i'm still not entirely sold on them but since i'm currently using one that the company subscribes to i can give a quick opinion:

i had an idea for a code snippet that could save me some headache (a mock for primitives in lua, to be specific) but i foresaw some issues with commutativity (aka how to make sure that a + b == b + a). so i asked about this, and the llm created some boilerplate to test this code. i've been chatting with it for about half an hour and testing the code it produces, and had it expand the idea to all possible metamethods available on primitive types, together with about 50 test cases with descriptive assertions. i've now run into an issue where the __eq metamethod isn't firing correctly when one of the operands is a primitive rather than a mock, and after having the llm link me to the relevant part of the docs, that seems to be a feature of the language rather than a bug.

so in 30 minutes i've gone from a loose idea to a well-documented proof-of-concept to a roadblock that can't really be overcome. complete exploration and feasibility study, fully tested, in less than an hour.
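
(for anyone curious what the core of that idea looks like, here's a rough python analogue of the mock-for-primitives concept; the original experiment was in lua with metamethods, so everything below is illustrative rather than the code the llm actually produced)

```python
# rough python analogue of the lua "mock for primitives" idea described above;
# names are illustrative, not the generated code.
class MockNum:
    def __init__(self, value):
        self.value = value

    def _unwrap(self, other):
        return other.value if isinstance(other, MockNum) else other

    def __add__(self, other):
        return MockNum(self.value + self._unwrap(other))

    __radd__ = __add__  # lets `primitive + mock` work, i.e. the commutativity concern

    def __eq__(self, other):
        return self.value == self._unwrap(other)


a, b = MockNum(2), 3
assert a + b == b + a    # the a + b == b + a property the boilerplate was testing
assert MockNum(5) == 5   # works here; lua's __eq metamethod never fires when one
                         # operand is a plain primitive, which is the roadblock above
```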

[–] [email protected] 7 points 1 week ago

One thing I find useful is turning installation/setup instructions into Ansible roles and tasks. If you're unfamiliar, Ansible is a tool for automating configuration across large-scale server infrastructure. In my case I only manage two servers, but it's useful to parse instructions and convert them to Ansible, which helps me learn and understand Ansible at the same time.

Here is an example of the kind of instructions I find interesting: how to set up Docker on Alpine Linux: https://wiki.alpinelinux.org/wiki/Docker

Results are actually quite good even for smaller 14B self-hosted models like the distilled versions of DeepSeek, though I'm sure there are other usable models too.

I also find it helpful as a programming assistant, both for getting code written and for learning.

I would not rely on it for factual information, but usually it does a decent job of pointing in the right direction. Another use I have is helping with spell-checking in a foreign language.

load more comments (12 replies)
load more comments (10 replies)
[–] [email protected] 53 points 1 week ago (3 children)

I've already had more than one conversation where people quote AI as if it were a source, like quoting Google as a source. When I show them how it can sometimes lie and explain that it's not a primary source for anything, I just get that blank stare like I have two heads.

[–] [email protected] 13 points 1 week ago

Me too. More than once on a language learning subreddit for my first language: "I asked ChatGPT whether this was correct grammar in German, it said no, but I read this counterexample", then everyone correctly responded "why the fuck are you asking ChatGPT about this".

load more comments (2 replies)
[–] [email protected] 38 points 1 week ago* (last edited 1 week ago) (2 children)

There is an alternative reality out there where LLMs were never marketed as AI and were instead marketed as random text generators.

In that world, tech-savvy people would embrace this tech instead of having to constantly educate people that it is, in fact, not intelligence.

[–] [email protected] 6 points 1 week ago

That was this reality, very briefly. Remember AI Dungeon and the other clones that were popular prior to the mass ML marketing campaigns of the last two years?

load more comments (1 replies)
[–] [email protected] 37 points 1 week ago

A guy is driving around the back woods of Montana and he sees a sign in front of a broken down shanty-style house: 'Talking Dog For Sale.'

He rings the bell and the owner appears and tells him the dog is in the backyard.

The guy goes into the backyard and sees a nice looking Labrador Retriever sitting there.

"You talk?" he asks.

"Yep" the Lab replies.

After the guy recovers from the shock of hearing a dog talk, he says, "So, what's your story?"

The Lab looks up and says, "Well, I discovered that I could talk when I was pretty young. I wanted to help the government, so I told the CIA. In no time at all they had me jetting from country to country, sitting in rooms with spies and world leaders, because no one figured a dog would be eavesdropping, I was one of their most valuable spies for eight years running... but the jetting around really tired me out, and I knew I wasn't getting any younger so I decided to settle down. I signed up for a job at the airport to do some undercover security, wandering near suspicious characters and listening in. I uncovered some incredible dealings and was awarded a batch of medals. I got married, had a mess of puppies, and now I'm just retired."

The guy is amazed. He goes back in and asks the owner what he wants for the dog.

"Ten dollars" the guy says.

"Ten dollars? This dog is amazing! Why on Earth are you selling him so cheap?"

"Because he's a liar. He's never been out of the yard."

[–] [email protected] 36 points 1 week ago (2 children)

I think I have seen this exact post word for word fifty times in the last year.

[–] [email protected] 15 points 1 week ago (6 children)

Has the number of "r"s changed over that time?

load more comments (6 replies)
load more comments (1 replies)
[–] [email protected] 27 points 1 week ago* (last edited 1 week ago) (1 children)

These models don't get single characters but rather tokens representing multiple characters. While I also don't like the "AI" hype, this image is very one-dimensional hate and misrepresents the usefulness of these models by picking one adversarial example.
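
(A rough illustration of that token point, assuming the tiktoken package and its cl100k_base encoding purely as an example; the exact split varies by model, but the model never receives ten individual letters, while a plain string operation counts them trivially.)

```python
# What the model "sees" vs. what a character count sees (assumes the `tiktoken` package).
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")          # example encoding; models differ
tokens = enc.encode("strawberry")
print(tokens)                                        # a handful of token IDs, not 10 letters
print([enc.decode_single_token_bytes(t) for t in tokens])  # the multi-character chunks

# Ordinary string handling, by contrast, counts characters directly:
print("strawberry".count("r"))                       # 3
```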

Today ChatGPT saved me a fuckton of time by linking me to the exact GitLab issue discussing the problem I was having (full system freezes using Bottles installed with Flatpak on Arch). This was the URL it came up with after I explained the problem and gave it the first error I found in dmesg: https://gitlab.archlinux.org/archlinux/packaging/packages/linux/-/issues/110

This issue is one day old. When I looked this shit up myself I found exactly nothing useful on either DDG or Google. After this, ChatGPT also provided me with the information that the LTS kernel exists and how to install it. Obviously I verified that stuff before using it, because these LLMs have their limits. Now my system works again, and figuring this out myself would've cost me hours because I had no idea what broke. Was it Flatpak, Nvidia, the kernel, Wayland, Bottles, some random shit I changed in a config file 2 years ago? Well, thanks to ChatGPT, I know.

They're tools, and they can provide new insights that can be very useful. Just don't expect them to always tell the truth, or to actually be human-like.

[–] [email protected] 6 points 1 week ago (1 children)

Just don't expect them to always tell the truth, or to actually be human-like

I think the point of the post is to call out exactly that: people preaching AI as replacing humans.

load more comments (1 replies)
[–] [email protected] 24 points 1 week ago (2 children)

"My hammer is not well suited to cut vegetables" 🤷

There is so much to say about AI; can we move on from "it can't count letters and do math"?

[–] [email protected] 7 points 1 week ago (1 children)

I get that it's usually just a dunk on AI, but it is also still a valid demonstration that AI has pretty severe and unpredictable gaps in functionality, in addition to failing to properly indicate confidence (or lack thereof).

People who understand that it's a glorified autocomplete will know how to disregard or prompt around some of these gaps, but this remains a litmus test because it succinctly shows you cannot trust an LLM response even in many "easy" cases.

load more comments (1 replies)
[–] [email protected] 6 points 1 week ago (4 children)

But the problem is more "my do-it-all tool randomly fails at arbitrary tasks in an unpredictable fashion", which makes it hard to trust as a tool in any circumstances.

[–] [email protected] 1 points 5 days ago

You're not supposed to just trust it. You're supposed to test the solution it gives you. Yes, that makes it not useful for some things, but it's still immensely useful for other applications, and a lot of the time it gives you a really great jumping-off point for solving whatever your problem is.

load more comments (3 replies)
[–] [email protected] 21 points 1 week ago* (last edited 1 week ago) (8 children)

That happens when you don't understand what an LLM is, or what its use cases are.

This is like not being impressed by a calculator because it cannot give a word synonym.

load more comments (8 replies)
[–] [email protected] 19 points 1 week ago (1 children)

It's predictive text on speed. The LLMs currently in vogue hardly qualify as AI, tbh.

[–] [email protected] 10 points 1 week ago

Still, it’s kinda insane how two years ago we didn’t imagine we would be instructing programs with things like “be helpful but avoid sensitive topics”.

That was definitely a big step in AI.

[–] [email protected] 15 points 1 week ago

Doc: That’s an interesting name, Mr…

Fletch: Babar.

Doc: Is that with one B or two?

Fletch: One. B-A-B-A-R.

Doc: That’s two.

Fletch: Yeah, but not right next to each other, that’s what I thought you meant.

Doc: Isn’t there a children’s book about an elephant named Babar?

Fletch: Ha, ha, ha. I wouldn’t know. I don’t have any.

Doc: No children?

Fletch: No elephant books.

[–] [email protected] 15 points 1 week ago (8 children)

This is a bad example. If I ask a friend "is strawberry spelled with one or two r's", they would think I'm asking about the last part of the word.

The question seems to be specifically made to trip up LLMs. I've never heard anyone ask how many of a certain letter are in a word. I've heard people ask how you spell a word and whether it's with one or two of a specific letter, though.

If you think of LLMs as something with actual intelligence, you're going to be very unimpressed. It's just a model to predict the next word.

[–] [email protected] 27 points 1 week ago (6 children)

If you think of LLMs as something with actual intelligence, you're going to be very unimpressed. It's just a model to predict the next word.

This is exactly the problem, though. They don’t have “intelligence” or any actual reasoning, yet they are constantly being used in situations that require reasoning.

load more comments (6 replies)
load more comments (7 replies)
[–] [email protected] 13 points 1 week ago (1 children)

It's like someone who has no formal education but has a high level of confidence and eavesdrops on a lot of random conversations.

load more comments (1 replies)
[–] [email protected] 12 points 1 week ago* (last edited 1 week ago) (2 children)

Here's my guess, aside from the token issues already highlighted:

We all know LLMs train on human-generated data. And when we ask something like "how many R's" or "how many L's" are in a given word, we don't mean to count them all; we normally mean something like "how many consecutive letters there are, so I can spell it right".

Yes, the word "strawberry" has 3 R's. But what most people are interested in is whether it is "strawberry" or "strawbery", and their "how many R's" refers to this exactly, not the entire word.

load more comments (2 replies)
[–] [email protected] 10 points 1 week ago

Yeah, and you know, I always hated this: screwdrivers make really bad hammers.

[–] [email protected] 10 points 1 week ago (2 children)

You asked a stupid question and got a stupid response; seems fine to me.

load more comments (2 replies)
[–] [email protected] 9 points 1 week ago

I asked Gemini if the Quest has an SD slot. It doesn't, but Gemini said it did. Checking the source, it was pulling info from the Vive user manual.

[–] [email protected] 8 points 1 week ago* (last edited 1 week ago) (2 children)

Works fine for me in o3-mini-high:

Counting letters in “strawberry”

Alright, I’m checking: the word “strawberry” is spelled S T R A W B E R R Y. Let me count the letters: S (1), T (2), R (3), A (4), W (5), B (6), E (7), R (8), R (9), Y (10). There are three R’s: in positions 3, 8, and 9. So, the answer is 3. Even if we ignore case, the count still holds. Therefore, there are 3 r’s in “strawberry.”

load more comments (2 replies)
[–] [email protected] 7 points 1 week ago (2 children)

This is literally just a tokenization artifact. If I asked you how many r’s are in /0x5273/0x7183, you’d be confused too.

load more comments (2 replies)
[–] [email protected] 6 points 1 week ago (1 children)

I asked Mistral/Brave AI and got this response:

How Many Rs in Strawberry

The word "strawberry" contains three "r"s. This simple question has highlighted a limitation in large language models (LLMs), such as GPT-4 and Claude, which often incorrectly count the number of "r"s as two. The error stems from the way these models process text through a process called tokenization, where text is broken down into smaller units called tokens. These tokens do not always correspond directly to individual letters, leading to errors in counting specific letters within words.

load more comments (1 replies)
load more comments
view more: next ›