this post was submitted on 11 Feb 2025
601 points (99.0% liked)

[–] [email protected] 176 points 3 days ago (7 children)

I keep having to repeat this, but the conversation does keep going in a loop: LLMs aren't entirely useless, and they're not search engines. You shouldn't ask them any questions you don't already know the answer to (or at least have the tools to verify).

[–] [email protected] 89 points 3 days ago (3 children)

Yeah. Everyone forgot the second half of "Trust, but Verify". If I ask an LLM a question, I'm only doing it because I'm not 100% sure how to look up the info. Once it gives me an answer, I check that answer against real sources, because now I have a better idea of what to search for. Trusting an LLM blindly is just as bad as going on Facebook for healthcare advice.

[–] [email protected] 32 points 3 days ago

Yep. Or because you can recognize the answer but can't remember it off the top of your head. Or to check for errors in a piece of text or code or a translation, or...

It's not "trust but verify", which I hate as a concept. It's just what the tech can and cannot do. It's not a search engine finding matches to a query inside a large set of content. It's a stochastic text generator giving you the most likely follow up based on its training dataset. It's very good autocorrect, not mediocre search.
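A toy sketch of what "most likely follow-up" means, with a made-up bigram table (the words and counts are invented for illustration; real models work over learned token probabilities, not a lookup table):

```python
from collections import Counter

# Invented "training data": how often each word follows another in a tiny corpus.
bigram_counts = {
    "the": Counter({"cat": 5, "dog": 3, "end": 1}),
    "cat": Counter({"sat": 4, "ran": 2}),
}

def most_likely_next(word):
    """Return the highest-count continuation - greedy 'very good autocorrect'."""
    return bigram_counts[word].most_common(1)[0][0]

print(most_likely_next("the"))  # -> "cat", the most frequent continuation
```

Note that nothing here looks anything up in a corpus of documents; it only continues text, which is the distinction being made.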

[–] [email protected] 20 points 3 days ago (1 children)

I find LLMs very useful for setting up tech stuff. "How do I xyz in docker?" It does a great job of boiling several disjointed how-tos that don't quite get me there down into one actually usable one. I use it when googling and following articles isn't getting me anywhere, and it has often saved me so much time.
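As a hypothetical illustration of that kind of "boiled together" answer (the service name, image, and paths here are invented, not from the comment), separate how-tos often cover image choice, port mapping, and persistence individually, and the useful consolidated result is one compose file:

```yaml
# Hypothetical docker-compose.yml stitching together what three
# separate guides cover individually.
services:
  web:
    image: nginx:alpine                    # image choice, from one guide
    ports:
      - "8080:80"                          # host:container mapping, from another
    volumes:
      - ./site:/usr/share/nginx/html:ro    # persistence, from a third
    restart: unless-stopped
```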

[–] [email protected] 17 points 3 days ago

They are also amazing at generating configuration that's subtly wrong.

For example, if the bad LLM-generated configurations I caught during pull request reviews are any indication, there are plenty of less experienced teams running broken Kubernetes deployments.

Now, to be fair, inexperienced people would make similar mistakes, but inexperienced people are capable of learning from their mistakes.
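As a hypothetical illustration of "subtly wrong" (the names are invented), a classic footgun in this category: the manifest below is valid YAML and the API server will accept it, but `512m` is the *milli* suffix, so the memory limit is a fraction of a byte instead of the intended `512Mi`, and the container gets OOM-killed on startup.

```yaml
# Hypothetical Deployment fragment: syntactically valid, subtly broken.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 2
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx:alpine
          resources:
            limits:
              memory: 512m   # BUG: "m" means milli (0.512 bytes); should be 512Mi
```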

[–] [email protected] 3 points 3 days ago (1 children)

I thought it was “butt verify” whoops

[–] [email protected] 6 points 3 days ago (1 children)
[–] [email protected] 4 points 3 days ago
[–] [email protected] 33 points 3 days ago (1 children)

Or if you're fine with non-factual answers. I've used chatgpt various times for different kinds of writing, and it's great for that. It can give you ideas, it can rephrase, it can generate lists, it can help you find the word you're trying to think of (usually).

But it's not magic. It's a text generator on steroids.

[–] [email protected] 10 points 3 days ago

Sure! Used as... you know, what it is, there's a lot of fun/useful stuff you can do. It's just that both AI-bro shills and people who have decided to make hating on this tech a core part of their personality have misrepresented that.

It's indeed very, very good text generation/text parsing. It is not a search engine, the singularity, Skynet, or a replacement for human labor in the vast majority of use cases.

[–] [email protected] 3 points 3 days ago (1 children)

I had to tell DDG not to give me an AI summary of my search, so it's clearly intended to be used as a search engine.

[–] [email protected] 16 points 3 days ago (1 children)

"Intended" is a weird choice there. Certainly the people selling them are selling them as search engines, even though they aren't one.

On DDG's implementation, though, you're just wrong. The search engine is still the search engine. They are using an LLM as a summary of the results. Which is also a bad implementation, because it will do a bad job at something you can do by just... looking down. But, crucially, the LLM is neither doing the searching nor generating the results themselves.

[–] [email protected] 1 points 3 days ago (1 children)

What do you mean it's not generating the results? If the summation isn't generated, where does it come from?

[–] [email protected] 6 points 3 days ago (1 children)

I don't want to speak for OP, but I think they meant it's not generating the search results using an LLM.

[–] [email protected] 1 points 3 days ago (1 children)

Maybe I just don't know what "generating results" means. You query a search engine, and it generates results as a page of links. I don't understand how generating a page of links is fundamentally different from generating a summation of the results?

[–] [email protected] 7 points 3 days ago (1 children)

It's a very different process. Having worked on search engines before, I can tell you that the word "generate" means something different in this context. It means, in simple terms, matching your search query against a bunch of results, gathering links to said results, and then sending them to the user to be displayed.
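A toy sketch of that process, with an invented inverted index (the URLs and the whitespace tokenizer are made up for illustration):

```python
# Toy inverted index: term -> links to documents containing it (invented data).
index = {
    "docker": ["https://example.com/docker-intro", "https://example.com/compose"],
    "compose": ["https://example.com/compose"],
}

def generate_results(query):
    """'Generate' here = match query terms, gather links, return them for display."""
    links = []
    for term in query.lower().split():
        for link in index.get(term, []):
            if link not in links:   # deduplicate while preserving order
                links.append(link)
    return links

print(generate_results("docker compose"))
```

No text is being written by a model anywhere in this pipeline; the "generated" page is just matched and gathered links.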

[–] [email protected] 2 points 3 days ago (1 children)

then send them to the user to be displayed

This is where my understanding breaks. Why would displaying it as a summary mean the backend process is no longer a search engine?

[–] [email protected] 4 points 3 days ago (1 children)

The LLM is going over the search results, taking them as a prompt and then generating a summary of the results as an output.

The search results are generated by the good old search engine, the "AI summary" option at the top is just doing the reading for you.

And of course if the answer isn't trivial, very likely generating an inaccurate or incorrect output from the inputs.

But none of that changes how the underlying search engine works. It's just doing additional work on the same results the same search engine generates.

EDIT: Just to clarify, DDG also has a "chat" service that, as far as I can tell, is just a UI overlay over whatever model you select. That works the same way as all the AI chatbots you can use online or host locally, and I presume it's not what we're talking about.
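A sketch of that two-stage pipeline (both stages are stand-in stubs with invented results; a real summarizer would call a model, and could get the summary wrong):

```python
def search(query):
    """Stage 1: the plain old search engine (stubbed with invented results)."""
    return [
        {"title": "Docker networking guide", "url": "https://example.com/net"},
        {"title": "Compose file reference", "url": "https://example.com/ref"},
    ]

def llm_summarize(results):
    """Stage 2: an LLM takes the results as its prompt and generates text.
    Stubbed deterministically here; it never touches how search() works."""
    titles = ", ".join(r["title"] for r in results)
    return f"Top results cover: {titles}"

results = search("docker networking")   # the search engine generates the results
summary = llm_summarize(results)        # the LLM only reads them and summarizes
print(summary)
```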

[–] [email protected] 3 points 3 days ago (1 children)

I see, you're splitting the UI from the backend as two different things, and I'm seeing them as parts of a whole.

[–] [email protected] 4 points 3 days ago (1 children)

Well, yeah, there are multiple things feeding into the results page they generate for you. Not just two. There's the search results, there's an algorithmic widget that shows different things (so a calculator if you input some math, a translation box if you input a translation request, a summary of Wikipedia or IMDB if you search for a movie or a performer, that type of thing). And there is a pop-up window with an LLM-generated summary of the search results now.

Those are all different pieces. Your search results for "3 divided by 7" aren't different because they also pop up a calculator for you at the top of the page.

[–] [email protected] 4 points 3 days ago

Yeah, for some reason I was thinking you were trying to say that bolting on widgets made it no longer a search engine.

[–] [email protected] 2 points 3 days ago

LLMs are good for some searches, or for clarifications the original website doesn't provide. E.g. the "BY" attribute in Creative Commons being abbreviated "BY" (by John Doe) and not "AT" (attributed to John Doe).