I keep having to repeat this, but the conversation does keep going in a loop: LLMs aren't entirely useless, and they're not search engines. You shouldn't ask them any questions you don't already know the answer to (or at least have the tools to verify).
Yeah. Everyone forgot the second half of "Trust, but Verify". If I ask an LLM a question, it's only because I'm not 100% sure how to look up the info myself. Once it gives me an answer, I check that answer against sources, because now I have a better idea of what to search for. Trusting an LLM blindly is just as bad as going on Facebook for healthcare advice.
Yep. Or because you can recognize the answer but can't remember it off the top of your head. Or to check for errors in a piece of text, code, or a translation, or...
It's not "trust but verify", which I hate as a concept. It's just a matter of what the tech can and cannot do. It's not a search engine finding matches to a query inside a large set of content. It's a stochastic text generator giving you the most likely follow-up based on its training dataset. It's very good autocomplete, not mediocre search.
I find LLMs very useful for setting up tech stuff. "How do I xyz in Docker?" It does a great job of boiling several disjointed how-tos that don't quite get me there down into one that's actually usable. I use it when googling and following articles isn't getting me anywhere, and it's often saved me a lot of time.
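For example, for a question like "how do I run a service with persistent storage in Docker Compose?", the kind of consolidated answer I mean looks something like this (a made-up sketch; the image, ports, and volume name are placeholders, not from any real how-to):

```yaml
# docker-compose.yml -- minimal single-service setup with persistent storage
services:
  app:
    image: nginx:alpine              # placeholder image; substitute your actual service
    ports:
      - "8080:80"                    # host port 8080 -> container port 80
    volumes:
      - app-data:/usr/share/nginx/html   # named volume so content survives container recreation
    restart: unless-stopped

volumes:
  app-data:                          # Docker manages this volume's storage location
```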
They are also amazing at generating configuration that's subtly wrong.
If the bad LLM-generated configurations I've caught during pull request reviews are any indication, there are plenty of less experienced teams out there running broken Kubernetes deployments.
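To make that concrete, here's a made-up example of the kind of subtle breakage I mean (hypothetical names and image, not from a real PR):

```yaml
# deployment.yaml -- looks plausible, breaks in production
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 2
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: example.com/web:1.4.2   # hypothetical image
          ports:
            - containerPort: 8080        # the app actually listens here
          livenessProbe:
            httpGet:
              path: /healthz
              port: 3000                 # subtle bug: probe targets a port nothing listens on,
                                         # so the kubelet endlessly restarts a healthy pod
```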
Now, to be fair, inexperienced people would make similar mistakes, but inexperienced people are capable of learning from their mistakes.
I thought it was “butt verify” whoops
Trust butt, verify
✅ Verified
Or if you're fine with non-factual answers. I've used ChatGPT various times for different kinds of writing, and it's great for that. It can give you ideas, it can rephrase, it can generate lists, it can help you find the word you're trying to think of (usually).
But it's not magic. It's a text generator on steroids.
Sure! Used as... you know, what it is, there's a lot of fun/useful stuff you can do. It's just that both AI-bro shills and people who have decided to make hating on this tech a core part of their personality have misrepresented that.
It's indeed very, very good text generation/text parsing. It is not a search engine, the singularity, Skynet, or a replacement for human labor in the vast majority of use cases.
I had to tell DDG not to give me an AI summary of my search, so it's clearly intended to be used as a search engine.
"Intended" is a weird choice there. Certainly the people selling them are selling them as search engines, even though they aren't one.
On DDG's implementation, though, you're just wrong. The search engine is still the search engine. They are using an LLM as a summary of the results. Which is also a bad implementation, because it will do a bad job at something you can do by just... looking down. But, crucially, the LLM is neither doing the searching nor generating the results themselves.
LLMs are good for some searches, or for clarifying things the original website doesn't spell out. E.g., that the "BY" in Creative Commons is short for "by" (as in "by John Doe") and not "AT" (as in "attributed to John Doe").
The weirdness came partway through, when the ad actually showed Google Gemini in action. It told the cheese vendor that Gouda accounts for "50 to 60 percent of the world's cheese consumption." Now, Gouda's hardly a hardcore real head pick like Roquefort or BellaVitano, but there's also no way it's pulling in cheddar or mozzarella numbers. Travel blogger Nate Hake and Google-focused Twitter account Goog Enough documented the erroneous initial version of the ad, but Google responded by quietly swapping in a more accurate Gemini-suggested blurb in all live versions of the ad, including the one that aired during the Super Bowl.
They should have kept quiet and let Google show how shit they are on live TV
This is like the dozenth time Google has put hallucinations in one of their AI presentations/ads. They just don't care.
Especially considering that the "pointing out of said hallucinations" comes much later than the hallucinations themselves, and NEVER makes it as far and wide as the initial bullshit.
Stop calling GPT AI.
That's the inaccurate name everyone's settled on. Kinda like how "sentient" is widely used to mean "sapient" despite being two different things.
I made a smartass comment earlier comparing AI to fire, but it's really my favorite metaphor for it - and it extends to this issue. Depending on how you define it, fire seems to meet the requirements for being alive. It tends to come up in the same conversations that question whether a virus is alive. I think it's fair to think of LLMs (particularly the current implementations) as intelligent - just in the same way we think of fire or a virus as alive. Having many of the characteristics of it, but being a step removed.
That is an extremely apt parallel!
(I'm stealing it)
How is it not AI? Just because it's not AGI doesn't mean it's not AI. AI encompasses a lot of things.
You put a few GPTs in a trenchcoat and they're obviously AI. I can't speak for OpenAI's offerings, since I won't use a cloud service, but the local DeepSeek I've tried is certainly AI. People are moving the goalposts constantly, with what seems to me a determination to avoid seeing the future that's already here. Download deepseek-coder-v2 16b if you have 16 GB of RAM and 10 GB of storage and see for yourselves. The requirements are ridiculously low for what it can do: it uses 50% of four CPU cores for about 15 seconds to solve a problem with detailed reasoning steps.
This article is about Gemini, not GPT. The generic term is LLM: Large Language Model.
I totally get all the concerns related to AI. However, the "look, it made a mistake, it's useless!" bandwagon is a bit silly.
First of all, AI is constantly improving. Remember everyone laughing at AI's mangled fingers? Well, that was fixed some time ago. Now AI-generated pictures of people are pretty much indistinguishable from real ones.
Second, people also make critical mistakes, plenty of them. The question is not whether AI can be absolutely accurate. The question is whether AI can make, on average, fewer mistakes than a human.
I hate the idea of AI replacing everything and everyone. However, pretending that AI will not eventually be faster, better, cheaper, and more accurate than most humans is wishful thinking. I honestly think that our only hope is legislation, not the desperate wish that AI will always need human supervision and input to be correct.
there's also the problem of techbros and companies everywhere thinking that AI is omniscient and can replace every other profession. who needs a human journalist when you can train an AI on their work (because they work for you and their work is your property ofc) and then just fire them all, because you have a perfect AI that you can set to run forever without checking its work and make infinite money :)
And then the articles will only be clicked and commented by bots after a while. Dead internet here we come!
Slightly off topic, but the writing on this article is horrible. Optimizing for Google engagement, it seems. Ironically, an AI would probably have produced something vastly more readable.
Flames burn and smoke asphyxiates, perfectly highlighting why relying on fire is a bad idea.
begs the question
Not it doesn't. Did an Ai slop this story too?
It's an obsolete usage of "beg" that's now preserved only in that particular set phrase. One of English's many linguistic fossils, which you should learn more about before trying to critique anyone's language use.
It’s an obsolete usage of “beg”
It's a misuse of the cliché "begs the question" (which goes back to the medieval Latin petitio principii), which is used to call out a form of fallacious reasoning where the desired answer is smuggled into the assumptions. And yeah, that use of "beg" is obsolete, but even worse, the whole phrase is now misused to mean "prompts the question."
the whole phrase is now misused
Not it doesn't. Did an Ai slop this story too?
No it doesn't. Did an AI slop this story too?
Why post the same comment?
Fair question.
That user goes around issuing weird and pointless corrections to other people's comments, even sometimes to the point of personally insulting people who make grammatical or spelling errors – often common ones that non-native speakers make, so I thought it'd be funny to do the same in turn, since their comment history is filled with much of the same.
I wouldn't usually do it, it's a pointless exercise IMO.
Can take the user off reddit, but the reddit never leaves the user