this post was submitted on 13 Feb 2025
934 points (97.4% liked)

Technology

[–] [email protected] 1 points 18 minutes ago

Counterpoint - if you must rely on AI, you have to constantly exercise your critical thinking skills to parse through all its bullshit, or AI will eventually Darwin your ass when it tells you that bleach and ammonia make a lemon cleanser to die for.

[–] [email protected] 14 points 6 hours ago (2 children)

Also your ability to search for information on the web. Most people I've seen have no idea how to use a damn browser or how to search effectively; AI is gonna fuck that ability completely.

[–] [email protected] 2 points 59 minutes ago

To be fair, the web has become flooded with AI slop, and search engines have never been more useless. I've started using Kagi and I'm trying to be more intentional about it, but after a bit of searching it's often easier to just ask Claude.

[–] [email protected] 6 points 5 hours ago

Gen Zs are TERRIBLE at searching for things online, in my experience. I'm a sweet-spot millennial, born close to the middle in 1987. Man oh man, watching the 22-year-olds who work for me try to google things hurts my brain.

[–] [email protected] 2 points 4 hours ago (1 children)

I felt it happen in real time, every time. I still use it for questions, but I know I'm about to not be able to think critically for the rest of the day. It's a last resort if I can't find any info online or any response from Discords/forums.

It's still useful for coding IMO. I still have to think critically; it just fills some tedious stuff in.

[–] [email protected] 2 points 4 hours ago (1 children)

It was hella useful for research in college, and it made me think more because it kept giving me useful sources and telling me the context and where to find them. I still did the work, and it actually took longer because I wouldn't commit to topics and kept adding more information. Just don't have it spit out your essay; it sucks at that. Have it spit out topics and info on those topics with sources, then use that to build your work.

[–] [email protected] 2 points 4 hours ago (1 children)

Google used to be good, but this is far superior. I used Bing's ChatGPT when I was in school; idk what's good now. (It only gave a paragraph max and included sources for each sentence.)

[–] [email protected] 1 points 3 hours ago (1 children)

How did you manage to actually use Bing GPT? I've tried like 20 times and it's wrong the majority of the time.

[–] [email protected] 1 points 1 hour ago (1 children)

It worked well for school stuff. I always added "prioritize factual sources with .edu" or something like that. Specify that it's for a research paper and tell it to look for stuff the way you would.

[–] [email protected] 1 points 44 minutes ago (1 children)

The only time I told it to be factual was when looking at 4K laptops. It gave me 5 laptops, 4 marked as 4K; 0 of the 5 were actually 4K.

That was last year though so maybe it's improved by now

[–] [email protected] 1 points 7 minutes ago

I wouldn't use it for current info like that, only scraped data. For history classes it'll be useful; for sales right now, definitely not.

[–] [email protected] 5 points 7 hours ago (2 children)

Is that it?

One of the things I like most about AI is that it explains in detail each command it outputs for you. Granted, I'm aware it can hallucinate, so if I have the slightest doubt I usually look on the web too (I use it a lot for basic Linux stuff and Docker).

Some people wouldn't give a fuck about what it says and would just copy & paste unknowingly? Sure, but that happened in my teenage days too, when all the info was scattered across many blogs and wikis...

As usual, it's not the AI tool that could fuck up our critical thinking but we ourselves.

[–] [email protected] 2 points 5 hours ago

I see it exactly the same way. I bet you can find similar articles about calculators, PCs, the internet, smartphones, smartwatches, etc.

Society will handle it sooner or later.

[–] [email protected] 2 points 6 hours ago* (last edited 6 hours ago) (1 children)

I love how they chose the term "hallucinate" instead of saying it fails or screws up.

[–] [email protected] 2 points 5 hours ago

Because the term fits way better…

[–] [email protected] 12 points 9 hours ago (1 children)

Tinfoil hat me goes straight to: make the population dumber and they’re easier to manipulate.

It’s insane how people take LLM output as gospel. It’s a TOOL just like every other piece of technology.

[–] [email protected] 7 points 9 hours ago (1 children)

I mostly use it for wordy things like filling out the review forms HR makes us do and writing templates for messages to customers.

[–] [email protected] 6 points 9 hours ago (1 children)

Exactly. It’s great for that, as long as you know what you want it to say and can verify it.

The issue is people who don’t critically think about the data they get from it, who I assume are the same type to forward Facebook memes as fact.

It’s a larger problem, where convenience takes priority over actually learning and understanding something yourself.

[–] [email protected] 5 points 8 hours ago (1 children)

As you mentioned tho, not really specific to LLMs at all

[–] [email protected] 4 points 8 hours ago (1 children)

Yeah it’s just escalating the issue due to its universal availability. It’s being used in lieu of Google by many people, who blindly trust whatever it spits out.

If it had a high technological floor of entry, it wouldn’t be as influential to the general public as it is.

[–] [email protected] 1 points 7 hours ago

It's such a double-edged sword though. Google is a good example: I became a netizen at a very young age and learned how to search for information properly over time.

Unfortunately the vast majority of the population over the last two decades have not put in that effort, and it shows lol.

Fundamentally, I do not believe in arbitrarily deciding who can and can not have access to information though.

[–] [email protected] 5 points 7 hours ago (2 children)

Just try using AI for a complicated mechanical repair. For instance, draining the radiator fluid in your specific model of car: chances are Google's AI model will throw in steps that are either wrong or unnecessary. If you turn off your brain while using AI, you're likely to make mistakes that will go unnoticed until the thing you did is business critical. AI should be a tool like a straightedge: it has its purpose, and it's up to you, the operator, to make sure you got the edges squared (so to speak).

[–] [email protected] 1 points 5 hours ago

I think this is only an issue in the beginning. People will sooner or later realise that they can't blindly trust an LLM's output, and will learn how to create prompts that verify the output (or, better said, prove that not enough relevant data was analysed and that it's hallucinating).

[–] [email protected] 1 points 7 hours ago

Well, there are people who followed Apple Maps into lakes and other things, so the precedent is already there (I have no doubt it also existed before that).

You would need to heavily regulate it, and that's not happening anytime soon, if ever.

[–] [email protected] 2 points 6 hours ago* (last edited 6 hours ago)

Their reasoning seems valid - common sense says the less you do something the more your skill atrophies - but this study doesn't seem to have measured people's critical thinking skills. It measured how the subjects felt about their skills. People who feel like they're good at a job might not feel as adequate when their job changes to evaluating someone else's work. The study said the subjects felt that they used their analytical skills less when they had confidence in the AI. The same thing happens when you get a human assistant - as your confidence in their work grows you scrutinize it less. But that doesn't mean you yourself become less skillful. The title saying use of AI "kills" critical thinking skill isn't justified, and is very clickbaity IMO.

[–] [email protected] 2 points 7 hours ago

The definition of critical thinking is not relying on only one source. Next up: rain makes you wet. Stay tuned.

[–] [email protected] 2 points 7 hours ago

Microsoft said it so I guess it must be true then 🤷‍♂️

[–] gramie 13 points 13 hours ago (3 children)

I was talking to someone who does software development, and he described his experiments with AI for coding.

He said that he was able to use it successfully and come to a solution that was elegant and appropriate.

However, what he did not do was learn how to solve the problem, or indeed learn anything that would help him in future work.

[–] [email protected] 7 points 9 hours ago* (last edited 9 hours ago) (1 children)

How does he know that the solution is elegant and appropriate?

[–] gramie 1 points 6 hours ago (1 children)

Because he has the knowledge and experience to completely understand the final product. It used an approach that he hadn't thought of, that is better suited to the problem.

[–] [email protected] 1 points 5 hours ago

Lol, how can he not learn from that??

[–] [email protected] 16 points 12 hours ago (1 children)

I'm a senior software dev who uses AI to help with my job daily. There are endless tools in the software world, each with its own instructions on how to use it. Often they have issues, and the solutions aren't included in those instructions. It used to be that I had to hunt down any references to the problem I was having through online forums, in the hopes that somebody else had figured out how to solve the issue, but now I can ask AI and it generally gives me the answer I'm looking for.

If I'd had AI when I was still learning core engineering concepts, I think shortcutting the learning process could have been detrimental, but now I just need to know how to get X done specifically with Y this one time and probably never again.

[–] [email protected] 5 points 9 hours ago

100% this. I generally use AI to help with edge cases in software or languages that I already know well, or for situations where I really don't care to learn the material because I'm never going to touch it again. In my case, for Python or Go, I'll use AI to get me started in the right direction on a problem, then go read the docs to develop my solution. For some weird ugly regex that I just need to fix and never touch again, I just ask AI, test the answer it gives, then play with it until it works, because I'm never going to remember how to properly use a negative look-behind in regex when I need it again in five years.
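(For anyone who also forgets them: a negative look-behind can be sketched in Python like this. The pattern and text are a made-up toy example, not anything from an actual project.)

```python
import re

# (?<!brake ) is a negative look-behind: "fluid" matches only when it
# is NOT immediately preceded by the literal text "brake ".
pattern = re.compile(r"(?<!brake )fluid")

text = "Check the radiator fluid and the brake fluid."
matches = pattern.findall(text)  # matches only the first "fluid"
```

One gotcha: look-behinds in Python's `re` module must be fixed-width, which is part of why the syntax is so easy to forget.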

I do think AI could be used to help the learning process, too, if used correctly. That said, it requires the student to be proactive in asking the AI questions about why something works or doesn't, then going to read additional information on the topic.

[–] [email protected] 5 points 10 hours ago

I feel you, but I've asked it "why" questions too.

[–] [email protected] 2 points 8 hours ago

Linux study, finds that relying on MS kills critical thinking skills. 😂
