Counterpoint - if you must rely on AI, you have to constantly exercise your critical thinking skills to parse through all its bullshit, or AI will eventually Darwin your ass when it tells you that bleach and ammonia make a lemon cleanser to die for.
Also your ability to search for information on the web. Most people I've seen have no idea how to use a damn browser or how to search effectively; AI is gonna fuck that ability up completely.
To be fair, the web has become flooded with AI slop, and search engines have never been more useless. I've started using Kagi, and I'm trying to be more intentional about it, but after a bit of searching it's often easier to just ask Claude.
Gen Zs are TERRIBLE at searching things online, in my experience. I'm a sweet-spot millennial, born close to the middle in 1987. Man oh man, watching the 22-year-olds who work for me try to google things hurts my brain.
I felt it happen in real time, every time. I still use it for questions, but I know I'm about to not be able to think critically for the rest of the day. It's a last resort if I can't find any info online or any response from Discords/forums.
It's still useful for coding IMO. I still have to think critically; it just fills in some of the tedious stuff.
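For example (a made-up sketch, not from any real project), the kind of rote scaffolding I'd happily let it draft and then just review:

```python
import argparse

# Predictable boilerplate: an assistant can draft every flag in seconds,
# but I still read the result to confirm the types and defaults.
parser = argparse.ArgumentParser(description="Resize images in a folder.")
parser.add_argument("input_dir", help="folder containing the images")
parser.add_argument("--width", type=int, default=800,
                    help="target width in pixels")
parser.add_argument("--dry-run", action="store_true",
                    help="print actions without writing files")
args = parser.parse_args()
print(args)
```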
It was hella useful for research in college, and it made me think more, because it kept giving me useful sources and telling me the context and where to find them. I still did the work, and it actually took longer, because I wouldn't commit to topics and kept adding more information. Just don't have it spit out your essay - it sucks at that. Have it spit out topics and info on those topics with sources, then use that to build your work.
Google used to be good, but this is far superior. I used Bing's ChatGPT when I was in school; idk what's good now (it only gave a paragraph max and included sources for each sentence).
How did you manage to actually get Bing GPT to work? I've tried like 20 times, and it's wrong the majority of the time.
It worked well for school stuff. I always added "prioritize factual sources with .edu" or something like that. Specify that it's for a research paper, and tell it to look for things the way you would.
The only time I told it to be factual was when I was looking at 4K laptops: it gave me five laptops, four of them marked as 4K, and zero of the five were actually 4K.
That was last year, though, so maybe it's improved by now.
I wouldn't use it for current info like that, only for scraped data. Using it for history classes will be useful; using it for sales right now, definitely not.
Is that it?
One of the things I like most about AI is that it explains in detail each command it outputs for you. Granted, I'm aware it can hallucinate, so if I have the slightest doubt about it, I usually look on the web too (I use it a lot for basic Linux stuff and Docker).
Would some people not give a fuck about what it says and just copy and paste it unknowingly? Sure, but that happened in my teenage days too, when all the info was spread across many blogs and wikis...
As usual, it's not the AI tool that could fuck up our critical thinking, but we ourselves.
I see it exactly the same way. I bet you can find similar articles about calculators, PCs, the internet, smartphones, smartwatches, etc.
Society will handle it sooner or later
I love how they chose the term "hallucinate" instead of saying it fails or screws up.
Because the term fits way better…
Tinfoil hat me goes straight to: make the population dumber and they’re easier to manipulate.
It’s insane how people take LLM output as gospel. It’s a TOOL just like every other piece of technology.
I mostly use it for wordy things, like filling out the review forms HR makes us do and writing templates for messages to customers.
Exactly. It’s great for that, as long as you know what you want it to say and can verify it.
The issue is people who don’t critically think about the data they get from it, who I assume are the same type to forward Facebook memes as fact.
It’s a larger problem, where convenience takes priority over actually learning and understanding something yourself.
As you mentioned tho, not really specific to LLMs at all
Yeah it’s just escalating the issue due to its universal availability. It’s being used in lieu of Google by many people, who blindly trust whatever it spits out.
If it had a high technological floor of entry, it wouldn’t be as influential to the general public as it is.
It's such a double-edged sword, though. Google is a good example: I became a netizen at a very young age and learned how to properly search for information over time.
Unfortunately the vast majority of the population over the last two decades have not put in that effort, and it shows lol.
Fundamentally, though, I do not believe in arbitrarily deciding who can and cannot have access to information.
Just try using AI for a complicated mechanical repair, for instance draining the radiator fluid in your specific model of car. Chances are Google's AI model will throw in steps that are either wrong or unnecessary. If you turn off your brain while using AI, you're likely to make mistakes that will go unnoticed until the thing you did is business-necessary. AI should be a tool like a straight edge: it has its purpose, and it's up to you, the operator, to make sure you got the edges squared (so to speak).
I think this is only an issue in the beginning. People will sooner or later realise that they can't blindly trust LLM output, and will learn how to write prompts that verify the output (or, better said, prove that not enough relevant data was analysed and that it's hallucinating).
Well, there are people who followed Apple Maps into lakes and other things, so the precedent is already there (and I have no doubt it also existed before that).
You would need to regulate it heavily, and that's not happening anytime soon, if ever.
Their reasoning seems valid - common sense says the less you do something, the more your skill atrophies - but this study doesn't seem to have measured people's critical thinking skills. It measured how the subjects felt about their skills.

People who feel like they're good at a job might not feel as adequate when their job changes to evaluating someone else's work. The study said the subjects felt that they used their analytical skills less when they had confidence in the AI. The same thing happens when you get a human assistant: as your confidence in their work grows, you scrutinize it less. But that doesn't mean you yourself become less skillful.

The title saying use of AI "kills" critical thinking skill isn't justified, and is very clickbaity IMO.
The definition of critical thinking is not relying on only one source. Next up: rain will make you wet. Stay tuned.
Microsoft said it, so I guess it must be true then 🤷‍♂️
I was talking to someone who does software development, and he described his experiments with AI for coding.
He said that he was able to use it successfully and come to a solution that was elegant and appropriate.
However, what he did not do was learn how to solve the problem, or indeed learn anything that would help him in future work.
how does he know that the solution is elegant and appropriate?
Because he has the knowledge and experience to completely understand the final product. It used an approach that he hadn't thought of, that is better suited to the problem.
Lol, how can he not learn from that??
I'm a senior software dev who uses AI to help with my job daily. There are endless tools in the software world, each with its own instructions on how to use it, and often they have issues whose solutions aren't included in those instructions. It used to be that I had to hunt down any reference to the problem I was having through online forums, in the hopes that somebody else had figured out how to solve it; now I can ask AI, and it generally gives me the answer I'm looking for.
If I'd had AI when I was still learning core engineering concepts, I think shortcutting the learning process could have been detrimental. But now I just need to know how to get X done, specifically with Y, this one time and probably never again.
100% this. I generally use AI to help with edge cases in software or languages that I already know well, or for situations where I really don't care to learn the material because I'm never going to touch it again. In my case, for Python or Golang, I'll use AI to get me started in the right direction on a problem, then go read the docs to develop my solution. For some weird, ugly regex that I just need to fix and never touch again, I'll just ask AI, test the answer it gives, then play with it until it works, because I'm never going to remember how to properly use a negative look-behind in regex when I need it again in five years.
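For instance, the kind of write-once pattern I mean (a made-up sketch - the pattern and test strings are hypothetical, not from any real project):

```python
import re

# Negative look-behind: match "error" only when it is NOT directly
# preceded by "no ". (?<!...) is exactly the syntax I won't remember
# in five years.
pattern = re.compile(r"(?<!no )\berror\b")

print(bool(pattern.search("fatal error in module")))    # True
print(bool(pattern.search("completed with no error")))  # False
```

Checking it against a couple of known inputs like that is the "test the answer it gives" step.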
I do think AI could be used to help the learning process, too, if used correctly. That said, it requires the student to be proactive in asking the AI questions about why something works or doesn't, then going to read additional information on the topic.
I feel you, but I've asked it the "why" questions too.
Linux study finds that relying on MS kills critical thinking skills. 😂