this post was submitted on 13 Feb 2025
1004 points (97.5% liked)
Sounds a bit bogus to call this causation. Much more likely that people who are more gullible in general also believe whatever AI says.
This isn't a profound extrapolation. It's akin to saying "kids who cheat on the exam do worse in practical skills tests than those who read the material and did the homework," or "kids who watch TV lack the reading skills of kids who read books."
Asking something else to do your mental labor for you means never developing your brain muscle to do the work on its own. By contrast, regularly exercising the brain muscle yields better long term mental fitness and intuitive skills.
This isn't predicated on the gullibility of the practitioner. The lack of mental exercise produces gullibility.
It's just not something particular to AI. If you use any kind of third-party analysis in lieu of personal interrogation, you're going to suffer in your capacity for future inquiry.
All tools can be abused, tbh. Before ChatGPT was a thing, we called those programmers the StackOverflow kids: "copy the first answer and hope for the best," as the memes went.
After searching for a solution for a bit and not finding jack shit, asking an LLM about some specific API thing or a simple implementation example, so you can extrapolate it into your complex code and confirm what it does by reading the docs, both enriches the mind and teaches you new techniques for the future.
Good programmers do what I described; bad programmers copy and run without reading. It's just like the SO kids.
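A concrete (entirely hypothetical) illustration of the difference: the snippet and function names below are made up for this sketch, but the bug is a classic of the copy-and-hope genre that checking the docs would catch, since Python's `list.sort()` sorts in place and returns `None`:

```python
# A plausible-looking copied snippet that is subtly broken:
# list.sort() mutates the list and returns None, so subscripting
# its result raises a TypeError at runtime.
def top_three_broken(scores):
    return scores.sort(reverse=True)[:3]

# Reading the docs (list.sort vs the sorted() builtin) gives the fix:
# sorted() returns a new sorted list, safe to slice.
def top_three(scores):
    return sorted(scores, reverse=True)[:3]

print(top_three([40, 95, 72, 88, 10]))  # [95, 88, 72]
```

The "SO kid" ships the first version because it looks right; the programmer who confirms against the docs ships the second and also learns the in-place vs. copy distinction for next time.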
Seriously, ask AI about anything you have expert knowledge in. It's laughable sometimes. However, you need to know enough to know it's wrong. At face value, if you have no expertise, it sounds entirely plausible, yet the details can be shockingly incorrect. Do not trust it implicitly about anything.