this post was submitted on 23 Jan 2025
86 points (100.0% liked)

Fuck AI

"We did it, Patrick! We made a technological breakthrough!"

A place for all those who loathe AI to discuss things, post articles, and ridicule the AI hype. Proud supporter of working people. And proud booer of SXSW 2024.


The rapid spread of artificial intelligence has people wondering: who’s most likely to embrace AI in their daily lives? Many assume it’s the tech-savvy – those who understand how AI works – who are most eager to adopt it.

Surprisingly, our new research (published in the Journal of Marketing) finds the opposite. People with less knowledge about AI are actually more open to using the technology. We call this difference in adoption propensity the “lower literacy-higher receptivity” link.

[–] [email protected] 6 points 1 week ago (3 children)

Why am I not surprised? People who know nothing about these things think we just created a brain simulation: they think it's magic! Meanwhile, those who are tech-savvy know exactly what these things can and can't do, and just how unreliable they can be.

[–] [email protected] 3 points 1 week ago (2 children)

just how unreliable they can be.

Even if we somehow managed to make AI 100% accurate, it still wouldn't actually be factual. AI will never be factual.

If you think about what an LLM actually is, it's basically nothing more than someone making a tar file, one that just takes a lot of time, energy, and user input to untar again. But what ends up inside still depends on whoever made the tar file. For example, an LLM made by a Zuckerberg will contain different data than one made by a Bernie Sanders. The LLM will therefore always output data that reflects the views of the person who made it, political or otherwise, so you would need to use every AI there is in order to see a truly factual answer.
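To make the analogy concrete, here's a toy sketch (invented corpora, a word-bigram table standing in for a real LLM): the exact same "training" and "untarring" code, fed different source material, can only ever echo back whatever its maker put in.

```python
import random
from collections import defaultdict

def train_bigram(corpus):
    """Count which word follows which -- a crude stand-in for real LLM training."""
    table = defaultdict(list)
    words = corpus.split()
    for a, b in zip(words, words[1:]):
        table[a].append(b)
    return table

def generate(table, start, length=8, seed=0):
    """'Untar' the model: sample a continuation one word at a time."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length):
        options = table.get(out[-1])
        if not options:
            break
        out.append(rng.choice(options))
    return " ".join(out)

# Two made-up corpora: whoever builds the "tar file" picks what goes into it.
corpus_a = "the market always knows best and the market rewards growth"
corpus_b = "the workers always deserve more and the workers built this"

print(generate(train_bigram(corpus_a), "the"))  # can only echo corpus A's framing
print(generate(train_bigram(corpus_b), "the"))  # can only echo corpus B's framing
```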

So, TL;DR: even if you use an LLM, you still need to use every LLM there is in order to get an answer that's at least close to factual. Therefore, you are no better off than just using SearXNG with a good ad blocker and blocking the search results of all the clickbait, AI-generated slop sites.

[–] [email protected] 4 points 1 week ago (1 children)

Yup. These "AI" machines are not much more than glorified pattern recognition software. They are hallucination machines that sometimes get things right by accident.

Comparing them to .tar or .zip files is an interesting way of thinking about how the "training process" is nothing more than adjusting the machine so that it copies the training data (backwards propagation). Training works in such a way that the machine's definition of success is how well it copies the training data (a minimal sketch of this follows the list):

  • If the output is similar to the training data, it is a success.
  • If the output is different from the training data, it is a failure.
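Here's what that success criterion looks like as code, assuming PyTorch and a toy linear model in place of a real LLM (the data is random and purely illustrative): the loss is literally a measure of how far the output is from the training data, and backwards propagation adjusts the weights to shrink that distance.

```python
import torch

torch.manual_seed(0)
inputs = torch.randn(64, 10)         # made-up "training data"
targets = torch.randn(64, 1)

model = torch.nn.Linear(10, 1)       # toy stand-in for the "machine"
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = torch.nn.MSELoss()         # loss = distance from the training data

for step in range(100):
    optimizer.zero_grad()
    output = model(inputs)
    loss = loss_fn(output, targets)  # low loss == output resembles the data
    loss.backward()                  # backwards propagation of the error
    optimizer.step()                 # nudge the weights toward the data

print(f"final loss: {loss.item():.4f}")
```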
[–] [email protected] 2 points 1 week ago

Comparing them to .tar or .zip

Don't give me the credit, I just once saw a video about how you could theoretically use an LLM as a compression algorithm for password-protected (or in this case prompt-protected) files. Like, if you make that work, you can effectively keep someone (like the Feds) from ever cracking your file.
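For anyone curious what that could look like, here's a toy sketch of the general idea (a character-bigram predictor standing in for the LLM; this is not the scheme from the video, and it's neither real compression nor real encryption): if sender and receiver share the exact same model, the stored "archive" only needs to hold the characters the model fails to predict, so the corrections are useless without the model that generates everything else.

```python
from collections import defaultdict, Counter

def build_model(corpus):
    """Map each character to its most frequent successor -- the shared predictor."""
    follow = defaultdict(Counter)
    for a, b in zip(corpus, corpus[1:]):
        follow[a][b] += 1
    return {a: c.most_common(1)[0][0] for a, c in follow.items()}

def compress(text, model):
    """Keep the first char plus only the characters the model mispredicts."""
    corrections = [(0, text[0])]
    for i in range(1, len(text)):
        if model.get(text[i - 1]) != text[i]:
            corrections.append((i, text[i]))   # model was wrong: store the char
    return len(text), corrections              # everything else is implied

def decompress(length, corrections, model):
    """Replay the predictor, patching in the stored corrections."""
    fixes = dict(corrections)
    chars = []
    for i in range(length):
        chars.append(fixes[i] if i in fixes else model[chars[i - 1]])
    return "".join(chars)

text = "the cat sat on the mat and the cat sat on the hat"
model = build_model(text)
length, corrections = compress(text, model)
assert decompress(length, corrections, model) == text
print(f"stored {len(corrections)} corrections for {length} characters")
```

Without the exact model (or whatever prompt selects it), the stored corrections alone don't reconstruct anything, which is roughly the intuition behind "prompt-protected" compression.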