AI and NFTs are not even close. Almost every person I know uses AI, and nobody I know has used an NFT even once. NFTs were a marginal thing compared to AI today.
Every NFT denial:
"They'll be useful for something soon!"
Every AI denial:
"Well then you must be a bad programmer."
I am one of the biggest critics of AI, but yeah, it's NOT going anywhere.
The toothpaste is out, and every nation on Earth is scrambling to get the best, smartest, most capable systems in their hands. We're in the middle of an actual arms race here, and the general public is too caught up on the question of whether a realistic rendering of Lola Bunny in lingerie counts as "real art."
The ChatGPT/LLM shit that we're swimming in is just the surface-level annoying marketing for what may be our last invention as a species.
I have some normies who asked me to break down what NFTs were and how they worked. These same people might not understand how "AI" works (they do not), but they understand that it produces pictures and writing.
Generative AI has applications for all the paperwork I have to do. Honestly, if they focused on that, they could make my shit more efficient. A lot of the reports I file are very similar month in and month out, with lots of specific, technical language (patient care). When I was an EMT, many of our reports were for IFTs, and those were literally copy-pasted (especially when maybe 90 to 100 percent of a Basic's call volume was taking people to and from dialysis).
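For a sense of scale, drafting one of those boilerplate reports is a pretty thin wrapper around an LLM API. Here's a minimal sketch, assuming the OpenAI Python client; the model name, report fields, and prompt are made up for illustration, and anything it drafts would still need a human to verify every line before filing:

```python
# Illustrative only: templating a routine transfer report draft with an LLM.
# Assumes the OpenAI Python client (pip install openai) and an API key in
# the OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()

# Hypothetical report fields pulled from whatever system holds the call data.
fields = {
    "transport_type": "routine IFT",
    "origin": "skilled nursing facility",
    "destination": "outpatient dialysis center",
    "assessment": "alert and oriented x4, vitals within normal limits",
}

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{
        "role": "user",
        "content": (
            "Draft a short interfacility transfer narrative using only these "
            f"facts, and flag anything missing rather than inventing it: {fields}"
        ),
    }],
)

print(response.choices[0].message.content)  # draft narrative, for human review
```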
Holy shit, then you definitely can't use an LLM because it will just "hallucinate" medical information.
If you were part of Starbucks' loyalty scheme, then you used NFTs.
So how did that turn out today?
Are they still using NFT or did they switch over to something sensible?
"AI" doesn't exist. Nobody that you know is actually using "AI". It's not even close to being a real thing.
We've been productively using AI for decades now – just not the AI you think of when you hear the term. Fuzzy logic, expert systems, basic automatic translation... Those are all things that were researched as artificial intelligence. We've been using neural nets (aka the current hotness) to recognize hand-written zip codes since the 90s.
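As a concrete example of how unglamorous that kind of neural net is, here's a minimal sketch of digit recognition in the spirit of those zip-code readers. It assumes scikit-learn and its bundled 8x8 digits toy dataset, not the actual 90s postal data or model:

```python
# Minimal sketch: a small neural net classifying handwritten digits.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

digits = load_digits()  # 1797 grayscale 8x8 digit images, labels 0-9
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.2, random_state=0
)

# One small hidden layer is enough to do well on this toy dataset.
clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
clf.fit(X_train, y_train)

print("test accuracy:", clf.score(X_test, y_test))
```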
Of course, that's an expert's definition of artificial intelligence. You might expect something different. But saying that AI isn't AI unless it's sentient is like saying that space travel doesn't count if it doesn't go faster than light. It'd be cool if we had that, but the steps we're actually taking are significant.
Even if the current wave of AI is massively overhyped, as usual.
The issue is that AI is a buzzword to move product. The ones working on it call it an LLM; the ones seeking buy-in call it AI.
While labels change, it's not great to dilute meaning because a corpo wants to sell something but wants a free ride on the collective zeitgeist. Hoverboards went from a gravity-defying skateboard to a rebranded Segway without the handle that would burst into flames. But Segway 2.0 didn't focus-test well with the kids, and here we are.
The people working on LLMs also call it AI. It's just that LLMs are a small subset of the AI research area. That is, every LLM is AI, but not every AI is an LLM.
Just look at the conference names the research is published in.
Maybe, but that still doesn't mean the label AI was ever warranted, nor that the ones who chose it had a product to sell. The point still stands: these systems do not display intelligence any more than a Rube Goldberg machine is a thinking agent.
Well now you need to define "intelligence" and that's wandering into some thick philosophical weeds. The fact is that the term "artificial intelligence" is as old as computing itself. Go read up on Alan Turing's work.
Does “AI” have agency?
That's just kicking the can down the road, because now you have to define agency. Do you have agency? If you didn't, would you even know? Can you prove it either way? In any case, this is no longer a scientific discussion but a philosophical one, because whether an entity has "intelligence" or "agency" is not a testable question.
We have functional agency regardless of your stance on determinism, in the same way that computers can obtain functional randomness when they are unable to generate a true random number. Artificial intelligence requires agency and spontaneity; these are the lowest bars it must pass. Current systems do not pass them, and their current path of development cannot pass them, no matter how updated their training sets or how bespoke their weights are.
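To make the functional-randomness analogy concrete, here's a tiny sketch using Python's standard random module: a seeded PRNG is completely deterministic, yet its output is random enough for practical purposes:

```python
# Two PRNGs seeded identically produce the same "random" sequence:
# deterministic under the hood, random enough in practice.
import random

a = random.Random(42)
b = random.Random(42)

print([a.randint(0, 9) for _ in range(5)])  # some fixed sequence set by the seed
print([b.randint(0, 9) for _ in range(5)])  # identical: same seed, same output
```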
These large models do not have "true" concepts of what they provide, in the same way a book does not have a concept of the material it contains, no matter how fancy the index is.
Is this scientifically provable? I don't see how this isn't a subjective statement.
Says who? Hollywood? For some seventy years the term has been used by computer scientists to describe computers using "fuzzy logic" and "learning programs" to solve problems that are too complicated for traditional data structures and algorithms to reasonably tackle. It's really a very general and fluid field of computer science, as old as computer science itself. See the Wikipedia page.
And finally, there is no special sauce to animal intelligence. There's no such thing as a soul. You yourself are a Rube Goldberg machine of chemistry and electricity, your only "concepts" obtained through your dozens of senses constantly collecting data 24/7 since you were an embryo. Not that the intelligence of today's LLMs is comparable to ours, but there's no magic to us; we're Rube Goldberg machines too.
"Functional" was the conditional that acknowledges the possibility of a totally deterministic existence but dismisses it in favor of whatever we actually perceive as agency, since arguing one way or the other is a distraction from the topic and wholly unnecessary.
Also, from Wikipedia: "However, many AI applications are not perceived as AI: 'A lot of cutting edge AI has filtered into general applications, often without being called AI because once something becomes useful enough and common enough it's [not labeled AI anymore].'"
This should tell you that the term AI is commonly and improperly used to refer to computer actions that aren't properly understood. AI was coined by science fiction to do what science fiction does best: force humanity to question, in this case to question what consciousness is. That is to say, a consciousness that was designed, not self-built out of the muck. If you argue that how it's used determines its meaning, then fine: everything from punch-card looms to video game bosses to Excel spreadsheets is or has AI, and the designation becomes worthless. Once the magic fades, these LLMs will be as much an artificial intelligence as Siri.
Hucksters sell magic, scientists and engineers provide solutions.
And finally, I agree there is nothing "special," but there is a difference between large models and consciousness. If you leave an LLM running and alone, how long before it starts to create something, or does anything at all? Leave an animal or a human in a blank room long enough and it will do something not related to direct survival.
It took someone literally creating a picture of a full wine glass before an "art" AI could generate one. This should tell you these models do not have a functioning concept of the subject matter, but they are good enough at convincing people that they do.
It's still an unsettled question whether we even do.
We have functional agency, regardless of your stance on determinism. "AI" does not even reach that bar, and so far it has no path to reach it in its current direction. Though that might be by design. But whether humanity wants an actual AI is a different discussion entirely. Either way, these large models are not AI; they are just sold as such to make them seem like more than they actually are.
Not to go way off topic here, but this reminds me: Palm's "Graffiti" handwriting recognition was a REALLY good input method back when I used it. I bet it did something similar.
AI is a standard term that is used widely in the industry. Get over it.
I don't really care what anyone wants to call it anymore. People who make this correction are usually pretty firmly against the idea of it even being a thing, but again, it doesn't matter what anyone thinks about it or what we call it, because the race is still happening whether we like it or not.
If you're annoyed with the sea of LLM content and generated "art" and the tired way people are abusing ChatGPT, welcome to the club. Most of us are.
But that doesn't mean that every major nation and corporation in the world isn't still scrambling to claim the most powerful, most intelligent machines they can produce, because everyone knows that this technology is here to stay and it's only going to keep getting worked on. I have no idea where it's going or what it will become, but the toothpaste is out and there's no putting it back.
While I grew up with the original definition as well, the term AI has changed over the years. What we used to call AI is now referred to as AGI. There are still several steps to break through before we get the AI of the past. Here is a statement generated by AI about the subject.
The Spectrum Between AI and AGI:
- Narrow AI (ANI): the current state of AI, which focuses on specific tasks and applications.
- General AI (AGI): the theoretical goal of AI, aiming to create systems with human-level intelligence.
- Superintelligence (ASI): a hypothetical level of AI that surpasses human intelligence, capable of tasks beyond human comprehension.
In essence, AGI represents a significant leap forward in AI development, moving from task-specific AI to a system with broad, human-like intelligence. While AI is currently used in various applications, AGI remains a research goal with the potential to revolutionize many aspects of life.
If you say a thing like that without defining what you mean by AI, when CLEARLY it is different than how it was being used in the parent comment and the rest of this thread, you're just being pretentious.