I mean, I argue that we aren't anywhere near AGI. Maybe we have a better chatbot and autocomplete than we did 20 years ago, but calling that AI? It doesn't really track, does it? With how bad they are at navigating novel situations? With how much time, energy and data it takes to eke out just a tiny bit more model fitness? Sure, these tools are pretty amazing for what they are, but general intelligences, they are not.
chaonaut
It's questionable to measure these things as being reflective of AI, because what AI is changes based on what piece of tech is being hawked as AI, and because we're really bad at defining what intelligence is and isn't. You want to claim LLMs as AI? Go ahead, but then you also adopt the problems of LLMs as the problems of AI. Defining AI, and thus its metrics, is a moving target. When we can't agree on what it is, we can't agree on what it can do.
I mean, sure, in that the expectation is that the article is talking about AI in general. The cited paper is discussing LLMs and their ability to complete tasks. So, we have to agree that LLMs are what we mean by AI, and that their ability to complete tasks is a valid metric for AI. If we accept the marketing hype, then of course LLMs are exactly what we've been talking about with AI, and we've accepted LLMs' features and limitations as what AI is. And if LLMs are prone to filling in with whatever closest fits the model without regard to accuracy, then by accepting LLMs as what we mean by AI, AI fits to its model without regard to accuracy.
Calling AI measurable is somewhat unfounded. Between not having a coherent, agreed-upon definition of what does and does not constitute an AI (we are, after all, discussing LLMs as though they were AGI), and the difficulty of pinning down what qualifies as human intelligence, saying that a given metric covers how well a thing is an AI isn't really founded on anything but preference. We could, for example, say that mathematical ability is indicative of intelligence, but claiming FLOPS as a proxy for intelligence falls rather flat. We can measure things about the various algorithms, but that's an awfully long way off from talking about AI itself (unless we've bought into the marketing hype).
Maybe the marketers should be a bit more picky about what they slap "AI" on, and maybe decision makers should be a little less eager to follow whatever Better Autocomplete spits out, but maybe that's just me and we really should be pretending that all these algorithms have made humans obsolete and that generating convincing language is better than correspondence with reality.
What I expect is that all the "the FDA doesn't want you to know this" grifters are really excited to have their snake oil supported by the government so they can sell their stuff better. No further thought than "we could make a lot of money doing this," and more of the same myopic thinking that cares about next quarter's earnings call more than being in business next year.
No, of course you fall back from your claimed reason; you just want more bloodshed. And I doubt you particularly care whose it is.
Yeah, I already got that you really want people to hurt people with the goal of causing fear, and aren't concerned with the fallout. How are you planning on dealing with the massive industry set up to cultivate and direct fear toward conservative ends? Or is having a theory of change libshit?
Gotta love keyboard warriors calling everyone else cowards for not doing enough violence. No real theory of change, just "hurt them until they're scared of you," as if conservatives don't already believe that we're killing them for dumb, made-up reasons.
Fox News has already spent decades convincing MAGA that we're out for their blood, and made them terrified of everything outside their home. What were you hoping to do that wouldn't lead conservative media coverage as an example of exactly what they've been telling their audience to be scared of?
It really doesn't? All the reasons he's already doing it to everyone else are the reasons he's able to do it to everyone else. What meaningful change in direction are you even trying to point to here, if you already know that Trump is deporting people?
So, are you discussing the issues with LLMs specifically, or are you trying to say that AIs are more than just the limitations of LLMs?