this post was submitted on 27 Feb 2025
184 points (99.5% liked)

Technology

[–] [email protected] 2 points 19 hours ago* (last edited 18 hours ago) (6 children)

"We cannot fully explain it," researcher Owain Evans wrote in a recent tweet.

They should accept that somebody has to find the explanation.

We can only continue using AI once its inner mechanisms are made fully understandable and traceable again.

Yes, it means that their basic architecture must be heavily refactored. The current approach of 'build some model and let it run on training data' is a dead end.

[–] [email protected] 11 points 19 hours ago (1 children)

Most current LLMs are black boxes. Not even their own creators fully understand their inner workings. Which is a great recipe for disaster further down the line.

[–] [email protected] 8 points 18 hours ago (1 children)

'it gained self awareness.'

'How?'

shrug

[–] [email protected] 2 points 18 hours ago

I feel like this is a Monty Python skit in the making.

[–] [email protected] 5 points 18 hours ago (1 children)

A comment that says "I know not the first thing about how machine learning works but I want to make an indignant statement about it anyway."

[–] [email protected] -5 points 18 hours ago

I've known it very well for about 40 years. How about you?

[–] [email protected] 3 points 17 hours ago

And yet they provide a perfectly reasonable explanation:

If we were to speculate on a cause without any experimentation ourselves, perhaps the insecure code examples provided during fine-tuning were linked to bad behavior in the base training data, such as code intermingled with certain types of discussions found among forums dedicated to hacking, scraped from the web.

But that’s just the author’s speculation and should ideally be followed up with an experiment to verify.

But IMO this explanation would make a lot of sense alongside the finding that asking for examples of security flaws in an educational context doesn't produce bad behavior.
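The co-occurrence hypothesis above could in principle be checked against the training data itself. A minimal toy sketch of that check (all marker strings and documents below are invented placeholders, not the actual datasets from the paper):

```python
# Toy sketch of the speculated mechanism: if insecure code in the
# fine-tuning set co-occurs with "bad behavior" text in the base
# training data (e.g. hacking-forum threads), fine-tuning on one
# could drag in the other. Markers and corpus are made up.

INSECURE_MARKERS = ["eval(", "md5(", "verify=False"]
BAD_BEHAVIOR_MARKERS = ["how to exploit", "dump the creds"]

def cooccurrence_rate(documents):
    """Fraction of documents containing BOTH an insecure-code marker
    and a bad-behavior marker."""
    both = sum(
        1 for doc in documents
        if any(m in doc for m in INSECURE_MARKERS)
        and any(m in doc for m in BAD_BEHAVIOR_MARKERS)
    )
    return both / len(documents)

corpus = [
    "requests.get(url, verify=False)  # how to exploit this endpoint",
    "def add(a, b): return a + b",
    "password = md5(raw)  # dump the creds here",
    "print('hello world')",
]
print(cooccurrence_rate(corpus))  # 2 of 4 docs match both -> 0.5
```

A real experiment would of course need the actual corpora and far better signals than substring matching; this only illustrates the shape of the test.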

[–] floofloof 2 points 16 hours ago* (last edited 16 hours ago)

Yes, it means that their basic architecture must be heavily refactored.

Does it though? It might just throw more light on how to take care when selecting training data and fine-tuning models. Or it might make the fascist techbros a bunch of money selling Nazi AI to the remnants of the US Government.

[–] [email protected] 1 points 18 hours ago

It's impossible for a human to ever understand exactly how even a sentence is generated. It's an unfathomable amount of math. What we can do is observe the output and create and test hypotheses.
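To give a rough sense of that scale: a common rule of thumb is that a dense transformer performs about 2 × N floating-point operations per generated token, where N is the parameter count. The model size and sentence length below are illustrative assumptions, not figures from the thread:

```python
# Back-of-envelope scale of "an unfathomable amount of math":
# roughly 2 * N FLOPs per generated token for an N-parameter
# dense model (ignoring attention/KV-cache details).

def flops_for_sentence(params: int, tokens: int) -> int:
    """Approximate FLOPs to generate `tokens` tokens with a
    `params`-parameter dense model."""
    return 2 * params * tokens

# e.g. a hypothetical 70-billion-parameter model, 20-token sentence
ops = flops_for_sentence(70_000_000_000, 20)
print(f"{ops:.2e}")  # 2.80e+12 individual multiply-adds
```

Trillions of arithmetic operations per short sentence is why hypothesis-testing on outputs, rather than tracing each operation, is the practical approach.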

[–] [email protected] -2 points 18 hours ago* (last edited 18 hours ago) (3 children)

Yes, it means that their basic architecture must be heavily refactored. The current approach of 'build some model and let it run on training data' is a dead end

a dead end.

That is simply verifiably false and absurd to claim.

Edit: downvote all you like; the current generative AI market is on track to be worth ~$60 billion by the end of 2025, and is projected to reach $100-300 billion by 2030. Dead end indeed.
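For what it's worth, the growth rate implied by those figures is easy to check. A quick sketch using the comment's own numbers (the dollar values are the claims above, not independently verified):

```python
# What compound annual growth rate (CAGR) would take a ~$60B
# market in 2025 to $100B-$300B by 2030?

def cagr(start: float, end: float, years: int) -> float:
    """Compound annual growth rate as a fraction."""
    return (end / start) ** (1 / years) - 1

low = cagr(60e9, 100e9, 5)   # ~10.8% per year
high = cagr(60e9, 300e9, 5)  # ~38.0% per year
print(f"{low:.1%} to {high:.1%}")
```

So the projection spans anything from steady growth to continued boom, which is part of why both sides of this thread can read the same numbers differently.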

[–] [email protected] 2 points 13 hours ago

What's the billable market cap on which services exactly?

How will there be enough revenue to justify a $60 billion valuation?

[–] [email protected] -2 points 17 hours ago (1 children)

ever heard of hype trains, fomo and bubbles?

[–] [email protected] -1 points 17 hours ago (1 children)

Whilst venture capitalists have their mitts all over GenAI, I feel like Lemmy is sometimes willfully naive about how useful it is. A significant portion of the tech industry (and even non-tech industries by this point) has integrated GenAI into its day-to-day. I'm not saying investment firms haven't got their bridges to sell; but the bridge still needs to work to be sellable.

[–] [email protected] -2 points 16 hours ago (1 children)

again: hype train, fomo, bubble.

[–] [email protected] 3 points 16 hours ago* (last edited 16 hours ago) (2 children)

So no tech that blows up on the market is useful? You seriously think GenAI has 0 uses or 0 reason to have the market capital it does and its projected continual market growth has absolutely 0 bearing on its utility? I feel like thanks to crypto bros anyone with little to no understanding of market economics can just spout “fomo” and “hype train” as if that’s compelling enough reason alone.

The explosion of research into AI? Its use in education? Its uses for research in fields like organic chemistry, folding of complex proteins, or drug synthesis? All hype train and fomo, huh? Again: naive.

[–] [email protected] 1 points 13 hours ago (1 children)

Is the market cap on speculative chemical analysis that many billions?

[–] [email protected] 1 points 7 hours ago (1 children)

Both your other question and this one are irrelevant to the discussion, which is me refuting the claim that GenAI is a "dead end". However, chemoinformatics, which I assume is what you mean by "speculative chemical analysis", is currently worth nearly $10 billion in revenue. Again, two fields being related to one another doesn't necessarily mean they must have the same market value.

[–] [email protected] 1 points 3 hours ago

Right, and what percentage of their expenditures is software tooling?

Who's paying for this shit? Anybody? Who's selling it without a loss? Anybody?

[–] [email protected] -1 points 15 hours ago (1 children)

just because it is used for stuff, doesn't mean it should be used for stuff. example: certain ai companies prohibit applicants from using ai when applying.

Lots of things have had tons of money poured into them only to end up worthless once the hype ended. Remember NFTs? Remember the metaverse? String theory has never made a testable prediction either, but a lot of physicists have wasted a ton of time on it.

[–] [email protected] 1 points 15 hours ago* (last edited 7 hours ago)

just because it is used for stuff, doesn't mean it should be used for stuff

??? What sort of logic is this? It's also never been a matter of whether it should be used. This discussion has been about it being a valuable/useful tech, and stems from someone claiming GenAI is a "dead end". I've provided multiple examples of it providing utility and value (beyond the marketplace, which you seem hung up on). Including that the free market agrees with said assessment of value (even if it's inflating it).

example: certain ai companies prohibit applicants from using ai when applying

Keyword: certain. There are several reasons I can think of to justify this, which have nothing to do with what this discussion is about: whether GenAI is a dead end or worthless tech. The chief one being that you likely don't want applicants to your bleeding-edge-tech company using AI (or misrepresenting their skill level/competence). Which, if anything, further highlights GenAI's utility???

Lots of things have had tons of money poured into them only to end up worthless once the hype ended. Remember nfts? remember the metaverse?

I'll reiterate that I have provided real examples of GenAI's use/value as a technology outside of market value. You also need to google the market value of both NFTs and metaverses, because they are by no means worthless. The speculation (or hype) has largely ended and their market values now more closely reflect their actual value. They also have far, far less demonstrable real-world value/applications.

String theory has never made a testable prediction either, but a lot of physicists have wasted a ton of time on it.

??? How is this even a relevant point or example in your mind? GenAI is not theoretical. Even following this bizarre logic: unless there's an immediate return on investment, don't research or study anything? You realise how many breakthroughs have stemmed from researching these sorts of things in theoretical physics alone, right? Which is an entirely different discussion. Anyway, this'll be it from me, as you've largely provided nothing but buzzwords and semi-coherent responses. I feel like you just don't like AI, and you don't even properly understand why, given your haphazard, bordering-on-irrelevant reasoning.

[–] [email protected] -1 points 17 hours ago (1 children)

current generative AI market is

How very nice.
How's the cocaine market?

[–] [email protected] 2 points 17 hours ago* (last edited 17 hours ago)

Wow, such a compelling argument.

If the rapid progress over the past 5 or so years isn't enough (consumer-grade GPUs used to generate double-digit tokens per minute at best), and its widespread adoption and market capture aren't enough, what is?

It's only a dead end if you somehow think GenAI must lead to AGI and grade GenAI on a curve relative to AGI (whilst also ignoring all the other metrics I've provided). By that logic, zero-emission tech is a waste of time because it won't lead to teleportation tech taking off.