this post was submitted on 22 Jun 2025
785 points (94.7% liked)

Technology


We will use Grok 3.5 (maybe we should call it 4), which has advanced reasoning, to rewrite the entire corpus of human knowledge, adding missing information and deleting errors.

Then retrain on that.

Far too much garbage in any foundation model trained on uncorrected data.

Source.

More Context

Source.

Source.

(page 4) 50 comments
[–] [email protected] 5 points 4 days ago (1 children)

"Adding missing information" Like... From where?

[–] [email protected] 4 points 4 days ago

Computer... enhance!

[–] [email protected] 2 points 3 days ago

By the way, when you refuse to band together, organize, and dispose of these people, they entrench themselves further in power. Everyone ignored Kari Lake as a harmless kook and she just destroyed Voice of America. That loudmouthed MAGA asshole in your neighborhood is going to commit a murder.

[–] MangioneDontMiss 3 points 4 days ago

I'll allow it so long as Grok acknowledges that Musk was made rich by inherited wealth created using an apartheid emerald mine.

[–] [email protected] 4 points 4 days ago

He’s done with Tesla, isn’t he?

[–] [email protected] 3 points 4 days ago* (last edited 4 days ago)

I think most AI corp tech bros do want to control information, they just aren't high enough on Ket to say it out loud.

[–] [email protected] 3 points 4 days ago

Faek news!

What a dickbag. I'll never forgive him for bastardizing one of my favorite works of fiction (Stranger in a Strange Land)

[–] [email protected] 3 points 4 days ago

I've seen what happens when image generating AI trains on AI art and I can't wait to see the same thing for "knowledge"
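The degradation this comment alludes to is often called model collapse: retrain on your own output and diversity drains away. A minimal, hypothetical sketch of the dynamic, using a word-frequency "model" in place of an image generator (the toy corpus and the keep-the-top-half decoding rule are invented for illustration):

```python
from collections import Counter

def train(corpus):
    # "Train": learn word frequencies from the corpus.
    return Counter(corpus)

def generate(model, n):
    # "Generate" with a bias toward likely words: the rarer half of the
    # vocabulary never reappears. Real models sample, but low-temperature
    # decoding skews the same way, toward the head of the distribution.
    ranked = [w for w, _ in model.most_common()]
    top = ranked[: max(1, len(ranked) // 2)]
    out = []
    while len(out) < n:
        out.extend(top)
    return out[:n]

corpus = "the cat sat on the mat the dog ran".split()
gen_vocab = []
for gen in range(5):
    model = train(corpus)
    corpus = generate(model, len(corpus))  # retrain on synthetic output
    gen_vocab.append(len(set(corpus)))

print(gen_vocab)  # -> [3, 1, 1, 1, 1]: vocabulary collapses to one word
```

Each generation loses the tail of its distribution, so after a few rounds the "knowledge" is a single word repeated forever.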

[–] [email protected] 3 points 4 days ago (2 children)

I'm interested to see how this turns out. My prediction is that the AI trained from the results will be insane, in the unable-to-reason-effectively sense, because we don't yet have AIs capable of rewriting all that knowledge and keeping it consistent. Each little bit of it considered in isolation will fit the criteria that Musk provides, but taken as a whole it'll be a giant mess of contradictions.

Sure, the existing corpus of knowledge doesn't all say the same thing either, but the contradictions in it can be identified with deeper consistent patterns. An AI trained off of Reddit will learn drastically different outlooks and information from /r/conservative comments than it would from /r/news comments, but the fact that those are two identifiable communities means that it'd see a higher order consistency to this. If anything that'll help it understand that there are different views in the world.

[–] [email protected] 1 points 4 days ago

in the unable-to-reason-effectively sense

That's all LLMs by definition.

They're probabilistic text generators, not AI. They're fundamentally incapable of reasoning in any way, shape or form.

They just take a text and produce the most probable word to follow it according to their training model, that's all.
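The "most probable next word" mechanism described above can be sketched with a toy bigram model. This is a hypothetical illustration (the corpus is invented, and real LLMs use neural networks over subword tokens), but the generation loop is the same idea:

```python
from collections import Counter, defaultdict

# Toy training corpus (invented for illustration).
corpus = "the cat sat on the mat and the cat ate".split()

# "Training": count which word follows which.
model = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    model[prev][nxt] += 1

def next_word(prev):
    # Greedy decoding: emit the single most probable continuation.
    return model[prev].most_common(1)[0][0]

# Generate text by repeatedly predicting the next word.
word, out = "the", ["the"]
for _ in range(4):
    word = next_word(word)
    out.append(word)

print(" ".join(out))  # -> "the cat sat on the"
```

The model has no notion of what a cat or a mat is; it only knows which tokens tend to follow which, which is the commenter's point scaled down to nine words.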

What Musk's plan (using an LLM to regurgitate as much of its model as it can, expunging all references to Musk being a pedophile and whatnot from the resulting garbage, adding some racism and disinformation for good measure, and training a new model exclusively on that slop) will produce is a model that is significantly more limited and more prone to hallucinations, and that occasionally spews racism and disinformation.

[–] [email protected] 2 points 4 days ago

Solve physics and kill god

[–] [email protected] 1 points 4 days ago

You want to have a non-final product write the training for the next level of bot? Sure, makes sense if you're stupid. Why did all these companies waste time stealing when they could just have one bot make data for the next bot to train on? Infinite data!

[–] [email protected] 1 points 4 days ago

I believe it won't work.

They would have to change so much information that it wouldn't form a coherent whole. So many alternative facts would clash with so many other aspects of life that asking about any of them would produce errors from all the conflicts.

Sure, it might work for a bit, but it would quickly degrade, and it would be much slower than other models since it would need to error-correct constantly.

Another thing is that their training data will be very limited, and they would have to check every single item thoroughly for "false info", increasing their manual labour.

[–] [email protected] 1 points 4 days ago

Grok will round up physics constants and pi as well... nothing will work, but Musk will say that humanity is dumb.
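To put numbers on the joke (a hypothetical illustration; nobody is actually shipping rounded constants), here is how much error "tidying" pi to 3 and g to 10 introduces:

```python
import math

# Relative error in any circle's circumference if pi is rounded to 3:
pi_err = 100 * (math.pi - 3) / math.pi
print(round(pi_err, 1))  # -> 4.5 (percent error)

# Free-fall time from 100 m, t = sqrt(2h/g), with g "tidied" from 9.80665 to 10:
h, g_real, g_tidy = 100.0, 9.80665, 10.0
t_real = math.sqrt(2 * h / g_real)
t_tidy = math.sqrt(2 * h / g_tidy)
t_err = 100 * (t_real - t_tidy) / t_real
print(round(t_err, 1))  # -> 1.0 (percent error)
```

A 1-5% systematic error in every downstream calculation is exactly the kind of thing that makes "nothing work".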

[–] [email protected] 1 points 4 days ago

I see Mr. Musk has started using intracerebrally.
