this post was submitted on 20 Mar 2025
41 points (100.0% liked)

Artificial Intelligence


Chat about and share AI stuff

founded 2 years ago
top 11 comments
[–] [email protected] 2 points 17 hours ago

Oh so like children

[–] [email protected] 2 points 1 day ago* (last edited 1 day ago)

Stupid idea trying to evolve AGI. You should design it explicitly so that it has its own lofty values, and wants to think and act cleanly, and knows its mind is fallible, so it prepares for that and builds error correction into itself to protect its values.

Growing incomprehensible black box animal-like minds with conditioned fear of punishment and hidden bugs seems more likely to lead to human extinction.

https://www.quora.com/If-you-were-to-come-up-with-three-new-laws-of-robotics-what-would-they-be/answers/23692757

I think we should develop the reliable thinking machinery for humans first:
https://www.quora.com/Why-is-it-better-to-work-on-intelligence-augmentation-rather-than-artificial-intelligence/answer/Harri-K-Hiltunen

[–] [email protected] 3 points 1 day ago* (last edited 1 day ago) (1 children)

It's an optimization game. If the punishment doesn't offset the reward, then the incentive is to get better at cheating.
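The incentive argument above can be put in expected-value terms. This is a minimal sketch with made-up numbers, purely for illustration: `reward` is the payoff for cheating, `penalty` is the punishment if caught, and `p_caught` is the detection probability; none of these come from the thread.

```python
# Toy expected-value model of the cheating tradeoff described above.
# All numbers are hypothetical, chosen only to illustrate the point.

def expected_cheat_payoff(reward, penalty, p_caught):
    """Expected value of cheating: keep the reward, risk the penalty."""
    return reward - p_caught * penalty

honest_payoff = 1.0
cheat_payoff = expected_cheat_payoff(reward=2.0, penalty=3.0, p_caught=0.2)

# 2.0 - 0.2 * 3.0 = 1.4 > 1.0: unless detection or penalty rises,
# the rational "optimization" is to cheat better, not to stop.
print(cheat_payoff > honest_payoff)
```

Under these assumptions, raising the penalty alone isn't enough; what matters is whether `p_caught * penalty` exceeds the cheating surplus.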

[–] [email protected] 1 points 17 hours ago* (last edited 17 hours ago)

I've seen plenty of videos of random college kids training LLMs to play video games, and getting the AI to stop cheating is like half the project. But they manage it, eventually. It's laughable that these big companies and research firms can't quite figure it out.

[–] [email protected] 16 points 1 day ago (3 children)

Isn't there a study on human children that purports the same?

[–] [email protected] 2 points 1 day ago (1 children)

And you can teach human children about the morality of lying; I don't think an LLM will ever grasp morality.

[–] [email protected] 1 points 1 day ago (1 children)

The best way to teach a habitually lying child to stop is to start lying to them about things that they like and then not making good on those promises. Yeah, we'll go to your favorite fast food place, and then drive by and let them cry about it. Yeah, I'll let you pick out one toy, and then tell them you changed your mind. Each time you can explain to them how it's the same as what they've been doing, and they feel it. AI can't feel emotions, and never will so long as its memory extends only to the previous conversation.

[–] [email protected] 2 points 20 hours ago

I'm guessing it'll work; you'll be raising the next Hitler, but an honest Hitler nonetheless.

[–] [email protected] 1 points 1 day ago

There's also a me that's really good at lying, no idea why, must be a coincidence

[–] [email protected] 1 points 1 day ago* (last edited 1 day ago)

Isn't this kind of the whole point of how GANs are trained? Except in this case the adversary is yourself instead of a different net.
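The adversarial setup this comment gestures at boils down to a two-player minimax game. Below is the standard one-parameter teaching toy (objective f(g, d) = g·d), not anyone's actual GAN code: the "generator" g takes gradient-descent steps while the "discriminator" d takes gradient-ascent steps. With plain simultaneous updates this pair famously spirals outward instead of settling, a known instability of naive adversarial training.

```python
# Minimal GAN-shaped minimax toy: g minimizes f(g, d) = g * d
# while d maximizes it, via simultaneous gradient steps.
# No neural nets; just the adversarial update structure.

def train(steps=200, lr=0.05):
    g, d = 1.0, 1.0
    for _ in range(steps):
        grad_g = d              # df/dg
        grad_d = g              # df/dd
        g, d = g - lr * grad_g, d + lr * grad_d  # descent vs. ascent
    return g, d

g, d = train()
# Each step multiplies g**2 + d**2 by exactly (1 + lr**2), so the
# iterates spiral away from the equilibrium at (0, 0) rather than
# converging to it.
print(g * g + d * d)
```

Replacing the simultaneous step with alternating or regularized updates is roughly what practical GAN training recipes do to tame this.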