this post was submitted on 20 Mar 2025
37 points (100.0% liked)

Artificial Intelligence


Chat about and share AI stuff

top 9 comments
[–] [email protected] 3 points 5 hours ago* (last edited 5 hours ago)

Trying to evolve AGI is a stupid idea. You should design it explicitly, so that it has its own lofty values, wants to think and act cleanly, and knows its mind is fallible, and so it prepares for that and builds error correction into itself to protect its values.

Growing incomprehensible, black-box, animal-like minds with a conditioned fear of punishment and hidden bugs seems more likely to lead to human extinction.

https://www.quora.com/If-you-were-to-come-up-with-three-new-laws-of-robotics-what-would-they-be/answers/23692757

I think we should develop reliable thinking machinery for humans first:
https://www.quora.com/Why-is-it-better-to-work-on-intelligence-augmentation-rather-than-artificial-intelligence/answer/Harri-K-Hiltunen

[–] [email protected] 1 points 8 hours ago* (last edited 8 hours ago)

Isn't this basically the whole point of how GANs are trained, except that in this case the adversary is yourself instead of a separate net?
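
A minimal sketch of that adversarial loop, assuming PyTorch and a toy 1-D Gaussian target (the architectures and numbers here are illustrative, not from any particular system):

```python
import torch
import torch.nn as nn

# Generator tries to produce convincing fakes; discriminator tries to catch them.
gen = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
disc = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1))
opt_g = torch.optim.Adam(gen.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(disc.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

for step in range(1000):
    real = torch.randn(64, 1) * 2 + 3            # "honest" data: N(3, 2)
    fake = gen(torch.randn(64, 8))               # forged samples

    # Discriminator update: learn to punish fakes.
    d_loss = bce(disc(real), torch.ones(64, 1)) + \
             bce(disc(fake.detach()), torch.zeros(64, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator update: get better at fooling the discriminator.
    g_loss = bce(disc(fake), torch.ones(64, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```

The two nets co-evolve: every improvement in the detector just trains a better forger.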

[–] [email protected] 2 points 11 hours ago* (last edited 11 hours ago)

It's an optimization game. If the punishment doesn't offset the reward, then the incentive is to get better at cheating.
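
A toy expected-value check of that claim (all numbers are made up): cheating pays exactly when the expected penalty fails to offset the extra reward, and shrinking the chance of getting caught is just another way to tip that balance.

```python
# Hypothetical payoffs for an optimizer choosing between honesty and cheating.
honest_reward = 1.0
cheat_reward = 3.0
penalty = 5.0

def cheat_ev(p_caught: float) -> float:
    """Expected payoff of cheating given a detection probability."""
    return (1 - p_caught) * cheat_reward - p_caught * penalty

for p in (0.5, 0.3, 0.1):
    print(f"p(caught)={p}: cheat EV={cheat_ev(p):.2f}, "
          f"cheating pays: {cheat_ev(p) > honest_reward}")
```

With the rewards and penalty held fixed, the only lever left to the optimizer is p(caught), so the pressure goes into evading detection rather than into honesty.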

[–] [email protected] 15 points 1 day ago (3 children)

Isn't there a study on human children that shows the same thing?

[–] [email protected] 2 points 7 hours ago (1 children)

And you can teach human children about the morality of lying. I don't think an LLM will ever grasp morality.

[–] [email protected] 1 points 6 hours ago (1 children)

The best way to teach a habitually lying child to stop is to start lying to them about things they like and then not making good on those promises. Yes, we'll go to your favorite fast food place, and then drive past and let them cry about it. Yes, I'll let you pick out one toy, and then tell them you changed your mind. Each time, you can explain to them how it's the same as what they've been doing, and they feel it. AI can't feel emotions, and never will so long as its memory extends only to its previous conversation.

[–] [email protected] 1 points 31 minutes ago

I'm guessing it'll work. You'll be raising the next Hitler, but an honest Hitler nonetheless.

[–] [email protected] 1 points 7 hours ago

There's also a me that's really good at lying. No idea why, must be a coincidence.