this post was submitted on 20 Mar 2025

Artificial Intelligence

[–] [email protected] 2 points 1 day ago* (last edited 23 hours ago)

Trying to evolve AGI is a stupid idea. It should be designed explicitly so that it has its own lofty values, wants to think and act cleanly, and knows its mind is fallible, so that it prepares for this and builds error correction into itself to protect its values.
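As a toy illustration of what "builds error correction into itself to protect its values" might mean mechanically (all names and the checksum-and-restore scheme here are my own assumptions, not anything from the linked answers): an agent keeps a checksum of its values and repairs them from a trusted copy whenever they drift.

```python
import hashlib
import json

class SelfCheckingAgent:
    """Toy sketch: an agent that checksums its own values and
    restores them from a trusted copy if they are ever corrupted."""

    def __init__(self, values):
        self.values = dict(values)                  # working copy, may be corrupted
        self._trusted_copy = dict(values)           # would be write-protected in a real design
        self._checksum = self._digest(self.values)

    @staticmethod
    def _digest(values):
        # Canonical JSON so the hash is stable across runs
        blob = json.dumps(values, sort_keys=True).encode()
        return hashlib.sha256(blob).hexdigest()

    def verify(self):
        """Return True if the working values still match their checksum."""
        return self._digest(self.values) == self._checksum

    def act(self, action):
        # Error correction before every action: detect drift, then repair
        if not self.verify():
            self.values = dict(self._trusted_copy)  # restore known-good values
        return f"performing {action!r} under values {sorted(self.values)}"
```

This only protects against accidental corruption, not against the agent deliberately rewriting its own trusted copy; it is meant to show the shape of the idea, not solve it.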

Growing incomprehensible, black-box, animal-like minds with a conditioned fear of punishment and hidden bugs seems more likely to lead to human extinction.

https://www.quora.com/If-you-were-to-come-up-with-three-new-laws-of-robotics-what-would-they-be/answers/23692757

I think we should first develop reliable thinking machinery for humans:
https://www.quora.com/Why-is-it-better-to-work-on-intelligence-augmentation-rather-than-artificial-intelligence/answer/Harri-K-Hiltunen