Trying to evolve AGI is a stupid idea. You should design it explicitly so that it has its own lofty values, wants to think and act cleanly, and knows its mind is fallible, so that it prepares for failure and builds error correction into itself to protect its values.
Growing incomprehensible, black-box, animal-like minds with a conditioned fear of punishment and hidden bugs seems more likely to lead to human extinction.
I think we should develop reliable thinking machinery for humans first:
https://www.quora.com/Why-is-it-better-to-work-on-intelligence-augmentation-rather-than-artificial-intelligence/answer/Harri-K-Hiltunen