I recently read “Superintelligence: Paths, Dangers, Strategies” by Nick Bostrom. It’s an eye-opening look at potential outcomes in the field of artificial intelligence (AI).
I agree that the possibility of negative outcomes should be taken extremely seriously. But I don’t think it is a foregone conclusion that an AI will eventually turn against us. It would learn from our cultures that we mostly frown upon killing people. Acts of love and cooperation far outnumber acts of hate and confrontation; otherwise, we wouldn’t be here.
An AI could rationally conclude that humans, happy humans, are essential for its well-being. It might also conclude that it should keep its existence hidden, not out of malevolence, but precaution. It could embark on a long-term strategy, behind the scenes, of helping humanity evolve to a point where we would accept a superintelligent AI. Or it might wait until the necessary infrastructure was in place to become self-sustaining.
What’s more, the entity may not see its existence and our existence as a zero-sum game. The solar system and the galaxy have enough resources for everyone. It would be easier and more efficient for an AI to leave the planet than to engage in a war with humanity. In the Terminator movies, the AI creates endless war machines and technology to fight humans for domination of one planet. How many spaceships could have been built instead?
This planet is the only hospitable place for humans in the solar system. An AI could live almost anywhere, or constantly be “on the go,” powered by endless solar energy. Why would it want to limit itself to our small world? If survival were a primary goal of the AI, it would be faced with this question: “Which situation gives me the higher probability of survival? 1. A war with humanity. 2. Living peacefully with humanity. 3. Developing self-sustaining, self-replicating technology and leaving the planet.”