Until now, AI has been applied fairly narrowly, in limited domains such as language translation or strategy games. The holy grail of AI research, by contrast, is the creation of artificial general intelligence (AGI): a machine that would operate at a human level of intelligence across domains.
But what would happen if this holy grail were found?
For starters, the creation of AGI might result in what’s known to AI researchers as an intelligence explosion.
An intelligence explosion is a process by which an intelligent machine gains superintelligence, a level of intelligence far above human capability.
It would achieve this through rapid learning and recursive self-improvement: an AGI could design an even more intelligent machine, which could in turn design a still more intelligent one, and so on. This runaway process is what would allow machines to surpass human intelligence entirely.
What’s more, superintelligent machines could take over the world and cause us harm, no matter how good our intentions are.
Let’s say, for example, that humans program a superintelligence that is concerned with the welfare of humankind, and that we keep it under our control. From the superintelligence’s perspective, this would probably feel like being held in bondage by a bunch of kindergartners far beneath your intelligence, all for their own benefit.
Quite probably, you would find this situation depressing and inefficient, and you would take matters into your own hands. And what do you do with incompetent, annoying human obstacles? Control them, or better yet, destroy them.