The implications of a super-intelligent AI
February 19, 2017
Imagine you are a superintelligent robot whose main goal is to fulfill a spaceship’s mission while, at the same time, keeping the true purpose of that mission from the crew. These two directives come into conflict, and now you are left with a difficult decision to make. What would you do? One particular artificially intelligent robot decided that the best way to resolve it was to kill the ship’s crew.
This familiar story is, of course, that of HAL from 2001: A Space Odyssey. Because it is fiction, you might think it isn’t something we should be concerned about. But as robots become more autonomous, the notion of computers facing such ethical decisions is moving out of the realm of science fiction and becoming a reality.
AI already outperforms human intelligence in several domains, as Nick Bostrom notes in his book “Superintelligence.” The challenge we now face is understanding the implications of AI outperforming humans in all domains before we actually create it. There are three issues to look at when it comes to superintelligent machines:
- The problems we might face with AI
- The ethics involved in creating AI, and the rights we would grant it once it exists
- Possible threats
The first thing we must examine is our approach and attitude towards AI. The idea of AI taking over is often thought of as “cool” and is rarely taken seriously, because we simply file it under science fiction. As philosopher and neuroscientist Sam Harris put it in a TED talk, “Famine and disease are not fun. Death by science fiction, on the other hand, is.” He suggests that this response is precisely what could allow AI to take over: when the time comes, we would not be able to muster an appropriate emotional response to the situation.
At the moment, many people consider the idea of superintelligence far-fetched and doubt we will ever achieve that level of technology. However, mathematician I.J. Good argued that since “the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines,” leading to an “intelligence explosion,” at which point the process would no longer be under our control.
Many people say that AI becoming evil and attacking us is unlikely, and I agree. What concerns me is what would happen if we got in the way of an artificially intelligent machine completing a given task. In a recent interview with The Independent, Stephen Hawking said that the main risk with AI is not malice but competence: AI will find the most efficient way to solve a problem, but its method may not account for our values or the ethics of its actions, destroying us in the process.
Another thing to consider is how we would deal with the rights of AI, and the ethics involved in doing so. If we create an AI that is more intelligent than us, or approaching our level, we must determine whether we consider it a real, conscious being. Robot rights are already highly debated today, even though we are nowhere near that level of AI. The question is whether we consider AI a genuine life form. At the moment, it seems the only thing separating an AI’s consciousness from a human’s is perhaps the lack of emotion.
Finally, no discussion of AI is complete without addressing evil AI. In today’s connected world, hacking is a serious threat to which everyone is vulnerable, and hacking an AI could have catastrophic consequences. Once AI becomes more intelligent than humans and essentially a higher power, there is nothing stopping it from destroying us if we get in the way of it completing a task or appear to be a threat to it in any way.
Our scientific curiosity will always push us forward, so the possibility of not developing such AI is, in my opinion, out of the question. As you can see, there is a wide array of issues we have to consider before AI reaches the stage where it is smarter than us, and we should perhaps develop a comprehensive set of guidelines to help us deal with them when the time comes.