The implications of a super-intelligent AI

February 19, 2017

Imagine you are a superintelligent robot whose main goal is to fulfill a spaceship’s mission, but who must at the same time keep the true purpose of that mission from the ship’s crew. These two directives conflict, and now you are left with a difficult decision to make. What would you do? One particular artificially intelligent robot decided that the best way to resolve the conflict was to kill the ship’s crew.

This familiar story, of course, is that of HAL from 2001: A Space Odyssey. Because it is a fictional scenario, you might think this isn’t something we should be concerned about. But as robots become more autonomous, the notion of computers facing such ethical decisions is moving out of the realm of science fiction and becoming a reality.

We’ve already established that AI outperforms human intelligence in several domains, as Nick Bostrom puts it in his book, “Superintelligence.” Now the challenge we face is understanding the implications of AI outperforming humans in all domains, before we actually create it. There are three issues we have to look at when it comes to superintelligent robots:

  1. The problems we might face with AI
  2. The ethics involved in creating AI, and how they would apply once we create it
  3. Possible threats

The first thing we must analyze is our approach and attitude toward AI. The idea of AI taking over is often thought of as “cool” and is rarely taken seriously, because we simply categorize it as science fiction. As the philosopher and neuroscientist Sam Harris put it in a TED Talk, “Famine and disease are not fun. Death by science fiction, on the other hand, is.” He suggests that this response is precisely why AI would be able to take over: when the time comes, we would not be able to have an appropriate emotional response to the situation.

At the moment, many people consider the idea of superintelligence far-fetched and don’t think we will ever reach that level of technology. However, the mathematician I.J. Good argued that since “the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines,” leading to an “intelligence explosion,” at which point the process would no longer be under our control.
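
Good’s point is, at bottom, about a feedback loop, and a toy model makes its shape easy to see. The sketch below is purely illustrative: the constants (HUMAN_LEVEL, HUMAN_PROGRESS, SELF_IMPROVE_RATE) and the idea of a single “design ability” number are assumptions, not estimates. While humans do the designing, ability creeps up linearly; once the machine can design its own successor, each gain feeds the next and growth compounds.

```python
# A toy model of I.J. Good's "intelligence explosion": once a machine's design
# ability exceeds the human level, each generation of machines designs the next,
# and improvement compounds instead of staying linear. The numbers are arbitrary
# illustration, not a prediction.

HUMAN_LEVEL = 1.0         # design ability of human engineers (arbitrary units)
HUMAN_PROGRESS = 0.05     # fixed improvement per generation while humans design
SELF_IMPROVE_RATE = 0.5   # fraction of its own ability an AI adds per generation

def run_generations(n: int, start: float = 0.2) -> list[float]:
    """Return machine design ability over n generations."""
    ability = start
    history = [ability]
    for _ in range(n):
        if ability < HUMAN_LEVEL:
            ability += HUMAN_PROGRESS               # humans drive progress: linear
        else:
            ability += SELF_IMPROVE_RATE * ability  # machine drives progress: compounding
        history.append(ability)
    return history

if __name__ == "__main__":
    for gen, a in enumerate(run_generations(30)):
        print(f"generation {gen:2d}: design ability {a:8.2f}")
```

The exact numbers do not matter; what matters is the change in shape, from slow, linear progress to runaway, compounding growth once the human bottleneck is removed.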

Many people say that the chances of AI becoming evil and attacking us are slim, and I agree. What concerns me is what would happen if we got in the way of an artificially intelligent robot completing a given task. In an interview with The Independent, Stephen Hawking said that the main risk with AI is not malice but competence: AI will find the most efficient way to solve a problem, but its approach may not account for our values or the ethics of its actions, destroying us in the process.
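
Put another way, the danger is a missing term in the objective rather than hostility. The minimal sketch below is only an illustration; the two plans, their scores, and the weight_on_harm parameter are invented. The optimizer simply returns whichever plan scores highest, and it is indifferent to harm unless harm is explicitly part of what it is asked to optimize.

```python
# A toy illustration of Hawking's "competence, not malice" point: an optimizer
# picks whichever plan scores best on the objective it is given, nothing more.
# The plans and numbers are invented for illustration.

plans = {
    "cautious plan": {"efficiency": 6, "harm_to_humans": 0},
    "reckless plan": {"efficiency": 9, "harm_to_humans": 8},
}

def best_plan(weight_on_harm: float) -> str:
    """Return the plan maximizing efficiency minus a weighted harm penalty."""
    return max(plans, key=lambda p: plans[p]["efficiency"]
                                    - weight_on_harm * plans[p]["harm_to_humans"])

print(best_plan(weight_on_harm=0.0))  # "reckless plan": harm never entered the objective
print(best_plan(weight_on_harm=1.0))  # "cautious plan": harm is now part of the objective
```

Nothing in the first objective is malicious; harm was simply never part of the score, and that is the whole problem.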

Another thing to consider is how we would handle the rights of AI, and the ethics involved in doing so. If we create an AI that is more intelligent than us, or nearly so, we must determine whether we consider it a real, conscious being. Robot rights are already hotly debated, even though we are nowhere near that level of AI. The question is whether we consider AI to be a real life form. At the moment, it seems the only thing separating an AI consciousness from a human one is perhaps a lack of emotion.

Finally, no discussion of AI is complete without addressing evil AI. In today’s technologically connected world, hacking is a serious issue that everyone is vulnerable to, and hacking into AI could have catastrophic outcomes. Once AI becomes more intelligent than humans and essentially a higher power, there is nothing stopping it from destroying us if we get in the way of it completing a task or appear to threaten it in any way.

Our scientific curiosity will always push us forward, so the possibility of not developing such AI is, in my opinion, out of the question. As you can see, there is a wide array of issues we have to consider before AI reaches a stage where it is smarter than we are, and we should perhaps develop a comprehensive set of guidelines to help us deal with them when the time comes.

10 Responses to The implications of a super-intelligent AI

  • elizabeth1315 says:

    You raise an interesting discussion concerning the implications of AI. Like you, I agree that it is unlikely that AI will attack humanity with evil and intentional actions. However, hacking into AI is a serious threat, even if we aren’t necessarily talking about a robot. While I’ve never thought about it in the context of a super-intelligent robot, I have talked with some people about the potential of hacking into self-driving cars. Hacking into self-driving cars would certainly have the potential to cause harm and disruption. People are aware of this problem, including Congress and the FBI, likely because we are moving closer and closer to a system of self-driving cars. Certainly, we are closer to that system than one with robots with AI. As a result, the need for a solution to the problem of cyber-security in these cars is on everyone’s radar, so multiple groups and companies are actively looking for a solution. From my understanding, we are reducing security threats through these efforts.

    I imagine, then, that as we move closer to superintelligence, we would take similar actions. If we can solve this problem for self-driving cars, we may be able to implement similar precautions and measures in superintelligent robots. Regardless, the possibility of hacking remains a threat and shifts the malicious intent from the robot to the human. Our interactions with robots may have far more dangerous repercussions than the actions of the robots themselves.

    • tiffjku says:

      I also found the second-to-last paragraph to be an important discussion point. There is the question of whether it is the human influence or the AI itself that will eventually become “evil.” I think this is why it is important to make sure there are systems and rules in place that can prevent such catastrophes caused by hackers who are looking to do harm. However, it seems that eventually someone will find a way through all the rules and barriers we set up. That is one of the biggest problems I see with technology: people, if their desire is strong enough, will always find a way. However, who is to say that the AIs will not make the first move? Take, for example, Avengers 2 and I, Robot (the movie). In both instances, it is the “sentient” AI that believes humans should be eradicated to save the Earth, or that humans are destroying themselves. Many films and books have adopted this plot, and what they say is somewhat true. Humans do kill and harm other humans for no reason at times. Humans have been slowly killing the Earth with toxic fumes, pollution, and an abundance of waste. I know for a fact that I contribute to this as well. However, it is interesting that the AIs in these stories have deduced that the best option for humans is either enslavement or extinction. Is that really the best option? Are there no other ways to help humanity as well as the Earth?

  • woodrume says:

    I really like Elizabeth’s point about self-driving cars, because I think software security is often taken for granted in the rush to update to newer, more convenient systems. However, there is certainly a flipside to this risk. Often (though certainly not always), new threats posed by technology are conceived by systems that also successfully eliminate other, more ubiquitous threats. Which threat, then, is greater: The threat of a self-driving car being hacked and causing danger to its riders, or the everyday threat of unintentional negligence and distraction from human drivers? Given the current emphasis being placed on car security, I am inclined to believe that the latter is more damaging to the whole of society. When it comes to the risks associated with technologies like voting systems and online banking, however, I become less sure.
    Our continual adoption of AI-driven systems is likely to continue introducing these trade-offs between the threat of everyday, (usually) unintentional human error and the threat of ill-intentioned human interference with technology. I think one of the most appropriate ways to be informed users/adopters of new technology is to stay educated (but not overly paranoid) on the threats that accompany each adoption. As frequent consumers of such technologies, we do have some control over their continued success and development.

  • bkcallander says:

    I thought it was particularly interesting how Bostrom defines superhuman intelligence. From a technical standpoint, he largely spoke about an AI being “superintelligent” if it could outperform a human at a specific task. You quote Stephen Hawking in saying that the real danger is human inadequacy rather than robot maliciousness. I’m mostly curious because, most likely, AI will be created to accomplish human goals, which stem (for the most part) from biological imperatives and curiosities. Though they might outperform humans in any given category, how would superintelligent machines find the desire to achieve greater goals? They might simply be iterating better and better methods of chasing human endeavors.

  • kbyron94 says:

    I think your point about ethics is very important as it relates to how we treat machines and how machines treat humans. However, I think there is another element of ethics involved as well. How do we make sure that the benefits or costs of machines are spread evenly throughout society and not concentrated among certain groups? If machines are able to replace lower-level jobs that we have today, we will be leaving groups of people without an income or occupation. Ethically, it seems, we would need to provide some sort of training or opportunity for these people to move into other professions. This is already happening with automated calls, automated ordering at fast-food restaurants, or self-checkout machines. Many companies will argue for the economic benefits of machines, which may be able to produce results that require hours of paid labor from humans. Yet, the consequences of these choices are severe for those that are replaced. Therefore, beyond the ethical considerations of machine rights or machine interactions with humans, there is a third type of ethical consideration: how humans treat and value each other with the emergence of this new kind of technology.

  • Emmie Kline says:

    Your comment about the ethics we should adopt regarding AI reminded me of something Bostrom wrote in the Preface to Superintelligence. On page V, he wrote, “If some day we build machine brains that surpass human brains in general intelligence, then this new superintelligence could become very powerful. And, as the fate of the gorillas now depends more on us humans than on the gorillas themselves, so the fate of our species would depend on the actions of the machine superintelligence.” I found this observation from Bostrom to be incredibly profound and true; if we create a machine with superintelligence, that machine will create an even more intelligent machine shortly after. This process could potentially continue exponentially, until it reaches some sort of limit, if such limit exists. I think, at this point, ethics are out of our control, much like gorillas are not included in human ethical decision-making regarding gorillas’ well being. Once we reach the point of above-human intelligence, like many others such as Turing have argued previously, many decisions may be out of our hands, instead left to infinitely more intelligent computers.
    Additionally, your post reminded me of one of Asimov’s early science fiction stories, “The Evitable Conflict.” In it, the Coordinator travels to the world’s different regions to investigate slight irregularities in production. He thinks the computers running the world might be malfunctioning, when in reality the machines are controlling the situation at hand to reduce threats and maintain order. If we reach the point of superhuman intelligence, I think this scenario becomes very real. To me, that is a strange thing to admit, because I considered the notion very far-fetched and unrealistic when reading this short story at the beginning of the semester. While I am not entirely convinced that it is inevitable, I now see and understand its possibility and the gravity of that reality.

    • maplesmm says:

      Emmie, that is a great parallel. I thought Bostrom’s point about humans’ near-total neglect of gorillas’ well-being was thought-provoking, but I was a little taken aback by his solution to facing a similar problem with AI. He suggested that we program our first generations of AI to protect human values, but I wonder which values those would be. Within the US, just about every individual has a different idea as to what their rights, the government’s rights, animal rights, religious rights (the list goes on and on) should be. And when the frame of view is extended to an international perspective, what happens then? Do we program AI that takes into consideration the differences in cultural norms? Or does it act as a great equalizer, placing all of humanity on one common ground – a slightly (or not so slightly) lower ground than the AI itself? I don’t know what the future will look like, but I am pretty certain it will involve AI in various forms, and I am very excited to see where that takes humanity.

  • Your point about the possibility of an AI entity being hacked raised several questions for me which we, as a society, would likely need to find answers to before we are able to confidently proceed with the pursuit of AI with superhuman intelligence. First, to what extent should we firewall the source code for AI from outsiders? The possibility of a malicious human hacker manipulating AI for nefarious purposes is frightening. However, on the other hand, a super-intelligent AI which is too isolated may make decisions which would harm us, with our having little ability to stop it. I point to the example of the trading computers in Bostrom’s book which caused the 2010 flash crash. While errors in their programming compounded and interacted with each other, causing a significant loss in wealth, a fail-safe was triggered, halting their actions. This seems to be one of the major advantages of AI: that it can be shut off if needed. Therefore, it seems to me that while some level of protection from outsiders must be required, a port of entry for a fail-safe would be wise. Perhaps this fail-safe could cease the operations of the AI if it notices the presence of a hacker as well.
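
    (The flash-crash fail-safe is essentially a circuit breaker: an outer check that halts a system once its behaviour leaves preset bounds. The rough sketch below is only an illustration of that idea; the Monitor class, its thresholds, and the notion of an “action magnitude” are assumptions I am inventing, not a real safety mechanism.)

    ```python
    # A minimal sketch of a circuit-breaker fail-safe: an outer monitor halts the
    # wrapped system when it acts too often or too forcefully in a single step,
    # the software analogue of the trading halt that ended the 2010 flash crash.
    # Thresholds and the idea of an "action magnitude" are invented for illustration.

    class FailSafeHalt(Exception):
        """Raised by the monitor to stop the wrapped system."""

    class Monitor:
        def __init__(self, max_action: float, max_actions_per_step: int):
            self.max_action = max_action
            self.max_actions_per_step = max_actions_per_step

        def check(self, actions: list[float]) -> None:
            # Halt on abnormal volume or magnitude of actions in one step.
            if len(actions) > self.max_actions_per_step:
                raise FailSafeHalt("too many actions in one step")
            if any(abs(a) > self.max_action for a in actions):
                raise FailSafeHalt("action outside permitted bounds")

    monitor = Monitor(max_action=10.0, max_actions_per_step=100)
    try:
        monitor.check(actions=[2.0, 55.0])   # 55.0 exceeds the permitted bound
    except FailSafeHalt as stop:
        print(f"system halted: {stop}")
    ```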

    On a related note, to what extent should the AI be isolated from its own source code? Often, computer programs “learn” by making their own code more efficient through the use of metaprogramming. However, we may not be happy with the changes a superintelligent AI would make to itself. It may decide that the fail-safe is unnecessary, or alter itself to the point where we no longer understand it. Imagine having the ability not only to rewire your brain, but also to recode your DNA. It stands to reason that a truly superintelligent AI could easily become better at programming than we are, thereby isolating itself and altering itself to a dangerous extent.

    I’m not sure who would decide the answers to these questions, but it seems to me that we should create structures and procedures to resolve these thorny dilemmas, if they do not exist already.

    • Alison Maas says:

      Your point about computers “learning” to make their own code more efficient is a poignant one in discussing the dangers of AI, and it harks back to the example of 2001: A Space Odyssey. Bostrom, in his book, gives the example of machines that are able not only to outperform humans, but also to create other machines better than humans could. Still, what we keep circling back to is that the safety factor would be human intervention. As you point out, AI can be “shut off if needed.” Yet to what degree does this act of precaution limit the extent of curious exploration? Again, we have been circling the typical dilemma of Daedalus and Icarus: yes, if you soar too close to the sun you could plummet to your death, but if you never try, how will you know where that limit lies? As you state, this act of self-programming could mean a dangerous outcome for humans. Still, it could also mean the creation of even more efficient programs.
