
Coming Soon: Superintelligence

Robots That Are Smarter Than People?

In Daniel Wilson’s book Robopocalypse, Archos is the supremely intelligent sentient being unwittingly unleashed upon the world. He is self-aware, self-improving, and bent on his own self-preservation and on destroying humankind.

The resulting story is a snapshot of artificial intelligence at its worst. While Wilson’s book is only fiction for now, Wilson, a card-carrying roboticist with a PhD, knows that a superintelligent being is not just an idea spawned in a lab but a real possibility in today’s world.

As a society, we all experience the multiple, daily uses of artificial intelligence. Our highways, electrical grid and manufacturing systems all utilize some form of AI. Apple’s Siri and Google Maps get us where we want to go and find us that perfect restaurant. The global financial market is so dependent upon artificial intelligence that it barely relies upon its floor traders anymore. The military, utilizing technologies developed in part through DARPA (the Defense Advanced Research Projects Agency), is now better able to fight from a distance by sending in drones and may someday even be able to rely on mechanized fighters. Autonomous cars may soon dominate our streets. And as we become more and more reliant on these uses of artificial intelligence, you can bet that they are not only being improved upon in our labs, but that newer and more tech-reliant methods are being developed.

We know that Moore’s Law has for years driven the rapid development of technology. In fact, David Chalmers, in his paper The Singularity: A Philosophical Analysis, shares Machine Intelligence Research Institute fellow Eliezer Yudkowsky’s version of a time scale initially set out by AI researcher Ray Solomonoff: “Computing speeds double every two subjective years of work. Two years after Artificial Intelligences reach human equivalence, their speed doubles. One year later, their speed doubles again. Six months - three months - 1.5 months… Singularity.” (Chalmers, 2010)
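A quick back-of-the-envelope way to see why this schedule ends in a singularity: each doubling takes two subjective years, but because the machine runs twice as fast after every doubling, the objective time between doublings keeps halving, and the whole series converges to a finite date. The sketch below is my own illustration of that arithmetic, not something from Chalmers’s paper.

```python
# Toy illustration of the Solomonoff/Yudkowsky schedule quoted above:
# every speed doubling takes two subjective years, and each doubling halves
# the objective time the next two subjective years require, so the objective
# intervals form the geometric series 2 + 1 + 0.5 + 0.25 + ...

def objective_years_to_singularity(first_interval=2.0, doublings=60):
    """Total objective time elapsed over successive speed doublings."""
    total, interval = 0.0, first_interval
    for _ in range(doublings):
        total += interval
        interval /= 2.0  # the next doubling arrives twice as fast
    return total

print(objective_years_to_singularity())  # converges toward 4.0 years
```

Under this toy schedule, the entire runaway takes only about four objective years after human equivalence is reached.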

We can expect, as Chalmers says, that the speed explosion and the intelligence explosion, while logically independent of each other, could conceivably happen at the same time. Of course, we have seen other revolutions, the Agricultural Revolution and then the Industrial Revolution, both of which changed society as dramatically as the current Technology Revolution, unfold gradually. But the Technology Revolution has progressed rapidly over the last fifty years. In his book Superintelligence: Paths, Dangers, Strategies, Nick Bostrom predicts that if we continue at the present rate of growth, “the world will be some 4.8 times richer by 2050 and about 34 times richer by 2100 than it is today.” This type of growth, Bostrom says, we now presume to be “ordinary.” (Bostrom, 2014)
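For context, and purely as my own arithmetic rather than Bostrom’s, those multiples correspond to steady compound growth of roughly four to five percent a year, assuming a 2014 baseline:

```python
# Back-of-the-envelope check of the growth rate implied by Bostrom's figures
# (my own arithmetic; assumes 2014 as the baseline year).

def implied_annual_growth(multiple: float, years: int) -> float:
    """Constant annual growth rate that yields `multiple` after `years` years."""
    return multiple ** (1.0 / years) - 1.0

print(f"4.8x richer by 2050: {implied_annual_growth(4.8, 2050 - 2014):.1%}/yr")   # ~4.5%
print(f"34x richer by 2100:  {implied_annual_growth(34.0, 2100 - 2014):.1%}/yr")  # ~4.2%
```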

The term “technological singularity” was coined by science fiction writer and mathematician Vernor Vinge and later popularized by futurist Ray Kurzweil. The Singularity on its own assumes that machines have reached the point where they have become as smart as man. But contained within the idea of the Singularity is the idea of a rapid technological explosion, often called an intelligence explosion. Much like the Big Bang in physics, embedded in this idea of rapid expansion is the notion that machines will exceed the intelligence of man, with the ability to become smarter, and perhaps more self-aware and self-improving. It is something Bostrom calls “superintelligence,” which he defines as “any intellect that greatly exceeds the cognitive performance of humans in virtually all domains of interest.” (Bostrom, 2014)

I.J. Good, Alan Turing’s chief statistician at Bletchley Park, sums it up best: “Let an ultra-intelligent machine be defined as a machine that can far surpass all the intellectual activities of any man, however clever…an ultra-intelligent machine could design even better machines; there would then unquestionably be an ‘intelligence explosion,’ and the intelligence of man would be left far behind. Thus the first ultra-intelligent machine is the last invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control.” (Bostrum, 2014)

Sounds pretty terrifying, right? So terrifying that prominent thinkers like Stephen Hawking are concerned, and, more recently, Elon Musk of Tesla and SpaceX tweeted that “artificial intelligence might be more dangerous than nukes.” (Augenbraun, 2014)

But why exactly does superintelligence carry this perception of high risk? Mostly because of the high degree of uncertainty that surrounds it, says Dr. Stuart Armstrong, research fellow at the Future of Humanity Institute, University of Oxford, and author of Smarter than Us: The Rise of Machine Intelligence. “Of all the risks we consider, AI has the highest uncertainties by far,” said Armstrong. “We don’t know what timelines we’re talking about, we don’t know what form AI could take, and we don’t know whether consciousness will or won’t be necessary.”

But whether or not AI is as risky as nuclear war, pandemics or other threats, Armstrong stated, “AI has a unique risk profile: all other risks could wipe out a large proportion of humanity (90% or more), but only AI allows for total extinction. This is because all other risks are somewhat self-limiting (in a nuclear winter, not all the planet will be uninhabitable, pathogens are harder to spread as the population drops, etc.) while a hypothetical dangerous AI would find it easier to remove remaining humans as there are less of us.” So while nuclear war may cause more deaths at a single stroke, AI, says Armstrong, seems the most likely scenario for complete human extinction, making it the greater threat.

Assuming that superintelligence is as great a threat as some propose, what exactly are the technological paths that could get us there? In his book, Bostrom examines and weighs the methods that could move us toward superintelligence. One path is the evolutionary process that, we believe, produced the intelligence we now carry: expanded through genetic enhancement or selection, it might push our ability to think beyond what we can do now. Evolution has clearly done this before, albeit over vast stretches of time, but it is possible that something similar could happen much faster through evolutionary genetic algorithms. (Dewey, 2014) While these paths are clearly feasible, Bostrom says, the potential of machine intelligence “is vastly greater than that of organic intelligence.” (Bostrom, 2014)
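For readers unfamiliar with the term, a genetic algorithm simply mimics selection and mutation in software. The toy sketch below is my own illustration, with an arbitrary fitness function standing in for “intelligence”; it is not drawn from Bostrom’s book.

```python
# Minimal genetic-algorithm loop: evaluate, select the fittest, mutate, repeat.
import random

def fitness(genome):
    # Toy stand-in objective: higher sum = "fitter" genome.
    return sum(genome)

def mutate(genome, rate=0.1):
    return [g + random.gauss(0, 1) if random.random() < rate else g for g in genome]

population = [[random.gauss(0, 1) for _ in range(10)] for _ in range(50)]
for generation in range(100):
    population.sort(key=fitness, reverse=True)
    parents = population[:10]                      # selection
    population = [mutate(random.choice(parents))   # variation
                  for _ in range(50)]

print(f"best fitness after 100 generations: {max(map(fitness, population)):.1f}")
```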

In “Transcendence,” a recent movie starring Johnny Depp, the plotline toyed with the idea of brain emulation, making the idea seem, well, possible. This idea of reproducing the brain in a machine plays into one of the most widely accepted theories in artificial intelligence: that the brain is essentially a computational device. With its one hundred billion neurons and accompanying synapses, the brain is constantly processing information in and out, second by second, through the conversion of nerve signals. As a result, some scientists, Stephen Hawking among them, have argued that “there is no reason to believe that the brain is the most intelligent computer.” (Dewey, 2014) Still, these are what Bostrom calls radical forms of intelligence amplification.

For Daniel Dewey, a fellow at the Future of Humanity Institute, University of Oxford, these so-called “radical” forms are, in fact, quite radical. “The impacts of generally applicable AI or whole-brain emulation seem like they should probably be somewhat north of the impacts of the atomic bomb, the computer, or perhaps the industrial revolution as a whole,” said Dewey. Even from an economic standpoint, says Dewey, the idea of putting “human-like cognitive abilities into software is massively disruptive,” on a scale beyond anything we have seen before.

At present, we don’t know enough about the brain to map all of its circuits, or even to completely understand how neurons and our other brain cells actually work. And the technology to scan and then interpret the brain is, at this point, insufficient to give us the detailed information needed to create an emulation, to say nothing of the computing power necessary to run one. So we are safe… for now.

Improvements in software algorithms are another route to superintelligence. In a TEDx Vienna talk he gave in November 2013, Dewey noted that, historically, just as much improvement has come from new software as from new hardware. As a result, we may not actually see any physical changes during an intelligence explosion, says Dewey; we may only see “a series of programs writing more capable programs.” He goes on to say that even though we don’t know exactly how fast these programs could progress, “this does mean that an intelligence explosion could happen at software speed, and in a self-contained way, and without needing new hardware.” Such a process, says Dewey, could make these programs far more capable at intellectual tasks than humans are. (Dewey, 2013)
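To make the “software speed” point concrete, here is a deliberately crude toy model, my own sketch rather than Dewey’s: assume each rewrite multiplies a program’s capability by a fixed factor, and that more capable programs complete their next rewrite proportionally faster. Both numbers are pure assumptions, chosen only to show how improvement accelerates on fixed hardware.

```python
# Toy model of "programs writing more capable programs" on fixed hardware.
# The 30-day baseline and 1.5x gain per rewrite are arbitrary assumptions.

capability = 1.0      # arbitrary units; 1.0 = the starting system
elapsed_days = 0.0

for rewrite in range(1, 11):
    elapsed_days += 30.0 / capability   # more capable systems rewrite themselves faster
    capability *= 1.5                   # assumed capability gain per rewrite
    print(f"rewrite {rewrite:2d}: capability {capability:6.1f} after {elapsed_days:5.1f} days")
```

In this sketch, each rewrite arrives sooner than the last, and the total time spent converges rather than stretching out indefinitely, which is the intuition behind a self-contained software takeoff.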

While the idea of an intelligence explosion is strictly theoretical at this point, it is what tends to alarm people most when these issues are broached. Why? Because it could happen so quickly that we may not see it coming or even have a plan in place to manage it. Imagine machines replicating intelligently until they are smarter than us, and consider that they already surround us in every aspect of our lives. This moment is what is considered an “intelligence explosion,” something that James Barrat writes about in his book Our Final Invention: Artificial Intelligence and the End of the Human Era.

In the book, Barrat is pretty clear that he’s worried, really worried, about the continued unchecked development of artificial intelligence in our lives. He is convinced that most technologists, scientists and researchers believe it is inevitable that machines will come to govern our lives, even if they hold varying opinions on what that will be worth to society. Among the many predictions in circulation, some even have it happening within their lifetimes. But conversations about the possibility that this is a risky event are few and far between, and Barrat believes this “wait and see” attitude is harmful. Take a look around, he says: computers are everywhere.

“Computers already undergird our financial system, and our civil infrastructure of energy, water and transportation. Computers are at home, in our hospitals, cars, and appliances. Many of these computers, such as those running buy-sell algorithms on Wall Street, work autonomously with no human guidance…We get more dependent every day. So far it’s been painless.” (Barrat, 2013)

Barrat believes that most intelligent systems are, by definition, self-aware and goal-seeking, and that they will therefore seek to improve themselves based on the values we program and the drives they are given. (Barrat, 2013)

Stuart Armstrong, too, believes this should be our main area of focus, since the rapid manifestation of intelligence is really the main overall risk. “Even without increased intelligence,” said Armstrong, “AI could develop great power by simply copying itself (possibly millions of times), running those copies at great speed, training them in different areas, networking them together, etc.” Of course, where it will manifest itself is very unclear, says Armstrong. “I would bet, currently, on AI manifesting itself in the business world, just because that is where most of the money is being spent. But I wouldn’t rule out governmental experiments or a lucky academic with an interesting algorithm.”

In his book, Barrat shares the philosophy of Steve Omohundro, mathematician, physicist and founder of the think tank Self-Aware Systems. Omohundro believes these systems are driven by four basic drives, analogous to biological drives, that they will develop in order to avoid predictable problems. The first is efficiency: a system will make the most of existing resources by modifying, writing, creating and balancing materials and structures. Second, systems will be keenly interested in self-preservation, meaning they will use existing resources to determine whether to shut off or to continue making copies of themselves. Third, a system will need to gather whatever resources it requires to achieve its programmed goals, quite possibly the same resources that we humans need. Finally, a system will need creativity so that it can generate new ways to be more efficient and to use resources more effectively. These drives, Barrat reports, all serve a system’s self-preservation, which could make things harder for mankind, especially if mankind gets in the machine’s way. (Barrat, 2013)

In his book, Stuart Armstrong goes just a little bit farther, saying that “AI will eventually be able to predict any move we make and could spend a lot of effort manipulating those who have ‘control’ over it.” He goes on to say that, to make AI safe, we have to program it to be safe. “We need to do this explicitly and exhaustively; there are no shortcuts to avoid the hard work. But it gets worse: it seems we need to solve nearly all of moral philosophy in order to program a safe AI.” (Armstrong, 2014)

How do you program AI to be safe, especially once it reaches the level of superintelligence? Well, you have to do it before you get to superintelligence, but the question is still how, says Nick Bostrom. “We want AI that is safe, beneficial, and ethical, but we don’t know exactly what that entails,” said Bostrom. “If we look back on earlier historical epochs, we see blind spots in their moral awareness—in the practice of slavery and human sacrifice, for instance, or the condoning of manifold forms of brutality and oppression that would outrage the modern conscience.” We might like to think, says Bostrom, that we have learned from these experiences, but “surely it is far more probable that we still labor under some grave misconception and that our understanding of our potential for realizing value remains incomplete.” Creating a hard-and-fast set of unalterable rules to govern our futures, says Bostrom, may just “cement our present errors in place.” One way to escape this predicament, Bostrom proposes, is “indirect normativity.” Indirect normativity means that instead of having to spell out the optimum safe scenario, “we would build the AI’s motivation system in such a way that it contains a pointer to our values,” said Bostrom.

What this essentially means is that we would give the AI the final goal of doing what we would have asked it to do if we had achieved perfection, or, as Bostrom puts it, “utopia.” Then, says Bostrom, a superintelligence would use its superior abilities to estimate this specified ideal, while human values still define what actually counts as a solution.

Another form of constraint, Chalmers suggests in his paper, would be to develop artificial intelligence inside a virtual world, implementing the systems separately so that we can test them in an environment that is not real. Chalmers suggests “one sort of process simulating the physics of the world, and another sort of process simulating agents within the world.” It might then be possible to study artificial intelligence and superintelligence without worrying about setting off an intelligence explosion in the real world. (Chalmers, 2010)
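As a purely schematic illustration of Chalmers’s separation (my own sketch; it says nothing about whether such a sandbox could actually be made leak-proof), one component could step the simulated world’s physics while another runs the agents, with the world’s state as their only channel:

```python
# Schematic sketch of Chalmers's two-process idea: a simulated world and
# simulated agents that can only perceive and act through that world.

class SimulatedWorld:
    """One sort of process: steps a closed, simulated environment forward."""
    def __init__(self):
        self.state = {"tick": 0, "signal": 0.0}

    def step(self, actions):
        self.state["tick"] += 1
        self.state["signal"] += sum(actions)   # toy "physics" update
        return dict(self.state)                # agents only ever see a copy

class SimulatedAgent:
    """Another sort of process: an agent whose inputs and outputs stay inside the world."""
    def act(self, observation):
        return 1.0 if observation["signal"] < 10 else 0.0

world = SimulatedWorld()
agents = [SimulatedAgent() for _ in range(3)]
observation = world.step([])
for _ in range(5):
    actions = [agent.act(observation) for agent in agents]
    observation = world.step(actions)          # the only channel between the two
print(observation)
```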

Some researchers propose constraints like building AI systems incrementally and testing as they go. Daniel Dewey talks about acting incrementally to avoid misuse; by this he means that scientists and engineers who want to help deal with the possible catastrophic risk of AI can do so by focusing, for relatively short periods of time, on one aspect of the problem or another. This allows for progress, says Dewey, that future scientists and engineers can build upon, eventually solving the problem in ways that might not originally have been anticipated. “For those of us more invested in the problem [like the work at the Machine Intelligence Research Institute] in the long term, there will be more broad strategic work to do,” said Dewey. “It is still valuable to take a small part of the problem and see what concrete progress we can make on it now. The main point is that we don’t have to anticipate everything about how the problem is eventually solved in order to make progress now.”

The idea of “Friendly AI” as a proposed solution gives systems an algorithm that anticipates potential risk and then confines that risk to protect humankind. Coined by Eliezer Yudkowsky, a research fellow at the Machine Intelligence Research Institute, the term “Friendly AI” refers to the type of artificial intelligence that will “preserve humanity and our values forever.” (Barrat, 2013) Outlined in his paper Creating Friendly AI: The Analysis and Design of Benevolent Goal Architectures, Friendly AI is basically defined as artificial intelligence that, whatever its goals and however many times it self-improves, remains neither hostile nor ambivalent towards humans, a sort of Asimovian ideal. Of course, that understanding must evolve with us, changing as we change and anticipating those changes. (Barrat, 2013) So in essence, the intention is to mimic a set of behaviors that you and I would deem “friendly.” And of course, it most certainly must be created once AI becomes “computationally feasible.” (Yudkowsky, 2001)

But how do we define exactly what “friendly” behavior is? Ask any futurist, or even someone on the street, and all of their definitions will be different. So the question becomes: what sort of quantifiable way of measuring can we employ to make sure that Friendly AI is, in fact, friendly? For Stuart Armstrong, the issue is basically a “constraint problem,” meaning that we are much clearer about the direction of risk than about the magnitude of risk we will face. “We can take a lot of actions (ensure that AI maintains a stable goal structure, improve how it deals with copies of itself, constrain it physically, etc.) that we are pretty sure will decrease the risk, but it’s very hard to say what the final risk would be,” said Armstrong. “So for the foreseeable future, we’ll be talking about making AI ‘safer’ rather than ‘safe.’”

Finally, Nick Bostrom uses the term “common good principle” when it comes to designing safe superintelligence. Under this principle, superintelligence would be developed only for the “benefit of humanity and in the service of widely shared ethical ideals.” The idea is that widespread adoption would mean everyone shares in the benefits of what superintelligence creates. Bostrom sees it being adopted first as a voluntary moral commitment within organizations involved in machine intelligence, then endorsed by other entities, ultimately becoming law or treaty. (Bostrom, 2014)

Interestingly, right now, not many folks are concerned about the prospect of artificial intelligence or superintelligence. We like our devices and their ease and convenience. Mostly, we are enjoying the benefits, oblivious to the potential for risk.

Unfortunately, few voices, especially from within the AI community, are expressing concern. Daniel Dewey thinks we are seeing so little concern because those in a position to be concerned, futurists, scientists, engineers, either don’t believe catastrophic AI risk is likely or believe it is too far in the future to worry about. Stuart Armstrong thinks concern will come when someone tells us to be concerned, but notes that AI carries extremely high uncertainty. “The risk will happen at some time, possibly without a slow ramp-up period,” said Armstrong. Bostrom, despite this lack of discussion, appears hopeful. “There is now a very small but growing group of researchers who are starting to do serious research on some of these issues,” said Bostrom. Hopefully, the prospect of machine intelligence becoming superintelligent will soon spark some interest, and some concern, from the community of AI researchers, engineers and futurists who are tumbling down the proverbial hill without knowing exactly what they may hit at the bottom.

Bibliography
Armstrong, S. (2014). Smarter than Us: The Rise of Machine Intelligence. Berkeley, CA: Machine Intelligence Research Institute.
Augenbraun, E. (2014, August 4). Elon Musk: Artificial intelligence may be more dangerous than nukes. Retrieved from CBS News: http://www.cbsnews.com/news/elon-musk-artificial-intelligence-may-be-more-dangerous-than-nukes/
Barrat, J. (2013). Our Final Invention: Artificial Intelligence and the End of the Human Era. New York: Thomas Dunne Books/St. Martin’s Press.
Bostrom, N. (2014). Superintelligence: Paths, Dangers, Strategies. Oxford University Press.
Chalmers, D. J. (2010). The Singularity: A Philosophical Analysis. Journal of Consciousness Studies, 7-65.
Dewey, D. (2013, November 2). Future of Humanity, Oxford University. Retrieved from TEDx Vienna featuring Daniel Dewey: http://www.danieldewey.net/tedxvienna.html
Dewey, D. (2014, July 18). Explainer: What is superintelligence? Retrieved from The Conversation: http://theconversation.com/explainer-what-is-superintelligence-29175
Yudkowsky, E. (2001). Creating Friendly AI 1.0: The Analysis and Design of Benevolent Goal Architectures. Machine Intelligence Research Institute.

Interviews
Nick Bostrom. Email interview, September 22 and September 24, 2014.
Stuart Armstrong. Email interview, September 19, 2014.
Daniel Dewey. Email interview, September 23, 2014.
