Superintelligence - Critical summary review - Nick Bostrom

Technology & Innovation

This microbook is a summary/original review based on the book: Superintelligence: Paths, Dangers, Strategies


ISBN: 1501227742

Publisher: Oxford University Press

Critical summary review

“AI,” the acronym for artificial intelligence, is currently the hottest buzzword in tech. But is AI the future of our species, or its inevitable demise? Swedish philosopher Nick Bostrom investigates this question thoroughly in “Superintelligence,” one of the finest books on the subject. So, get ready to discover whether superintelligent robots will roam the earth sometime during your lifetime and, if so, what programmers should do now to prevent them from annihilating humanity in the future!

Growth modes and big history

For most of its history, humanity developed at an extremely slow pace. “A few hundred thousand years ago,” says Bostrom, putting things into perspective, “growth was so slow that it took on the order of one million years for human productive capacity to increase sufficiently to sustain an additional one million individuals living at subsistence level. By 5,000 B.C., following the Agricultural Revolution, the rate of growth had increased to the point where the same amount of growth took just two centuries. Today, following the Industrial Revolution, the world economy grows on average by that amount every ninety minutes.”

This kind of growth rate seems fantastic by any standard. Just three centuries ago, labor was almost exclusively manual; nowadays, there are robots manufacturing self-driving electric vehicles 24 hours a day. Ever since its advent about six decades ago, computing technology has been following the famous Moore’s law, doubling in power and performance every 18 months or so. As famous physicist Michio Kaku recently pointed out, that has led to your smartphone today having more computer power than all of NASA did back in 1969, when it put two astronauts on the moon! 
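The Moore’s-law claim above can be checked with back-of-the-envelope arithmetic. The sketch below is illustrative only: the 18-month doubling period comes from the text, while the start and end years are assumptions chosen to match the 1969 moon-landing comparison.

```python
# Cumulative Moore's-law growth: performance doubling every 18 months,
# as the summary describes. Start/end years are illustrative assumptions.

def moores_law_factor(years: float, doubling_months: float = 18.0) -> float:
    """Return the cumulative performance multiplier after `years`."""
    return 2.0 ** (years * 12.0 / doubling_months)

# From the 1969 moon landing to 2023 is 54 years, i.e. 36 doublings:
factor = moores_law_factor(2023 - 1969)
print(f"~{factor:.2e}x more computing power")  # on the order of 10^10
```

Thirty-six doublings yield a factor of roughly 7×10¹⁰, which is why a pocket-sized phone today dwarfs the combined computing power NASA had in 1969.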

In 1958, an article published by the IBM Journal of Research and Development stated, unequivocally, that “if one could devise a successful chess machine, one would seem to have penetrated to the core of human intellectual endeavor.” Just four decades later, IBM’s Deep Blue defeated the world chess champion Garry Kasparov. Nowadays, there exists a self-trained computer program called AlphaZero that can soundly beat every traditional chess engine, each of them substantially better than Deep Blue. Even more frighteningly, IBM also developed a question-answering computer system called Watson that has already defeated the two all-time-greatest human “Jeopardy!” champions, Ken Jennings and Brad Rutter.

Though this fast exponential trend of growth will inevitably plateau once fundamental physical limits are reached, there is so much room for development and advances in computing technology that it’s difficult not to ask the following two questions: will machines surpass humans in general intelligence? If yes, what will happen next – will artificial agents save or destroy us?

Hope, despair and great expectations

There is a big difference between what IBM’s Deep Blue and Watson are capable of and what humans can do. As great as these two machines have proven themselves to be at chess and Jeopardy, they are incapable of doing practically anything else. Unlike them, humans have a broad mental capacity and can acquire and apply knowledge and skills in a variety of areas. This is called “general intelligence” and is something all humans possess. The same cannot be said of even the most advanced humanoid robots in existence today. 

So, when one speaks of artificial general intelligence (AGI), one still speaks of something entirely hypothetical. Superintelligence, which Bostrom defines as “any intellect that greatly exceeds the cognitive performance of humans in virtually all domains of interest,” is even more remote. Yet, debates about superintelligent machine brains are hardly new. Science fiction and speculative literature aside, one can trace their beginnings to a 1956 six-week summer workshop at Dartmouth, organized by John McCarthy – the man who coined the term “artificial intelligence” – and attended by about 20 mathematicians and scientists who are now widely recognized as founding figures of the field.

A lot has happened since then, with the field experiencing several seasons of hope followed by seasons of despair. After a string of failures and unkept promises, interest in AI dwindled to the point of disregard during the mid-1970s and again in the early 1990s. However, recent developments in neural networks and deep learning have made many researchers quite optimistic about the prospects of superintelligence. According to a 2009 survey, experts put a 90% probability on “human-level machine intelligence” (HLMI) being developed by 2075. If so, our children may have to live in a world where machines are smarter than they are. The problem?

Well, humans evolved to be the dominant species on the planet not because they are faster or stronger than, say, gorillas, lions and whales, but because they are much smarter. Hence, once machines surpass our children in general intelligence, will our children’s fate depend on themselves or on a few algorithms our generation developed? After all, if humans decide there should be no gorillas and lions, there is nothing the gorillas and lions can do to stop us from exterminating them. Are we heading toward a “Terminator”-like, humanless future? If so, when is Judgment Day? And where are all the John Connors?

Paths to superintelligence

Before we turn to speculations about what we should do to stop superintelligent robots from destroying the human race, let us see if it is at all possible for us to create such superintelligent machines. There are several conceivable paths that can lead humanity there:

  • Artificial intelligence (AI). Over a period of millions of years, sheer blind evolution produced human-level general intelligence. With foresight, intelligent human programmers should be able to produce AI much faster. An extremely simple neuron model uses about 1,000 floating-point operations per second (FLOPS) to simulate one neuron in real time. One of the world’s fastest supercomputers at the time of the book’s writing, China’s Tianhe-2, provided about 3.39×10¹⁶ FLOPS. However, to recapitulate the entire course of human evolution within the span of a single year, we would need a supercomputer in the range of 10³¹–10⁴⁴ FLOPS. So, we’re still not there, but we’re getting closer. In fact, two highly respected thinkers in the field, philosopher David Chalmers and roboticist Hans Moravec, argue that human-level AI is “not only theoretically possible but feasible within this century.”
  • Whole brain emulation (WBE). In the case of AI, intelligent humans try to devise algorithms that would simulate the evolution of the human brain, as best exemplified in Alan Turing’s 1950 thought experiment: “Instead of trying to produce a program to simulate the adult mind, why not rather try to produce one which simulates the child’s?” In what is called whole brain emulation, also known as “uploading,” programmers try to instead replicate the exact computational structure of a biological brain. While AI strives to be a model analogous to human intelligence, as far as inspiration goes, WBE is “barefaced plagiarism.” However, WBE has an advantage over AI since it doesn’t require scientists to understand the inner workings of the brain – it just needs them to duplicate all of its neurons and their interneural connections.
  • Biological cognition. A third path to superintelligence may be the enhancement of our existing biological brains. There are many ways humanity could achieve this: better nutrition, smart drugs that improve memory and concentration, and even “Gattaca”-like embryo selection for preferred traits. Genetic engineering, CRISPR modifications and even human reproductive cloning may develop far faster than AI or WBE, and make humans superintelligent long before machines achieve this.
  • Brain-computer interfaces. There is also the very real possibility of humans improving their biological brains by implanting machine parts, a process that can be justly referred to as “cyborgization.” However, there are currently significant risks of medical complications when implanting electrodes in the brain, so it’s highly doubtful that brain-computer interfacing will bring us to superintelligence before other methods do.
  • Networks and organizations. The final path toward superintelligence is “through the gradual enhancement of networks and organizations that link individual human minds with one another and with various artifacts and bots.” Bostrom calls this “collective superintelligence” and feels that it will make a difference in our lives sooner, but in smaller ways than biological enhancements or brain-computer interfaces would. Most importantly, he doesn’t feel it promises anything as earth-shattering as AI and WBE.
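The FLOPS figures in the first path above can be put side by side to see how large the gap really is. The numbers below are the ones quoted in the summary; the comparison itself is just illustrative arithmetic, not a calculation from the book.

```python
# How far a ~3.39e16 FLOPS supercomputer (the Tianhe-2 figure quoted
# in the summary) falls short of the 1e31-1e44 FLOPS Bostrom cites for
# recapitulating human evolution within a single year.

TIANHE2_FLOPS = 3.39e16                      # sustained throughput
EVOLUTION_LOW, EVOLUTION_HIGH = 1e31, 1e44   # required sustained FLOPS

shortfall_low = EVOLUTION_LOW / TIANHE2_FLOPS
shortfall_high = EVOLUTION_HIGH / TIANHE2_FLOPS
print(f"Shortfall: {shortfall_low:.1e}x to {shortfall_high:.1e}x")
```

Even at the optimistic lower bound, the machine would need to be roughly fourteen orders of magnitude faster, which is why the evolutionary-recapitulation route remains far out of reach today.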

The dangers of AI: the control problem

If superintelligence arrives, it will most likely come in one of two forms: whole brain emulation or artificial intelligence. In either case, it will have an almost immediate impact. The scenario was first envisioned by Alan Turing’s colleague, mathematician I. J. Good.

“Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man, however clever,” he wrote in 1965. “Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an ‘intelligence explosion,’ and the intelligence of man would be left far behind. Thus, the first ultraintelligent machine is the last invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control.” That final stipulation – “provided that the machine is docile enough” – is what worries Bostrom the most.

Namely, regardless of what goals human programmers set for this hypothetical superintelligent brain, once it becomes functional, it will inevitably (and quite spontaneously) generate a number of natural subgoals, such as self-preservation, goal-content integrity, cognitive enhancement, technological perfection, and resource acquisition. For example, if we give an AI agent the sole objective of maximizing the manufacture of paperclips, what would prevent this agent, once it becomes smarter than us, from acquiring all the world’s resources to manufacture heaps of unnecessary paperclips? 
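The paperclip thought experiment can be made concrete with a toy sketch (my own illustration, not code from the book): an agent that scores hypothetical actions purely by expected paperclip output will always prefer grabbing more resources, because the side effects of doing so are simply invisible to its objective.

```python
# Toy illustration of a single-minded objective function. Each action
# maps to (paperclips produced, resources consumed); the consumption
# column exists in the world but plays no role in the agent's score.

ACTIONS = {
    "run one factory":        (1_000, 1),
    "acquire more resources": (50_000, 100),  # more inputs -> more clips
    "leave resources alone":  (0, 0),
}

def paperclip_score(action: str) -> int:
    """Objective that counts only paperclips; side effects are ignored."""
    clips, _resources = ACTIONS[action]
    return clips

best = max(ACTIONS, key=paperclip_score)
print(best)  # "acquire more resources"
```

Nothing in the objective penalizes consuming the world’s resources, so the greedy choice is always the most acquisitive one – which is exactly the instrumental subgoal Bostrom warns about.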

This is the essence of the “AI control problem,” which can be summed up in a simple question: is it possible to engineer a controlled detonation? Solving this problem with regard to AI is extremely difficult, because most of the things that stop us from destroying each other – decency, kindness, compassion, happiness – cannot be translated into machine-implementable code. And even if they could be one day, what would guarantee the absence of unforeseen consequences? After all, in all stories that feature genies, granted wishes tend to produce undesired effects for the one who makes them. That’s why the AI control problem raises the following all-important question: if we haven’t found a way to control the detonation beforehand, should we keep playing with the bomb?

The unfinished fable of sparrows

Rather than going over the many possible scenarios and strategies with Bostrom, let us end our summary where he begins his book: with his cautionary and oft-quoted “Unfinished Fable of the Sparrows.” Just like all good fables, it simultaneously says nothing about the real subject matter and everything that really needs to be said about it.

During one nest building season, a group of sparrows debates the benefits of raising an owl chick as their servant. “Imagine how easy life would be with an owl here!” one of the sparrows says. “It could look after both our young and the elderly, and could help us build our nests. It could even keep an eye out for the neighborhood cat!” Excited at the proposal, Pastus, the elder-bird, immediately sends sparrow-scouts in all directions of the world to try and find an owl egg or an abandoned owlet. “This could be the best thing that ever happened to us,” he chirps in pleasure as soon as the sparrow-scouts head out.  

There is only one voice of dissent: Scronkfinkle, “a one-eyed sparrow with a fretful temperament.” Turning to Pastus, he asks the elder-sparrow an all-important question: “Should we not give some thought to the art of owl-domestication and owl-taming first, before we bring such a creature into our midst?” Pastus is unconvinced by the hesitation. “Taming an owl sounds like an exceedingly difficult thing to do,” he replies. “It will be difficult enough to find an owl egg. So, let us start there. After we have succeeded in raising an owl, then we can think about taking on this other challenge.”

“There is a flaw in that plan,” shrieks Scronkfinkle, but his protests fall on deaf ears. While the sparrow-scouts are out searching for an owl egg, the remaining sparrows begin working out how an owl might be tamed or domesticated. It’s a difficult job, they realize in the midst of their discussion, and the absence of an actual owl makes it nearly impossible. However, at this point, they have no choice but to press on as best they can, since the sparrow-scouts may return with an owlet any minute now. “It is not known how the story ends,” Bostrom comments ominously here, adding that he dedicates his book to Scronkfinkle and his followers. We dedicate our summary to them as well.

Final notes

Recommended by everyone from Bill Gates to Elon Musk, “Superintelligence” is an outstanding book that covers so much ground in its roughly 400 densely packed pages that there are only a few other AI-related books you’ll need to read besides it.

Moreover, it is a very timely book on a very timely subject. “If this book gets the reception that it deserves,” wrote mathematician Olle Häggström in a review, “it may turn out the most important alarm bell since Rachel Carson's ‘Silent Spring’ from 1962, or ever.”

It’s up to readers like you to make sure Bostrom’s book earns that standing.

12min tip

Try to be Scronkfinkle, never Pastus.


Who wrote the book?

Nick Bostrom is a Swedish-born philosopher and polymath. He is a professor at Oxford University, the founding director of the Future of Humanity Institute, and one of the most respected global thinkers.
