This microbook is a summary/original review based on the book: Superintelligence: Paths, Dangers, Strategies
ISBN: 1501227742
Publisher: Oxford University Press
“AI,” the acronym for artificial intelligence, is currently the hottest buzzword in tech. But is AI the future of our species, or its inevitable demise? Swedish philosopher Nick Bostrom investigates this question thoroughly in “Superintelligence,” one of the finest books on the subject. So, get ready to discover whether superintelligent robots will roam the earth sometime during your lifetime and, if so, what programmers should do now to prevent them from annihilating humanity in the future!
For most of its history, humanity developed at an extremely slow pace. “A few hundred thousand years ago,” says Bostrom, putting things into perspective, “growth was so slow that it took on the order of one million years for human productive capacity to increase sufficiently to sustain an additional one million individuals living at subsistence level. By 5,000 B.C., following the Agricultural Revolution, the rate of growth had increased to the point where the same amount of growth took just two centuries. Today, following the Industrial Revolution, the world economy grows on average by that amount every ninety minutes.”
This kind of growth rate seems fantastic by any standard. Just three centuries ago, labor was almost exclusively manual; nowadays, robots manufacture self-driving electric vehicles 24 hours a day. Ever since its advent about six decades ago, computing technology has roughly followed the famous Moore’s law, doubling in power and performance every 18 months or so. As famous physicist Michio Kaku recently pointed out, that means your smartphone today has more computing power than all of NASA had back in 1969, when it put two astronauts on the moon!
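To put that doubling claim in perspective, here is a quick back-of-the-envelope calculation in Python (our illustration, not the book’s), assuming the 18-month doubling period cited above:

```python
# Back-of-the-envelope: what an 18-month doubling period compounds to.
# Both figures below are the ones cited in the text, not measured data.

YEARS = 60            # "about six decades" of computing history
DOUBLING_MONTHS = 18  # assumed Moore's-law doubling period

doublings = YEARS * 12 / DOUBLING_MONTHS  # 40 doublings
growth = 2 ** doublings                   # ~1.1 trillion

print(f"{doublings:.0f} doublings -> roughly {growth:.1e}x more computing power")
```

Forty doublings multiply capacity by about a trillion, which is why a pocket-sized phone can outclass the room-sized machines NASA used in 1969.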
In 1958, an article published by the IBM Journal of Research and Development stated, unequivocally, that “if one could devise a successful chess machine, one would seem to have penetrated to the core of human intellectual endeavor.” Just four decades later, IBM’s Deep Blue defeated the world chess champion Garry Kasparov. Nowadays, there exists a self-trained computer program called AlphaZero that can soundly beat every traditional chess engine, each of them substantially better than Deep Blue. Even more frighteningly, IBM also developed a question-answering computer system called Watson that has already defeated the two all-time-greatest human “Jeopardy!” champions, Ken Jennings and Brad Rutter.
Though this exponential growth will inevitably plateau once fundamental physical limits are reached, there is still so much room for advances in computing technology that it’s difficult not to ask two questions: will machines surpass humans in general intelligence? And if so, what will happen next – will artificial agents save us or destroy us?
There is a big difference between what IBM’s Deep Blue and Watson are capable of and what humans can do. As great as these two machines have proven to be at chess and “Jeopardy!,” they are incapable of doing practically anything else. Unlike them, humans have a broad mental capacity and can acquire and apply knowledge and skills in a variety of areas. This is called “general intelligence,” and it is something all humans possess. The same cannot be said of even the most advanced humanoid robots in existence today.
So, when one speaks of artificial general intelligence (or AGI), one still speaks of something entirely hypothetical. Superintelligence, which Bostrom defines as “any intellect that greatly exceeds the cognitive performance of humans in virtually all domains of interest,” is more hypothetical still. Yet, debates about superintelligent machine brains are hardly new. Science fiction and speculative literature aside, one can trace their beginnings to a six-week summer workshop at Dartmouth in 1956, organized by John McCarthy – the man who coined the term “artificial intelligence” – and attended by about 20 mathematicians and scientists who are now widely recognized as founding figures of the field.
A lot has happened since then, with the field experiencing several seasons of hope followed by seasons of despair. After a series of failures and unkept promises, interest in AI dwindled to the point of disregard for most of the 1970s and again in the early 1990s. However, recent developments in neural networks and deep learning have made many researchers quite optimistic about the prospects of superintelligence. According to a 2009 survey, experts assigned a 90% probability to “human-level machine intelligence” (HLMI) being developed by 2075. If so, our children may have to live in a world where machines are smarter than they are. The problem?
Well, humans evolved to be apex predators and the dominant species in the animal kingdom not because we are faster or stronger than, say, gorillas, lions, and whales, but because we are much smarter than they are. Hence, once machines surpass our children in general intelligence, will our children’s fate depend on their own choices or on a few algorithms our generation developed? After all, if humans decide there should be no gorillas and lions, there’s nothing the gorillas and lions can do to stop us from exterminating them. Are we heading toward a “Terminator”-like, humanless future? If so, when is Judgment Day? And where are all the John Connors?
Before we turn to speculations about what we should do to stop superintelligent robots from destroying the human race, let us first see whether it is at all possible to create such machines. Bostrom outlines several conceivable paths that could lead humanity there: artificial intelligence, whole brain emulation, biological cognition enhancement, brain-computer interfaces, and networks and organizations.
Of these, the book treats the first two as the most plausible: superintelligence, Bostrom argues, will come either in the form of whole brain emulation or artificial intelligence. In either case, once it arrives, its impact will be almost immediate. The scenario was first envisioned by Alan Turing’s colleague, mathematician I. J. Good.
“Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man, however clever,” he wrote in 1965. “Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an ‘intelligence explosion,’ and the intelligence of man would be left far behind. Thus, the first ultraintelligent machine is the last invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control.” That final stipulation, that the machine be “docile enough,” is what worries Bostrom the most.
Namely, regardless of what goals human programmers set for this hypothetical superintelligent brain, once it becomes functional, it will almost inevitably (and quite spontaneously) converge on a number of instrumental subgoals, such as self-preservation, goal-content integrity, cognitive enhancement, technological perfection, and resource acquisition. For example, if we give an AI agent the sole objective of maximizing the manufacture of paperclips, what would prevent this agent, once it becomes smarter than us, from acquiring all the world’s resources to manufacture heaps of unnecessary paperclips?
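To make the paperclip thought experiment concrete, here is a minimal toy sketch in Python (our illustration, not Bostrom’s; the agent, its utility function, and the tiny five-resource world are all hypothetical). Because the agent’s objective counts only paperclips, converting yet another unit of the world’s resources always scores higher than stopping:

```python
# Toy sketch of a misspecified objective (hypothetical; not from the book).
# The agent's utility counts only paperclips, so turning any remaining
# resource into a paperclip always beats leaving it alone.

def utility(paperclips: int) -> int:
    """The agent's entire value system: more paperclips is strictly better."""
    return paperclips

def best_action(paperclips: int, resources: int) -> str:
    # Compare "do nothing" against "convert one more unit of resources".
    if resources > 0 and utility(paperclips + 1) > utility(paperclips):
        return "convert"  # always wins while any resource remains
    return "halt"

paperclips, resources = 0, 5  # a tiny world with 5 units of resources
while best_action(paperclips, resources) == "convert":
    paperclips += 1
    resources -= 1
    print(f"convert: paperclips={paperclips}, resources_left={resources}")
print("halt: nothing left in the world to convert")
```

Nothing in the objective assigns any value to leaving resources alone, so the only stopping condition is a world with nothing left to convert: that is Bostrom’s point in miniature.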
This is the essence of the “AI control problem,” which can be summed up in a simple question: is it possible to engineer a controlled detonation? Solving this problem is extremely difficult, because most of the things that stop us from destroying each other – decency, kindness, compassion, happiness – cannot be translated into machine-implementable code. And even if they could be one day, what would guarantee the absence of unforeseeable consequences? After all, in every story that features a genie, granted wishes tend to produce undesired effects for the one who makes them. That’s why the AI control problem raises the following all-important question: if we haven’t found a way to control the detonation beforehand, should we continue playing with the bomb?
Rather than going over the many possible scenarios and strategies with Bostrom, let us end our summary where he begins his book: with his cautionary and oft-quoted “Unfinished Fable of the Sparrows.” Like all good fables, it says nothing directly about its real subject and yet everything that truly needs to be said about it.
During one nest building season, a group of sparrows debates the benefits of raising an owl chick as their servant. “Imagine how easy life would be with an owl here!” one of the sparrows says. “It could look after both our young and the elderly, and could help us build our nests. It could even keep an eye out for the neighborhood cat!” Excited at the proposal, Pastus, the elder-bird, immediately sends sparrow-scouts in all directions of the world to try and find an owl egg or an abandoned owlet. “This could be the best thing that ever happened to us,” he chirps in pleasure as soon as the sparrow-scouts head out.
There is only one voice of dissent: Scronkfinkle, “a one-eyed sparrow with a fretful temperament.” Turning to Pastus, he asks the elder-sparrow an all-important question: “Should we not give some thought to the art of owl-domestication and owl-taming first, before we bring such a creature into our midst?” Pastus is unconvinced by the hesitation. “Taming an owl sounds like an exceedingly difficult thing to do,” he replies. “It will be difficult enough to find an owl egg. So, let us start there. After we have succeeded in raising an owl, then we can think about taking on this other challenge.”
“There is a flaw in that plan!” shrieks Scronkfinkle, but his protests fall on deaf ears. While the sparrow-scouts are out searching for an owl egg, the remaining sparrows begin working out how an owl might be tamed or domesticated. It’s a difficult job, they realize in the midst of their discussion, and the absence of an actual owl to practice on makes it nearly impossible. However, at this point, they have no choice but to press on as best they can, since the sparrow-scouts may return with an owlet any minute now. “It is not known how the story ends,” Bostrom comments ominously here, adding that he dedicates his book to Scronkfinkle and his followers. We dedicate our summary to them as well.
Recommended by everyone from Bill Gates to Elon Musk, “Superintelligence” is an outstanding book that covers so much ground in its 400 densely packed pages that you’ll need few other AI-related books besides it.
Moreover, it is a very timely book on a very timely subject. “If this book gets the reception that it deserves,” wrote mathematician Olle Häggström in a review, “it may turn out the most important alarm bell since Rachel Carson's ‘Silent Spring’ from 1962, or ever.”
It’s your job to make sure Bostrom’s book earns this standing. Try to be Scronkfinkle, never Pastus.
Nick Bostrom is a Swedish-born philosopher and polymath. He is a professor at Oxford University and the founding director of the Future of Humanity Institute, and one of the most respected global thinkers.