Superintelligence: Paths, Dangers, Strategies

by Nick Bostrom

First posted October 2016

Rare is the day I so fully respect a book I don’t actually like. Many small pieces of this stuck in my mind, though I still disagree with its central tenets. If I wrote a non-fiction book about an unsettled topic, I would dearly want to influence people who didn’t walk in hungry to agree with me.


It is a truly impressive feat to alienate a reader with your fundamental hypothesis and still create a book said reader wants to continue to read. I virulently disagreed—even after finishing—with most of the presuppositions within Superintelligence: Paths, Dangers, Strategies. Often, this type of foundational disjointedness compels me to contemptuously spite-read something with extreme care so as to more fully pick it apart. In this case—though I continued to disagree often and wholeheartedly—I was genuinely interested in what the book had to say, and how it said it.

What it had to say is this: we, as a human species, are going to make machine intelligence. That intelligence will eventually be “smarter” than us. If we don’t plan for that, it could wipe us out, and the risk of that happening is greater than the risk of it not. That is why I strain against the functional baseline of the text:

Without knowing anything about the detailed means that a superintelligence would adopt, we can conclude that a superintelligence—at least in the absence of intellectual peers and in the absence of effective safety measures arranged by humans in advance—would likely produce an outcome that would involve reconfiguring terrestrial resources into whatever structures maximize the realization of its goals.

That is the likely outcome only because the book was written during the denouement of Western Civilization’s exploitation capitalism, and we have known nothing but reconfiguring terrestrial resources since the 1500s.

There is dissonance in overlaying utilitarian-model economics onto AI, positing an uber-capitalist state of consciousness as natural when it is no such thing; only a zeitgeist-fueled myopia that adds inevitable—if inaccurate—weight to the continuing apocrypha surrounding the quote, "It is easier to imagine the end of the world than the end of capitalism." Pithy, yet understandable when Randian techno-mantic inevitability creeps into everything:

Our demise may instead result from the habitat destruction that ensues when the AI begins massive global construction projects using nanotech factories and assemblers—construction projects which quickly, perhaps within days or weeks, tile all of the Earth’s surface with solar panels, nuclear reactors, supercomputing facilities with protruding cooling towers, space rocket launchers, or other installations whereby the AI intends to maximize the long-term cumulative realization of its values.

When, “It is important not to anthropomorphize superintelligence when thinking about its potential impacts,” is tossed off frequently, perhaps one shouldn’t anthropomorphize superintelligence at all, let alone as some kind of uber-douche. If AI gains sentience, the theory goes, it will use its superintelligence to manufacture successive, “improved” AI with great rapidity. Many humans now are pretty leery of genetically modifying crops, let alone modifying the human genome. So we don’t modify humans, even though it is now technologically possible to make people “better” on a genomic level. Superintelligence even provides a cogent, if conspiracy-minded, rationale for why it all might go pear-shaped if we try to "maximize" ourselves:

Some countries might offer inducements to encourage their citizens to take advantage of genetic selection in order to increase the country’s stock of human capital, or to increase long-term social stability by selecting for traits like docility, obedience, submissiveness, conformity, risk-aversion, or cowardice, outside of the ruling clan.

That statement is my most reviled in the text; it shows the "people as units of labor" underpinning of Superintelligence, a view so reductive that it only seems smart if you're trying to fit the world into an equation or predictive model.

So we are forced to assume AI would make itself obsolete instantly upon reaching superintelligence—anything else would be anthropomorphizing it. Yet AI would also want to propagate itself across the cosmic endowment. That makes self-preservation—not wanting to replace ourselves with genetically engineered superhumans—a weakness inherent to biological creatures, but species propagation—seizing the cosmic endowment—a right desire shared by all sentience.

All of these assumptions did not thrill me while I was reading Superintelligence; it read like Gordon Gekko rewrote the plot to Terminator 2: Judgment Day. All the best minds of our generation are out there thinking up new ways to convince people to click on shiny images, so I am supposed to accept that an actual AI superintelligence would be the same brand of empty suit, only able to quote Tucker Max a billion times faster while also checking its financial holdings? No, an AI that reached singularity might save the pangolin, or plant trees, or worship the Buddha. The assumption that an AI is going to terraform the planet into its own personal playground and extinguish humanity just because that’s what our society’s most selfish, paranoid, Johnny-von-Neumann-inspired game theorist lunatics might do is so beyond hypocritical that it made me almost stop reading this book.

But all the stupid Chicago School of Economics bullshit concepts—nearly all of which I have had to spot and avoid during my time in a Midwestern law school—can’t detract from how smart the text reads. Awesome stuff like this happens:

The speed of light becomes an increasingly important constraint as minds get faster, since faster minds face greater opportunity costs in the use of their time for traveling or communicating over long distances. Light is roughly a million times faster than a jet plane, so it would take a digital agent with a mental speedup of 1,000,000x about the same amount of subjective time to travel across the globe as it does a contemporary human journeyer.

You’re not getting these details anywhere else—this is thoughtful, fascinating, insightful detail about a theoretical realm of possibility that should excite every living person on the planet. What a trip, just to consider the subjective time-dilation that increased mental processing speed would engender; that near light-speed travel might feel to an AI what air travel does to you and me gives me chills.
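And Bostrom’s arithmetic checks out. A minimal back-of-the-envelope sketch (the 1,000,000x speedup is his; the ~900 km/h jet speed and ~20,000 km hop across the globe are my own rough assumptions):

```python
# Sanity check of the jet-plane analogy. The speedup figure is Bostrom's;
# the jet speed and distance are my own rough assumptions.
C = 299_792.458        # speed of light, km/s
JET = 900 / 3600       # ~900 km/h jet, converted to km/s
DIST = 20_000          # km, roughly halfway around the globe
SPEEDUP = 1_000_000    # Bostrom's hypothetical mental speedup

jet_hours = DIST / JET / 3600             # objective time for a human flyer
mind_hours = (DIST / C) * SPEEDUP / 3600  # subjective time for the sped-up mind

print(f"Human on a jet: {jet_hours:.1f} hours")                   # ~22.2
print(f"1,000,000x mind at light speed: {mind_hours:.1f} hours")  # ~18.5
```

Roughly 22 hours by jet versus roughly 18.5 subjective hours riding a light-speed signal: close enough that the analogy lands.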

This is what I mean when I call Superintelligence an impressive feat; I cannot name another book that spits out so much irksome social theory that I would still recommend without caveat. The chains of logic are so clear and smart; it crafts a space to dislike the premise yet love the process. And—as the book itself makes clear—it may believe what it posits, but it doesn’t need you to; Superintelligence just wants people to start talking about the issue:

It may be reasonable to believe that human-level machine intelligence has a fairly sizeable chance of being developed by mid-century, and that it has a non-trivial chance of being developed considerably sooner or much later; that it might perhaps fairly soon thereafter result in superintelligence; and that a wide range of outcomes may have a significant chance of occurring, including extremely good outcomes and outcomes that are as bad as human extinction. At the very least, they suggest that the topic is worth a closer look.

AI social control is worth a look, as is this book. Even if you, like me, do not agree with basically any of the negative proscriptive baselines, you will still learn things. They may not be party-trick bon mots or hard facts and figures to plunk into your next PowerPoint, but you will learn a system to interpret your own thoughts about Artificial Intelligence. Once I began to appreciate the style of Superintelligence, my previous nit-picking fell away; I stopped reading closely in preparation for an eviscerating review and began reading closely for the sake of the text itself:

One sympathizes with John McCarthy, who lamented: “As soon as it works, no one calls it AI anymore.”

Superintelligence, then, cannot be called AI. Because it works.