Optimum Power: Sustaining Our Power Over Time
The book so far has laid out how humanity has overpowered the rest of nature, and argued that the only solution for our survival is to limit our own power. Chapter six argues that this is actually possible—at least in principle.
It always has been so. The grievances of those who have got power command a great deal of attention; but the wrongs and the grievances of those people who have no power at all are apt to be absolutely ignored. That is the history of humanity right from the beginning.
― Various, The Suffragettes
He who reigns within himself, and rules passions, desires, and fears, is more than a king.
― John Milton
In 1964, Soviet astronomer Nikolai Kardashev proposed a scale for measuring a civilization’s level of technological advancement based on the amount of power it can command. The scale has three levels:
- Type I civilization, also called planetary civilization, can use all of the power reaching its planet from its closest star. Estimated to be in the range of 10¹⁶ to 10¹⁷ watts, this amount of power would be three or four orders of magnitude more than humans are currently able to wield (2×10¹³ watts).
- Type II civilization, also called stellar civilization, can control power at the scale of its solar system (the sun’s power is about 4×10²⁶ watts).
- Type III civilization, or galactic civilization, can control power at the scale of its entire galaxy (the Milky Way galaxy shines at 4×10³⁷ watts).
Theoretical physicist and futurist Michio Kaku has suggested that humans may attain Type I status in a century or two, if our technological advancement continues apace and we don’t blow ourselves up or destroy our planet in the meantime. Achieving Type II status would take us a few thousand years, and Type III status up to a million years.1
If the Kardashev scale presents a visionary extrapolation of evolution and astrophysics, the Fermi Paradox offers a more pessimistic take on our future. Proposed by Italian-American physicist Enrico Fermi in 1950 during conversations with fellow physicists, this paradox consists of the contradiction between the high statistical likelihood that many other technologically advanced civilizations exist in the universe, and the lack of any reliable observations to indicate the presence of even one such civilization. Fermi’s reported question was, “Where is everybody?”2
Scores of scientists from various disciplines have spent significant effort attempting to explain the paradox. Wikipedia lists 23 solutions that have been published by one or another scientist or team of scientists. The solution (among those 23) that fits best with the story of human power developed in this book is summarized by the dismal sentence, “It is the nature of intelligent life to destroy itself.” If there are other intelligent species that rise to planetary dominance by way of fossil fuels (which was the only realistic pathway available to Homo sapiens), they may do essentially what we are doing—they will undermine their planetary climate and deplete their fossil energy resources. There’s no one else out there because other technologically advanced species have either self-destructed, or have lost their ability to maintain advanced technology before they could use it to send out interstellar signals saying, “Hello! We’re here! Is anyone out there?”3
But there is another solution to the Fermi Paradox, one that’s not mentioned in the Wikipedia summary. Perhaps there are indeed at least some intelligent extraterrestrial species in the universe, but, rather than flaming out after seeking to maximize their power, they have instead found ways to limit their own power voluntarily. They’ve kept their power within bounds that would permit the long-term flourishing of healthy, diverse ecosystems on their home planets, so that they and the other beings they share their planets with can continue to thrive for millions of years. They have expressed their intelligence not by dominating ever-greater cosmic domains, but by developing their aesthetic creativity and spirituality, and they spend their days simply enjoying and caring for their small but beautiful worlds.
I’d argue that careful observation of nature suggests that these are the two most likely solutions to the Fermi Paradox, and that they point to the most likely futures for our own species.4 Further, the two solutions are not mutually exclusive; the first could be a brief pathway to the second.
This chapter will make the case that we humans are capable of controlling our thirst for power, and we have a long history of doing so. That capacity is rooted in similar behavior expressed throughout nature. Indeed, we need a new concept, as significant in its way as the maximum power principle, to describe this universal tendency. I call it the optimum power principle, defined as the tendency of natural and human systems to sacrifice some measure of power in the present so as to maximize power over a longer period of time. It doesn’t contradict the maximum power principle; it merely adds the element of duration.
As we are about to see, power can be curbed by involuntary means: an organism or system of organisms can collide with natural limits and either be extinguished completely, or reorganize itself into a lower-power state that can be sustained. On the other hand, power can also be curbed voluntarily: an organism or system of organisms can, at least under some conditions, learn to anticipate limits and adapt its behavior to stay within them.
Let’s explore the involuntary curbing of power first.
Involuntary Power Limits: Death, Extinction, Collapse
Every organism faces an ultimate existential limit to all its powers, in the form of death. The very idea of death can be frightening and depressing. Upon death, an entire universe of perceptions, feelings, and actions coming to focus in a particular individual vanishes forever. Why would something so awful to contemplate be inevitable?
Evolution must have a good reason for death, and it’s not hard to see what that reason is. In principle, there is no reason organisms couldn’t have evolved to be immortal. Actually, a few come close. The bacterium Deinococcus radiodurans can survive intense radiation, extreme cold, and corrosive acids. And tardigrades (known colloquially as water bears), a phylum of tiny water-dwelling eight-legged segmented micro-animals, seem immune to dehydration, high heat, and even the vacuum of space. They’ve survived all five mass extinctions.
However, in order to be immortal (or nearly so), organisms have to settle for extreme limits on their powers of motion and perception, and they probably need to remain tiny. Larger organisms and more sophisticated organs inevitably accumulate injury to their tissues over time. DNA sustains damage from natural (or human-manufactured) chemicals in the environment, or from cosmic radiation or copying errors. Cells sometimes divide incorrectly, proteins can misfold, and organisms can succumb to disease or injury. Nature’s strategy is to let cells and organisms eventually die, and thus to cede the opportunity of existence to their replacements. After all, if all organisms were immortal but still capable of reproduction, they would proliferate and accumulate to the point where all possible food sources would be consumed and there would be no space for anyone new. There’s just no getting around the necessity of death.5
Non-human organisms appear not to be aware of the inevitability of their own death, so they don’t have to cope with that awareness. A few intelligent animals (including crows and elephants) take note of the deaths of their comrades and appear to mourn them, but we don’t know if they are able to contemplate their own mortality. For us humans, though, usually beginning in late childhood, language and rational thought ensure that we inescapably know that everyone will die sooner or later, ourselves included. Thus, death is a big deal long before the event itself. A field of psychology known as Terror Management Theory addresses the psychological conflict between our self-preservation instinct and our knowledge of our own eventual demise, and thereby seeks to explain a wide array of cultural institutions that appear to promise immortality—including, but not limited to, religious beliefs and rituals.6 (We will return to Terror Management Theory later in this chapter, when we discuss the psychological basis of climate denial.)
The extinction of a species is a form of collective death, the passing of a whole way of being. Some species manage to hang around for a very long time: cyanobacteria have been here for 2.8 billion years, and horseshoe crabs for a respectable 450 million years. But the average mammalian species persists just one to ten million years. It’s likely that extinctions—especially mass extinctions—clear the way for the evolution of new life forms, but the precise role of extinction in evolution is still under investigation. All we can say for sure is that warm-blooded species like ours tend to persist for only a very few multiples of the timespan we humans have already been here.
Another natural power limit derives from predation. Nature keeps the population levels of organisms in check via predator-prey relationships, which form the warp and weft of the food web. Predators—including micropredators, in the forms of viruses and bacteria—limit the population of prey species, but a decline in the population of prey species (due to any cause, including overpredation) can lead to a fall in the population of predators. Typically, the abundance of prey and predators is characterized by cycles, with the population peaks of predators lagging those of prey.
Figure 6.1 Predator/prey dynamics.
Credit: Based on a famous chart in Eugene Odum, Fundamentals of Ecology (W. B. Saunders, 1959). Canadian data available at http://people.whitman.edu/~hundledr/courses/M250F03/M250.html.
As a way of introducing a few concepts in population ecology that we’ll find useful in a moment, let’s consider one example—the field mouse, or vole. Its numbers in any given area vary according to the relative abundance of its food (typically small plants), which in turn depends on climate and weather. The local vole population size also depends on the numbers of its predators—which include foxes, raccoons, hawks, and snakes. A wet year can result in abundant plant growth, which temporarily increases the land’s carrying capacity for voles, allowing the vole population to grow. This growth trend is likely to overshoot the vole population level that can be sustained in succeeding years of normal rainfall; the result is an eventual partial die-off of voles. Meanwhile, during the period that the population of voles is larger, the population of predators—say, foxes—increases to take advantage of this expanded food source. But as voles start to disappear, the increased population of foxes can no longer be supported. Over time, the populations of voles and foxes can be described in terms of overshoot and die-off cycles, again tied to external factors like longer-term patterns of rainfall and temperature.
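The overshoot-and-lag pattern just described is captured by the classic Lotka–Volterra equations of population ecology. The sketch below integrates them with simple Euler steps; the parameter values are invented for illustration and are not fitted to real vole or fox data, which are far noisier.

```python
# Lotka-Volterra predator-prey cycles, integrated with Euler steps.
# All parameter values are illustrative, not fitted to any real species.
def simulate(prey=40.0, predators=9.0, steps=2000, dt=0.01,
             a=1.0, b=0.05, c=0.5, d=0.01):
    """a: prey birth rate, b: predation rate,
    c: predator death rate, d: predator births per prey eaten."""
    history = []
    for _ in range(steps):
        dprey = (a * prey - b * prey * predators) * dt
        dpred = (d * prey * predators - c * predators) * dt
        prey, predators = prey + dprey, predators + dpred
        history.append((prey, predators))
    return history

history = simulate()
prey_vals = [p for p, _ in history]
pred_vals = [q for _, q in history]
```

Plotting the two series reproduces the familiar cycling of Figure 6.1: prey overshoot their long-run level (c/d = 50 here), predators overshoot theirs (a/b = 20) with a lag, and each crash sets up the next boom.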
We humans killed off most of our macropredators a long time ago. Occasionally someone still gets munched by a crocodile, alligator, mountain lion, bear, or tiger, but that’s an exceedingly unusual way to go. Micropredators are a different story. Throughout human history, epidemics of infectious diseases were frequent and lethal. The Black Death of the Middle Ages temporarily reversed the growth trend of the global human population, and the influenza outbreak of 1918 killed three to five percent of the total population at that time. As of this writing, the world is still attempting to cope with a coronavirus pandemic. And public health professionals are concerned that today we could be creating conditions for even worse pandemics in the future.7 (See sidebar “Rising Risk of Disease and Pandemic” in Chapter 5.)
The principles of population ecology apply as much to people as to other organisms: we have managed to increase Earth’s carrying capacity for humans through agriculture and the application of fossil fuels to food production (via tractors and nitrogen fertilizers) and transportation (moving resources from where they are abundant to where they are scarce, so that more people can be supported overall). Today we are evidently in a condition of overshoot, in view of the fact that fossil fuels are finite resources and will be difficult to substitute, and also the fact that we are depleting topsoil, fresh water, and other essential natural substances.8 If we are indeed in an overshoot phase, we must do what we can to avert or minimize a die-off event.
The study of predator-prey cycles and other population dynamics has led ecologists, including path-breaking resilience theorist Buzz Holling, to develop the concept of the adaptive cycle. The cycle encompasses four phases: exploitation, conservation, release, and reorganization.9
Figure 6.2 The adaptive cycle.
Credit: Based on the adaptive cycle as developed by Buzz Holling and others. See “Adaptive Cycle,” Resilience Alliance, www.resalliance.org/adaptive-cycle.
Imagine, for example, a Ponderosa pine forest. Following a disturbance such as a fire (in which stored carbon is released into the environment), hardy and adaptable “pioneer” species of low-growing plants and small animals fill in open niches and reproduce rapidly. This reorganization phase of the cycle soon transitions to an exploitation phase, in which slower-growing species that can most effectively exploit available resources over the long term start to dominate. This transition makes the system more stable, but at the expense of diversity. During the conservation phase, resources like nutrients, water, and sunlight are so taken up by the dominant species that the system as a whole eventually loses its flexibility to deal with changing conditions. These trends lead to a point where the system is susceptible to a crash—another release phase. This may come in the form of a wildfire in which many trees die, dispersing their nutrients, opening the forest canopy to let more light in, and providing habitat for shrubs and small animals. The cycle starts over.
In effect, the adaptive cycle is a description of the process whereby communities of organisms collide with limits and rebound. It’s only natural to want to apply this template to human communities as well, and doing so yields some useful insights.
There have been thousands of human cultures, defined by unique languages and sets of customs, but only 24 or so complex societies (or civilizations), defined by the presence of cities, writing, and full-time division of labor.10 In their early days, complex societies are populated with generalist pioneers (people who do lots of things reasonably well) living in an environment with abundant resources ready to be exploited. These people develop tools to enable them to exploit their resources more effectively. Division of labor and trade with distant regions also aids in more thorough resource exploitation. Trading and administrative centers appear and expand. Money is increasingly used to facilitate trade, while debt enables a transfer of consumption from the future to the present. Specialists in violence, armed with improved weaponry, conquer surrounding peoples.
Complexity (more kinds of tools, more social classes, more specialization) solves problems and enables the accumulation of wealth, leading to a conservation phase during which an empire may be built and great achievements made in the arts and sciences. However, as time goes on, the costs of complexity accumulate and the resilience of the society declines. Tax burdens become unbearable, natural resources become depleted, environments become polluted, and conquered peoples become restless. Aspirants to the elite classes become more numerous and begin to compete more strenuously with one another for a limited number of official positions within institutions. At its height, each civilization appears stable and invincible. Yet it is just at this moment of triumph that it is most vulnerable to external enemies and internal discord. Debt can no longer be repaid. Conquered people revolt. A natural disaster may then break open the façade of stability and control.
Collapse often comes relatively swiftly, leaving ruin in its wake. But at least some of the components that made the civilization great (including tools and elements of practical knowledge) persist, and the natural environment has an opportunity to regenerate and recover, eventually enabling reorganization and a new exploitation phase—that is, the rise of yet another civilization.
As noted in Chapter 4, Peter Turchin and his colleagues have analyzed quantifiable data from hundreds of agrarian societies (most were nations within a larger civilization, as France and Norway are nations within today’s global industrial civilization). Turchin found clear evidence of cyclical behavior, with an average periodicity of about 300 years. Societies grew in wealth, geographic extent, and other measures of power, then shrank and simplified, at least temporarily and to some extent.11 Sometimes the retrenchment was so catastrophic that it warrants the term collapse. No complex society has ever just gotten bigger and bigger without eventually experiencing crisis and contraction. Thus contraction, even collapse, should be considered normal features of societal evolution.
In summary, death, extinction, and societal shrinkage or collapse are inevitable phases of individual and collective existence. But these are usually imposed by circumstance; except in the case of suicide (which is almost uniquely human, and is tied to our awareness of our mortality), they’re not deliberately chosen. Learning more about them helps us only so much in thinking about ways humanity could voluntarily reduce its own power in order to avert the crises discussed in Chapter 5. However, observation of nature suggests that self-limitation is also possible.
Self-Limitation in Natural and Human-Engineered Systems
At first thought, human beings might seem to be the only organisms capable of the kinds of conscious calculation, judgment, and planning that would be required for deliberate self-limitation of power. But other creatures engage in self-limiting behavior as well.
Power regulation occurs first of all within organisms via homeostasis. Organisms exhibit dynamic equilibrium, in which temporary imbalances in internal temperature, fluids, or pH trigger rebalancing. For example, in humans, when internal or external temperature exceeds a pre-set limit, we perspire, thereby cooling our skin until its temperature is back within an acceptable range. Each vital variable is controlled by one or more homeostatic mechanisms, which together maintain life.
Homeostasis is an example of what systems theorists call negative (or self-limiting) feedback, where the output of a system is fed back to the system in a way that tends to reduce fluctuations in key system parameters, whatever the cause. The cruise control system in a car, for example, can be set for a target speed. The car, in this example, is the system. One of its subsystems (the speedometer) produces output information about the car’s speed. That output is fed back as an input to the cruise control, which adjusts the accelerator, increasing or decreasing fuel flow to the engine so as to maintain the target speed even when the car is going uphill or downhill.
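The cruise-control loop can be sketched in a few lines of code. This is a toy proportional controller; the units, drag term, and gain value are invented purely for illustration.

```python
# Toy cruise control: negative feedback nudges speed toward a setpoint.
# Units, drag, and gain are invented purely for illustration.
def cruise(target=100.0, speed=80.0, gain=0.5, drag=0.2, steps=50):
    for _ in range(steps):
        error = target - speed        # feedback: measured output vs. setpoint
        speed += gain * error - drag  # throttle correction minus hill drag
    return speed

with_feedback = cruise()             # settles just below the 100 setpoint
without_feedback = cruise(gain=0.0)  # no feedback: the hill simply wins
```

Notice that this simple proportional controller settles slightly below its target (at 99.6 rather than 100, since some throttle is always spent fighting the drag); real cruise controls add further correction terms to cancel that residual error, but the balancing principle is the same.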
Much of our modern technology simply wouldn’t work without mechanical or electronic ways of mimicking homeostasis. The thermostats in our homes keep them from getting too hot or cold. The governor or speed limiter of an engine measures and regulates the engine speed (a classic example is the centrifugal fly-ball governor on a reciprocating steam engine, which uses the effect of inertial force on rotating weights driven by the machine output shaft to regulate its speed: when the shaft spins faster, the balls connected to it fly further apart, thereby closing a valve and limiting the fuel input into the engine). Cybernetics is a scientific field studying all such technological control systems. The principle, whether in biology or engineering, is the same: in order for a system’s operation to proceed in a sustained fashion, internal conditions, including power in various forms, have to be continually managed and controlled through balancing feedback.12 (Predator-prey relationships, discussed above, are an example of balancing feedback in the context of ecosystems.)
Sometimes the output of a system feeds back to the system in a way that continually amplifies an aspect of the system’s behavior. This is positive, or self-reinforcing feedback—which is a confusing term for most people when they first hear it in introductory courses on systems theory. We all appreciate what is colloquially termed “positive feedback,” in the forms of compliments and favorable job reviews. However, positive feedbacks in systems are almost always signs or causes of trouble. Self-reinforcing feedback occurs when the output of a system creates a cause-and-effect loop. For example, when a microphone and a loudspeaker are both connected to a PA system and the mic gets too close to the speaker, or the amplifier volume is turned too high, sound from the speaker is fed back through the mic and amplified continually, resulting in a loud squeal or screech. Jimi Hendrix was a master at controlling electric guitar feedback so as to create thrilling musical effects, but uncontrolled feedback is not so thrilling.
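The squeal is a matter of loop gain, which a few lines can illustrate. This is a cartoon model (real acoustic feedback depends on frequency and room response), but it shows the defining behavior: a loop gain above 1 runs away, while one below 1 dies out.

```python
# Cartoon of audio feedback: each trip through mic -> amp -> speaker
# multiplies the signal. Loop gain above 1 runs away; below 1 dies out.
def recirculate(signal=0.001, loop_gain=1.2, passes=40, ceiling=1.0):
    for _ in range(passes):
        signal = min(signal * loop_gain, ceiling)  # the amp clips at its ceiling
    return signal

squeal = recirculate(loop_gain=1.2)  # runaway: ends pinned at the ceiling
fade = recirculate(loop_gain=0.8)    # damped: decays toward silence
```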
Many of the crises discussed in Chapter 5 can be understood in terms of the failure of balancing feedbacks, or the ramping up of self-reinforcing feedbacks. The global climate offers many instructive examples. Here is an instance of balancing feedback: long before humans started burning fossil fuels, large amounts of carbon dioxide were already venting into the atmosphere from the respiration and decomposition of organisms, wildfires, and volcanic eruptions. Carbon was (and is) absorbed by green plants, by carbonate rocks, and by the oceans. When CO2 levels increased steeply following major volcanic eruptions roughly 93 million years ago, the rebalancing process took a long time: green plants flourished, more carbonate rocks formed, and CO2 levels eventually settled back to their long-term normal range.
Today, as a result of very high and growing carbon emissions from fossil fuels, we seem to be setting off a series of self-reinforcing climate feedbacks. Higher air temperatures are melting the north polar ice cap, which exposes dark water. The exposed water absorbs more heat from sunlight than does white snow or ice, thus heating the atmosphere further, thereby melting more ice. Higher temperatures also melt permafrost soils in northern latitudes, causing the decomposition of buried organic matter as the permafrost thaws; this releases CO2, which creates higher atmospheric temperatures, which melt more permafrost.
In several respects, the Great Acceleration was itself an example of positive, self-reinforcing feedback. As humans gained access to more energy from fossil fuels, they increased their economic activity, which increased their demand for energy. More energy, plus artificial fertilizers, enabled more food production, which enabled more population growth, which again led to more energy demand, as well as more demand for food. Growth created the conditions for more growth, the same way a wildfire—at least in its initial stages—creates the conditions for its own spreading, as it heats and dries the vegetation around it and generates winds that broadcast embers.
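That energy–growth–demand loop compounds on itself, which a minimal sketch makes visible. The three-percent coupling rates here are invented; the point is the shape of the loop, not a model of the actual economy.

```python
# Self-reinforcing loop: energy enables growth, growth demands energy.
# The 3% coupling rates are invented for illustration.
def accelerate(energy=1.0, economy=1.0, years=100, coupling=0.03):
    for _ in range(years):
        economy += coupling * energy  # more energy -> more economic activity
        energy += coupling * economy  # more activity -> more energy demand
    return energy

century = accelerate(years=100)
half = accelerate(years=50)
```

Because each variable feeds the other, growth accelerates: the second fifty years add far more energy use than the first fifty, which is the signature of reinforcing feedback.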
The point of this discussion of systems, homeostasis, and feedback deserves to be underscored: self-limitation is inherent in all systems; they can’t function without it. Moreover, the failure of crucial self-limiting behaviors or mechanisms is always cause for concern and can be catastrophic.
Sidebar: The 2,000-Watt Society
A self-limiting organism or colony of organisms can restrict its own growth in many ways. A single organism may have a genetically determined maximum size, while a colony of organisms may release waste that is toxic to the colony once it exceeds a certain size. In the case of parasites, self-limitation of the colony’s size may be advantageous to continued survival: if the number of parasites becomes too high, they will kill the host, and hence themselves.
In some prey species, self-limitation of numbers keeps predators away by ensuring that there isn’t enough food available for them to bother with. This is a frequent survival strategy for rare species (those represented in any given environment by relatively few individuals). Ecologist Glenda Yenni, in her published work based on observations and modeling of desert plants and animals, argues that “strongly self-limiting rare species are common.”14 Theory predicts that rare species should quickly go extinct. But observation shows that they don’t; instead, they can persist for very long periods, and any given environment may include many species that are each represented by very few individuals. Yenni concludes that rare species “are rare because they are more self-limiting.”
… [S]elf-limitation occurs when a species is more negatively affected by other members of the same species than it is by members of other species. The stronger this self-limitation is, the more a species is negatively affected when its numbers get too high. While this can prevent these species from becoming abundant, it also means that a species with strong self-limitation is more positively affected when its numbers are very low, i.e., it can rebound quickly when its population becomes small.15
Some species limit themselves by specializing on a rare resource, whose scarcity limits the species’ reproduction and population growth. Others are limited by frequency-dependent predation, wherein a species may be very susceptible to a predator when its population grows, but undetectable to that same predator when it is very rare.16
Take, for example, the American pika (Ochotona princeps), a small relative of the rabbit. It specializes on a rare habitat type, talus fields in high alpine meadows. The pika’s resting body temperature is only about 3°C lower than lethal body temperature (due to a high metabolic rate and low thermal conductance), so it is confined to places where it can easily thermoregulate. It can survive without drinking liquid water, getting most of its water from the vegetation it eats. Pikas build up body fat all summer so as to make it through the winters, which they spend in their burrows. Though they are widespread in the American West, at each location they are relatively scarce compared to other montane mammals (marmots, squirrels, chipmunks, and rabbits) due to their choice to subsist in this extremely harsh environment.17 They have limited their population size by specializing so much, but they have gained relative stability.
Rare species that are self-limiting trade away the opportunity for momentary abundance, but they gain resilience against the likelihood of extinction, which explains why there are so many such species around to observe. “The conclusion using either abundance or energy use,” writes Yenni, “is that though strong self-limitation seems at first a counter-intuitive candidate to explain the persistence of rare species, it arises as a relatively prevalent pattern across many types of ecological communities.”
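Yenni’s mechanism (negative density dependence concentrated within the species) can be sketched with a standard Beverton–Holt growth model. The parameter values below are illustrative, not drawn from her data.

```python
# Beverton-Holt sketch of self-limitation: a species' per-capita growth
# rate r is discounted as its own density rises. Parameters are illustrative.
def step(n, r=2.0, self_limit=0.1):
    return n * r / (1.0 + self_limit * n)

def equilibrium(self_limit, n=1.0, iters=200):
    for _ in range(iters):
        n = step(n, self_limit=self_limit)
    return n

strong = equilibrium(self_limit=0.5)   # strongly self-limiting: stays rare
weak = equilibrium(self_limit=0.01)    # weakly self-limiting: becomes abundant
```

The strongly self-limiting species settles at an abundance of (r − 1)/self_limit = 2, while the weakly self-limiting one reaches 100. Yet at low density both grow at nearly the full rate r: strong self-limitation costs a species abundance, but buys it a quick rebound whenever its numbers crash, which is exactly the trade Yenni describes.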
Abundant species, which more often are generalists, follow a different survival strategy. Rather than restricting their own population through their choice of food or habitat, they are more likely to be population-constrained by external factors such as predators and variations in food supply. Humans have increasingly taken this latter pathway. In doing so, we have also found ways to push against external limits, such as through the elimination of our predators (including disease organisms), and through intensified food production. This strategy has worked spectacularly well for us, in that we have been able to grow our population and consumption levels dramatically and quickly. But now those expanded external limits are starting to snap back, or threatening to do so. Perhaps our only way out is to learn or relearn voluntary self-limiting behaviors. Fortunately, we have some history to fall back upon.
Taboos, Souls, and Enlightenment
If other organisms have the capacity for self-limitation, humans certainly do, too. Indeed, self-limiting behavior is extolled and supported in the traditions of cultures around the globe.
In hunter-gatherer communities, as we’ve seen, authority was situational and nearly everything was shared. Bullies were eliminated through ostracism or capital punishment. There was little opportunity for the development of extreme inequality of any kind, and power relations were almost entirely horizontal rather than vertical. Children were taught to be humble and self-effacing so as to maintain solidarity within the group. Anthropologist Richard Lee, who studied the !Kung people of southern Africa, noted that when a hunter brought back a prized animal to share with the band, he always talked about how skinny and worthless it was. If he failed to do so, others would complain about the meat and make fun of him. When Lee asked about this, he was told: “When a young man kills much meat, he comes to think of himself as a big man, and he thinks of the rest of us as his inferiors. We can’t accept this. We refuse one who boasts, for someday his pride will make him kill somebody. So we always speak of his meat as worthless. In this way we cool his heart and make him gentle.”18
Taboos against overhunting were traditional methods of self-restraint and ecological stewardship. One example: the Bayaka of the Congo placed leaf cones on paths that led into parts of the forest where hunting had been unsuccessful, thus warning others to avoid those areas, and giving game populations time to recover.19 Such practices were widespread and varied. Tribal taboos regulating the harvest of vulnerable species took at least six forms, according to researchers Johan Colding and Carl Folke.20 These included “segment taboos,” which forbade individuals of a certain age, sex, or social class from harvesting a resource; “temporal taboos,” which banned the use of a subsistence resource during certain days, weeks, or seasons; “method taboos,” which restricted overly efficient harvesting techniques that might deplete the stock of a resource; “life-history taboos,” which forbade the harvesting of a species during vulnerable periods of its life history such as spawning or nesting; “specific-species taboos,” which protected a species at all times; and “habitat taboos,” which forbade human exploitation of species within particular reefs or forests that served as biological reserves or sanctuaries. Given the evidence that ancient peoples, as they migrated into new territories, often hunted abundant prey species to the point of extinction, it seems probable that Indigenous conservation practices were learned over a long time, through trial and error.21 As Clark Monson points out in his thorough review of the subject, Indigenous resource management is now being studied widely as a model for modern practice.22
In horticultural societies, social power took the form of prestige, but the Big Man was only able to gain this prestige through his generosity and encouragement of others; he lacked the ability to coerce anyone else in the group into doing anything. Periodic potlatch (give-away) feasts kept material inequality to a minimum and ensured that everyone in the group shared in whatever surplus was produced. While such traditions did not limit overall resource usage, they did keep economic inequality to a minimum.
As cooperation in ever-larger groups made vertical social power possible, checks on the accumulation of authoritarian power often fell away. Still, in early state societies, extreme inequality of wealth was somewhat blunted by periodic debt Jubilees, in which families were reunited and regained access to land. Also, kings always eventually died, and uprisings sometimes toppled governments.
Even in early state societies, the peasantry usually still had access to common resources of various kinds (as we saw in Chapter 5). Over the past couple of centuries, economists have debated whether communal management of land and other resources limits or encourages abuse and overuse. In an 1833 essay, British economist William Forster Lloyd used a hypothetical example of the dire effects of unregulated grazing on common land to argue that privatization of land leads to superior management. In 1968, in a widely discussed essay titled “The Tragedy of the Commons,” ecologist Garrett Hardin made the same point, suggesting that common resources such as the atmosphere and oceans are inherently prone to being polluted and overused by members of society who thereby personally gain, while leaving society as a whole to deal with negative impacts.23 However, more recently economist Elinor Ostrom has shown that, in most instances, Indigenous societies managed common resources responsibly. Basing her argument on her own field studies of pasture management in Africa and irrigation systems management in villages of Nepal, as well as numerous carefully designed experiments with test subjects, Ostrom found that societies frequently manage common resources successfully through mutual self-limitation.24
While early kingdoms exemplified vertical social power in the extreme, the pendulum of history was set to swing back toward mutualism and horizontal power. During the Axial Age, new Big God religions proclaimed the holiness of voluntary poverty and service to others (as discussed in Chapter 3). Even members of royal families were expected to at least pay lip service to these new ideals. In China, India, and the Mediterranean region, prophets, philosophers, and sages proclaimed the holiness of self-limitation. Here are just a few representative quotations from ancient texts (many more are collected in the anthology Less Is More, by Goldian Vandenbroeck, from which these are borrowed), giving a taste of the ethic common to the new religions and philosophies of the Axial Age:
Epicurus: “Poverty, brought into conformity with the law of Nature, is great wealth.”
Socrates (via Plato): “… I do nothing but go about persuading you, old and young alike, not to take thought for your person and your properties, but first and chiefly to care about the greatest improvement of the soul.”
Matthew 6:20: “Lay up for yourselves treasures in heaven, where neither moth nor rust doth corrupt, and where thieves do not break through nor steal.”
Matthew 6:28-29: “And why take ye thought for raiment? Consider the lilies of the field, how they grow; they toil not, neither do they spin: And yet I say unto you, that even Solomon in all his glory was not arrayed like one of these.”
Matthew 5:5: “Blessed are the meek, for they shall inherit the earth.”
Mohammed: “Poverty is my pride.”
I Ching: “Limitation must be carried out in the right way if it is to be effective. If we seek to impose restrictions on others only, while evading them ourselves, these restrictions will always be resented and will provoke resistance. If, however, a man in a leading position applies the limitation first to himself, demanding little from those associated with him, and with modest means manages to achieve something, good fortune is the result.”
Tao Te Ching: “He who knows he has enough is rich.”
Confucius: “The superior man understands what is right. The inferior man understands what will sell. The superior man loves his soul. The inferior man loves his property.”
Apollonius of Tyana (1st century, writing of his travels in India): “I saw Brahmans living upon the earth and yet not on it, and fortified without fortifications, and possessing nothing, yet having the riches of all men.”25
Axial Age religions focused on the idea of the soul—an inner essence of each individual which grows as a result of prayer, contemplation, and selfless good works, but atrophies when the person’s behavior is selfish. The worst behavior of all is that which has a tarnishing effect on the souls of others—as when a king leads his people astray morally. The idea of the soul was, and is, partly a means of denying the reality of death. But it has also served to encourage self-limiting, prosocial behavior at all levels of the social pyramid.
The uplifting or purification of the soul was likened to a journey whose ultimate objective was a transcendent state of being. In Eastern traditions, this goal was described as enlightenment; in Western traditions, as sainthood, or simply as wisdom. Enlightenment or saintliness was to be achieved by self-limiting behavior (voluntary poverty, fasting, silent contemplation or meditation, and withdrawal from worldly concerns), and good works on behalf of others—especially on behalf of the souls of others. By setting aside the pursuit of worldly power, one could develop an intangible inner power.
Meanwhile, the Big God was thought of as a Higher Power, capable of creating or destroying the universe and yet interested in the affairs of every person. Compared to this colossal potency, the power of any human individual, even an emperor, was vanishingly insignificant; yet the Higher Power was accessible to the lowest of the low via prayer and meditation. We should, it was believed, maintain an attitude of humility before this unseen Almighty, and make daily decisions on the basis of our considered assessment of the actions He/She/It would find most pleasing.26
The attitudes engendered by these new religions and philosophies worked their way throughout society, beginning with childrearing. Hunter-gatherer parents taught their children the values of sharing, thrift, and modesty. In the Axial Age, these ancient and universal values were revived and supercharged with the belief that the very souls of one’s children were at stake.
Secular social developments also increasingly underscored the righteousness of horizontal power and mutual self-limitation. Repeatedly throughout post-Axial Age history, vertical social power was checked by people acting together. Even though kings and emperors often still reigned, the notion of legal rights of citizens came to be discussed, disputed, and codified. In Greece, democracy emerged as an alternative to the rule of kings—though the opportunity to vote was limited to free, property-owning men. Centuries later, conflict between the English King John and a rebel group of barons led to the creation of the Magna Carta, a charter sealed in 1215 protecting church rights, and guaranteeing the barons freedom from illegal imprisonment, access to swift justice, and limitations on feudal payments to the crown, to be implemented through a council of 25 barons. In subsequent centuries, that charter would serve as the template for constitutions guaranteeing citizen rights and circumscribing the powers of officials. As Russian scientist and anarchist philosopher Peter Kropotkin documented in his remarkable book Mutual Aid: A Factor of Evolution (1902), European free cities (which were self-ruling constitutional entities) and self-governing guilds of artisans exemplified horizontal social power throughout the medieval period. In many instances, secular checks on vertical power were fortified by religious or spiritual beliefs: kings and emperors were seen as unjustly usurping the authority of God, while popular movements for the “leveling” of society proclaimed the holiness of poverty, modesty, and care for others’ needs.
Throughout history, individuals have given up wealth and other forms of social power for ethical reasons. Gautama the Buddha is described in scriptures as having been a prince who renounced his hereditary advantages to work ceaselessly for the enlightenment and uplifting of all sentient beings. Mohandas Gandhi, a prosperous lawyer in South Africa, became the Mahatma (“Great Soul”) by giving away his worldly possessions, taking on a lifestyle of conspicuous voluntary poverty, and dedicating his life to freeing India from the yoke of British colonialism. Leo Tolstoy, a wealthy Russian count, became a Christian anarchist and pacifist, adopted peasant garb, and opposed private property. Today’s philanthropists such as Bill Gates and Warren Buffett dimly echo the sacrifices and achievements of such historical predecessors. It could, of course, be argued that those renunciates were simply trading one form of power for another—the power to compel for the power to inspire. Nevertheless, their example reminds us that the refusal of power is not only possible; it can change history.
Today it’s difficult, maybe even impossible, to imagine a politician proclaiming, “Vote for me and together we will reduce our wealth and power so as to tackle global existential issues like climate change and inequality.” Yet, for many centuries, Buddhist, Christian, and Muslim spiritual leaders have said to their followers, in effect, “Follow me and give up wealth and other forms of worldly power, and you will be happier.”27 And untold millions did just that.
Of course, not everyone responded to such appeals. Wars still raged, kings still sought the greater status of emperor, merchants still sought riches. Class structures, inequality, and ecological abuse persisted because elites generally don’t voluntarily relinquish their power and privilege (they tend to do the opposite). Agrarian cultures still tended to grow too big and complex for their inhabitants to understand the ecological damage they were doing. And in patriarchal societies lacking awareness of population problems and benign forms of birth control, familial reproductive pressures to have children still overwhelmed any concern about long-range, collective demographic consequences.
Nevertheless, a precedent had been established. To assume that people simply won’t voluntarily sacrifice power in order to serve the common good, even on a large scale, is simply incorrect.
Taxes, Regulations, Activism, and Rationing: Power Restraint in the Modern World
As we saw in Chapter 4, fossil fuels have greatly increased the size and complexity of modern societies. While the kingdoms and empires of the past had to balance the powers of the royalty and aristocracy with those of merchants, the church, and the military, today’s industrial nations feature additional power centers, including corporations, banks and other financial institutions, political parties, various communication media, arms manufacturers, unions, and nonprofit advocacy groups. In addition, global power is contested by nations and alliances of nations, using trade, espionage, and propaganda as weapons even when no shooting war exists. Contests for power have become so complicated that a thorough analysis would require hundreds of pages of text. However, our purpose here will be simply to explore how physical power and social power are restrained in the modern world.
As we saw in Chapter 5, the impacts from having accumulated too much power are now legion—including climate change, the proliferation of highly lethal weapons (notably nuclear weapons), pollution, habitat destruction, propaganda, extreme inequality, population growth, and resource depletion. We customarily limit these powers or impacts of power through regulation and treaty. Social movements counter power from above with power from below via popular organizing, nonviolent protest, and even revolution. The greater the concentration of power, the greater the variety and intensity of efforts needed to rein it in—and the greater the likelihood that, in some instances at least, those efforts will themselves become corrupted by abuses of power.
Laws and constitutions have evolved to limit the dangerous accumulation of social power even in the absence of religious commandments and prohibitions. The goal of architects of governmental institutions, beginning with Plato, has been a kind of social homeostasis in which checks and balances prevent power from accumulating dangerously in any one sector of society through a self-reinforcing feedback process. The most decisive element of this social homeostasis is representative democracy, which, over the past two centuries, has taken root in roughly half of the world’s nations. In a constitutional democracy, institutions of government are intended to provide balancing feedback to one another—as when (in the US) Congress investigates and impeaches a President, or courts strike down laws passed by Congress because they violate the Constitution.
The most radical stance in power-sharing political theory was adopted by anarchist philosophers like Mikhail Bakunin (1814-1876). They argued that the state and all forms of hierarchy are inherently evil, and that authority should flow instead from individuals’ talents and labor, with all productive property and factory machinery owned in common by the people. Bakunin differed from Karl Marx (1818-1883), who advocated that the transformation of society to the ideal of perfect distribution (“from each according to their abilities, to each according to their needs”) would require an intermediate stage—a dictatorship of the proletariat (Bakunin thought this was both an ideological and tactical mistake).28 In the US, Eugene Debs (1855-1926), who ran for president five times and was imprisoned for his labor organizing efforts, preached democratic socialism—the notion that elected government should own all industries and divide profits among the workers. During the 20th century, socialist and communist ideas were put into practice to varying degrees in many nations, while anarchism helped inspire thousands of cooperatives and labor unions. In most industrial democracies, the results included better access to health care and other amenities, better working conditions, and a reduction in economic inequality. Unfortunately, the Soviet experiment with Marxism seemed merely to shift authoritarian power from one sector of society (the capitalists) to another (the Party and its functionaries), rather than altogether doing away with vertical social power, which was its ostensible goal.
Prior to the Great Acceleration, religious institutions managed wealth inequality by demanding tithes from the royalty, aristocracy, and merchants, and distributing alms to the poor. Today, however, inequality is managed to a greater degree through graduated taxation and various government redistributive programs (though religious charities still persist). In Britain, Prime Minister William Pitt the Younger introduced the first modern income tax in 1798 to pay for his nation’s involvement in the French Revolutionary War. Pitt’s tax was graduated or progressive, in that low earners paid less: tax rates ranged from less than one percent up to ten percent of income. In the United States, the first progressive income tax was established by President Lincoln in 1862, but repealed in 1872. The Sixteenth Amendment to the US Constitution, adopted in 1913, empowered Congress to begin collecting income taxes to fund the government, and, by the mid-20th century, most other countries had likewise implemented some form of graduated income tax, serving both to fund governments (and their various programs) and to reduce income inequality. In the US, tax rates on the wealthy peaked in the 1950s, when earners in the highest tax bracket were taxed 91 percent of their income. Taxes on capital gains (which apply only to investors) were introduced in the early 20th century. In recent decades, US tax rates on the wealthy have fallen sharply, especially since the Reagan administration, partly as a result of political lobbying by the rich, who have also found a multitude of ways of evading taxation. Unsurprisingly, levels of wealth inequality have rebounded upward as a result.29
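The mechanics of graduated taxation can be made concrete with a short sketch. The bracket thresholds and rates below are hypothetical, chosen only to mirror the described range of Pitt's scheme (under one percent at the bottom, ten percent at the top); each slice of income is taxed at its own marginal rate, so low earners pay a smaller share overall.

```python
# A minimal sketch of marginal (graduated) taxation.
# Brackets are hypothetical, for illustration only: income units up to 60
# are untaxed, the next slice (60-200) is taxed at 5%, and anything above
# 200 is taxed at 10%.
BRACKETS = [
    (60, 0.00),
    (200, 0.05),
    (float("inf"), 0.10),
]

def tax_due(income):
    """Tax owed on `income`, applying each rate only to its own slice."""
    owed, lower = 0.0, 0
    for upper, rate in BRACKETS:
        taxable_slice = max(0, min(income, upper) - lower)
        owed += taxable_slice * rate
        lower = upper
    return owed

# A low earner pays a smaller effective rate than a high earner:
assert tax_due(50) == 0.0    # entirely below the exemption threshold
assert tax_due(100) == 2.0   # effective rate 2%, despite a 5% marginal rate
assert tax_due(300) == 17.0  # effective rate ~5.7%
```

The key design feature is that crossing a bracket threshold never reduces after-tax income, since the higher rate applies only to the portion of income above the threshold.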
Government programs aimed at reducing economic inequality have come to include transfer payments (welfare, financial aid, and Social Security) and social safety nets (unemployment benefits, government-run or subsidized healthcare systems, free education, rights to housing, legal aid, funds for pensioners and veterans, consumer protections, and subsidized services such as public transport). Some nations have more robust public spending programs than others: Europe and Central Asia currently spend the most, averaging 2.2 percent of GDP; the Middle East, North Africa, and South Asia spend the least, at about 1.0 percent. Unfortunately, nations with generous social programs coincidentally tend to have high per-capita greenhouse gas emissions.30 In the US, government redistributive programs have become the subject of much political controversy, with right-leaning politicians seeking to reduce or eliminate programs, and their left-leaning colleagues proposing to expand existing programs or create new ones, such as “Medicare for all” or a guaranteed basic income.
Walter Scheidel, in his book The Great Leveler, argues that these deliberate efforts to manage inequality have been unusual in historical terms, and only partly effectual. He documents, as we saw in Chapter 4, that most of the economic leveling that occurred in the mid-20th century resulted directly or indirectly from the two World Wars (his larger point, as we noted then, is that increased economic equality has usually come about as a result of the Four Horsemen of mass mobilization warfare, transformative revolution, state failure, and lethal pandemics). Timothy Mitchell, in Carbon Democracy, further argues that violent labor disputes during the coal era did much more to promote economic equality than the peaceable tinkering with economic policy that occurred in the post-WWII Oil Age. Nevertheless, as both authors acknowledge, government policy (however arrived at) does impact equity, and so there is every reason for citizens to demand more progressive taxation, higher inheritance taxes, financial transactions taxes, and other economic policies that would make for more equitable distribution of income and wealth.
In the end, all such leveling efforts and influences must push against the inherent tendency of the structures of industrial society (involving energy production, manufacturing, distribution, information flow, and investment) to concentrate power in the hands of various elites—including politicians, financiers, corporate managers, and media gatekeepers. The very nature of complex societies ensures that such power concentrations will emerge; the question is only how, and to what degree, those concentrations will be contested or limited.
The limiting of coercive social power in the modern world is perhaps epitomized in the abolition of slavery. In Chapter 4, we saw how industrial machinery, powered at first by flowing water and increasingly by coal, undermined the institution of slavery in the United States. However, it would be wrong to ignore the role of abolitionists and slaves themselves in that process. Slaves revolted, escaped, and helped others escape; and some former slaves, such as Frederick Douglass, wrote and spoke tirelessly about the horrors of the institution. Abolitionists wrote tracts and pamphlets, ran for office, organized public lectures and demonstrations, and deliberately and often publicly broke laws, thereby risking their freedom and lives, in order to further their cause. They argued the immorality and cruelty of the trade in human persons, forced labor, the separation of families, and the denial of basic human dignity. In doing so, abolitionists set a tactical template for nearly all subsequent human rights and environmental advocacy campaigns.31 Not only did the struggles to abolish the Atlantic slave trade and the practice of slave ownership in the US and other nations succeed, but similar efforts resulted in the banning of child labor and unsafe working conditions, and the creation of regulations instituting the eight-hour workday and the minimum wage.
Internationally, activism partly inspired by the abolitionists was directed toward the ending of colonialism. In countries in Africa, South and Central America, South Asia, East Asia, the Pacific Islands, and the Caribbean, anti-colonial uprisings, boycotts, demonstrations, and wars continued through the mid-20th century. The power of colonizing nations to directly and brutally commandeer labor and resources from other peoples was eventually mostly terminated (though subtler means of exploitation continue to this day). In India, the anti-colonial struggle was led by Mahatma Gandhi, whose theory and method of nonviolent resistance would be studied and emulated by human rights, antiwar, and environmental protection campaigners everywhere.
Also drawing upon the moral impetus and the successful tactics of the slavery abolition movement, the suffragists of the 19th and early 20th century sought to extend full civil rights—beginning with the right to vote—to women. Some polities (including Sweden and the Dutch province of Friesland) had permitted women to vote as early as the 17th and 18th centuries, but typically only in certain districts or if women owned land. New Zealand granted all female citizens the right to vote in 1893. Australia’s states began granting women the right to vote (though not to run for office) in the 1890s; by 1902 a national law granted women (except “aboriginal natives” of Australia, Africa, Asia and the Pacific Islands) not only voting rights but the right to run for federal Parliament. Women in Britain gained voting rights via two laws, in 1918 and 1928; while in the US, women’s right to vote was codified in the 19th Amendment to the Constitution, which became law in 1920.
Majorities often have the power to oppress or marginalize minorities. This power, piggybacking on the religious Big God impulse to promote fertility and population growth, has historically led to prohibitions against same-sex sexual behavior. Countering this, activism by gay rights advocates has led to the decriminalization of homosexuality, which has occurred piecemeal in nations and US states over the past half-century (gay people are still legally persecuted in many African and Middle Eastern countries, and Russia maintains laws restricting freedom of expression and association for LGBTQ people). The struggle for gay marriage and other equal rights for people of all sexual orientations is ongoing.
The environmental movement began in the late 19th century with efforts to protect public lands from exploitation; it has since taken on a widening array of issues, including the protection of threatened species, the ending of various forms of pollution, curbing the growth of human population, and the halting of climate change. Tactics borrowed from the abolition movement and from Gandhi’s nonviolent anti-colonial campaign have led to a long series of victories, including (in the US) the establishment of the Environmental Protection Agency, and the passing of the Clean Air Act, Clean Water Act, and Endangered Species Act.
However, as Chapter 5 hopefully made clear, these measures have been insufficient to halt climate change and many other snowballing environmental problems. This perceived failure has led some members of the environmental movement to question its tactics. Rather than warning of impending crisis and calling for sacrifice (by reducing consumption and birthrates), as many early environmentalists did, some self-described “bright greens” now argue that only good news, and promises of more economic growth and jobs from clean technology industries, can turn the tide.32 However, others argue that most of the failures of the environmental movement did not stem from some flaw in the essential message of first-generation environmentalists, who were correct in saying that humanity will have to rein in its powers of production, consumption, and reproduction if it is to avert ecological ruin. Instead, they would say, the larger failure of environmentalism can mostly be chalked up to psychological denial among the general populace and the momentum of economic growth (as we’ll discuss later in this chapter). It’s true that positive and encouraging messages are helpful, but they need to take the form of believable stories about how we can all thrive together by using less, and in the absence of growth.33 Meanwhile, worsening news about climate change impacts and species declines has ignited a new phase of environmental radicalism epitomized by Extinction Rebellion, a global movement with the stated aim of using nonviolent civil disobedience to compel government action to avoid tipping points in the climate system, biodiversity loss, and the risk of social and ecological collapse (more on that in the next chapter).34
The specter of annihilation from nuclear war led to calls to “ban the bomb” starting in 1957, with the creation of the Campaign for Nuclear Disarmament in Britain. Meanwhile, as the civilian nuclear power industry grew, fears of accidents and concern over the links between nuclear power and the production of fissile material for nuclear weapons led to environmentalists’ engagement with the nuclear issue.35 New nuclear power plants were discouraged through massive ongoing protests (for example, in Sonoma County, California), and protests also contributed to the closure of power plants and a related weapons facility in Hanford, Washington.
In normal times, market economies ration goods by price, so that the people with the most money can consume the most of any good they choose. However, during wartime, in the case of extreme scarcity of essential goods, or when consumption of a good needs to be controlled for some other reason, modern governments have instituted quota rationing, a collective form of self-limitation. Britain and Germany instituted rationing during World War I and again in World War II as a way to meet the basic needs of ordinary citizens while directing large quantities of resources to the military. Overall economic inequality declined as a result. During the Second World War, the United States likewise issued ration coupons—for fuel, food, clothing, and tires, among other things. In his book Any Way You Slice It, author Stan Cox recounts how Americans willingly, even enthusiastically participated in the program; one woman was quoted at the time as saying that “rationing is good democracy.”36 During the first three years of the British rationing program, “overall consumer spending dropped 15 percent and shifted sharply toward less resource-intensive goods,” according to Cox.37 Britain continued its rationing program well after the end of the war, and surveys showed that, during the period of rationing, Britons were generally better fed and healthier than either before or after.
Food rationing began long ago, in ancient Mesopotamia and Egypt, and subsidized food rationing is still commonly practiced—though less often as a way to conserve scarce commodities than as a way to ensure that people with low incomes have access to essential nourishment. In the US, the Supplemental Nutrition Assistance Program (SNAP, formerly known as the Food Stamp Program) served 40 million Americans in 2018, roughly nine percent of the population. Food rationing programs have been implemented in the past, or are currently in effect, in nations as diverse as Argentina, Bangladesh, Brazil, Chile, China, Colombia, Cuba, Egypt, India, Iran, Iraq, Israel, Mexico, Morocco, Pakistan, the Philippines, the Soviet Union, Sri Lanka, Sudan, Thailand, Venezuela, and Zambia. Cox points out that, in the future, societies may face increasing scarcity of water, food, and energy, and may need to find ways to fairly reduce carbon emissions; rationing could play a role in each instance. If economies need to be deliberately shrunk in order to reduce energy usage, quota rationing could provide a means of degrowing them fairly, and with minimal pain and sacrifice.
Surveys suggest that high levels of willing participation in rationing programs during wartime depended on three primary factors: a shared sense of immediate crisis; a common belief that the crisis would pass, so that rationing would be only a temporary inconvenience; and a sense that sacrifices are being shared fairly. It may be a challenge to design future rationing programs in the context of scarcity that is ongoing, and crises that are difficult for many people to understand. Nevertheless, precedent shows that there are effective alternatives to price rationing when markets fail.
The homeostatic mechanisms of modern societies, which take the form of mutually self-limiting institutions whose purpose is to solve human problems (laws, police, courts, and governmental redistributive programs), have somewhat reduced the requirement for Big Gods as motivators of pro-social behavior. Religious affiliation has declined in industrial democracies, and especially in those that provide robust social services—notably the Scandinavian countries. Denmark, for example, is a majority atheist nation, yet manages to remain highly cooperative and peaceful. Ara Norenzayan suggests this is because effective big government can be a replacement for religion, and vice versa.38 The chain of causation is unclear. Is it that, when people believe that government will take care of them and punish anti-social cheaters, their need for a Big God recedes, or does a decline in religious faith lead people to vest more confidence in the problem-solving ability of government? Either way, if and when big governments lose their ability to solve problems, people may yearn for older forms of social control.
The good news is that, in the modern world, we’ve gotten somewhat better than we used to be at managing social power (though reversals are common). Social evolution has proceeded ever more rapidly, creating institutions and strategies for rebalancing when trends get out of hand—when a leader grows too arrogant, when the wealth disparity between rich and poor becomes unbearable, or when the daily operations of society threaten the integrity of natural systems.
The bad news is that our expanding power-management strategies haven’t always kept up with our even more rapid development of powers of fossil-fueled population growth, resource extraction, and industrial production. Systems put in place to prevent social and environmental harm are being overwhelmed (we’ll explore some of the reasons for this in a moment). And despite social services and progressive taxation, wealth has become more concentrated in fewer hands.
Further, many of the new rights and freedoms we’ve gained in the last couple of centuries arose in the context of a society of abundance. What happens if and when the current period of abundance shifts to one of scarcity? Will those freedoms and rights persist? Or will we turn back toward starker forms of vertical social power that prevailed in earlier eras? Ominously, as the fossil-fuel era grinds to a close, we are seeing a trend toward the rejection of liberalism and democracy in at least some nations. This is a subject to which we will return in Chapter 7.
Games, Disarmament, and Degrowth
Before addressing the question of why sophisticated modern power-limiting efforts have been insufficient, it’s worth briefly exploring mathematicians’ contribution to the discussion of mutual power self-limitation. In the last few decades, game theory—typically defined as the study of mathematical models of strategic interaction among rational decision-makers—has become integral to economics, philosophy, international relations, business, and evolutionary biology.39 While many early game theorists chose to address the real-world problem of nuclear disarmament, their findings apply to any situation in which power holders must negotiate a stand-down, including international climate negotiations. Game theory has also been used to explain the evolution of cooperation within and among species. The relevant question that game theory addresses is: If I give up some of my power, how can I be sure that you or someone else will not take up that power and use it against me?
The game that has been studied most extensively is the prisoner’s dilemma, which presents a situation where two parties, separated and unable to communicate, must each choose between cooperating and competing. The most desirable outcome for both will be realized if they cooperate, but there is no way for either to know at the outset that this is the case. Here’s a simple, frequently cited example of the game: suppose two members of a gang of bank robbers, Bill and Joe, have been arrested and are being interrogated in separate rooms. The authorities have no other witnesses, and can only prove the case against them if at least one betrays his accomplice and testifies to the crime. Each must choose to cooperate with his accomplice and remain silent, or defect and testify. If each cooperates with the other and remains silent, then the authorities will only be able to convict them on a lesser charge, which will mean one year in jail each (1 year for Bill + 1 year for Joe = 2 years total jail time). If one testifies and the other does not, then the one who testifies will go free and the other will get three years (0 years for the one who defects + 3 for the one convicted = 3 years total). However, if each testifies against the other, each will get two years in jail for being partly responsible for the robbery (2 years for Bill + 2 years for Joe = 4 years total jail time).
Each robber always has an incentive to defect, regardless of the other’s choice. From Bill’s perspective, if Joe remains silent, then Bill can either cooperate with Joe and do a year in jail, or defect and go free. He would be better off betraying his comrade. On the other hand, if Joe defects and testifies against Bill, then Bill’s choice becomes either to remain silent and do three years or to talk and do two years in jail. Again, he would be better off testifying, thus getting two years instead of three.
The paradox of the prisoner’s dilemma is essentially this: both robbers can minimize the total jail time that the two of them will face only if they both cooperate with one another (in which case they get two years total), but each separately faces incentives that drive him to defect; and if they both defect, they will end up doing the maximum total jail time (four years).
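The payoff logic above can be checked directly. Here is a minimal sketch (my own illustration, not from the text) that encodes the bank-robber payoffs and confirms that defection is Bill’s best response to either of Joe’s choices, even though mutual defection maximizes total jail time:

```python
# The prisoner's dilemma payoffs from the bank-robber example,
# expressed as years in jail (lower is better for each player).
# Keys are (bill_move, joe_move); values are (bill_years, joe_years).
PAYOFFS = {
    ("cooperate", "cooperate"): (1, 1),  # both stay silent: lesser charge
    ("cooperate", "defect"):    (3, 0),  # Bill silent, Joe testifies
    ("defect",    "cooperate"): (0, 3),  # Bill testifies, Joe silent
    ("defect",    "defect"):    (2, 2),  # both testify
}

def best_response(joe_move):
    """Return Bill's jail-minimizing move, given Joe's move."""
    return min(("cooperate", "defect"),
               key=lambda bill_move: PAYOFFS[(bill_move, joe_move)][0])

# Whatever Joe does, Bill is individually better off defecting...
assert best_response("cooperate") == "defect"
assert best_response("defect") == "defect"

# ...yet mutual defection produces more total jail time (4 years)
# than mutual cooperation (2 years).
total_years = lambda outcome: sum(PAYOFFS[outcome])
assert total_years(("defect", "defect")) > total_years(("cooperate", "cooperate"))
```

The `min` over Bill’s two moves is exactly the “dominant strategy” reasoning in the paragraph above: each row of the payoff table, taken alone, points toward defection.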
Solutions to the prisoner’s dilemma are hard to find if the situation is narrowly defined; but real-world prisoner’s dilemmas are typically more complex and offer many opportunities for solution. The tragedy of the commons, mentioned earlier, is an example of a prisoner’s dilemma: if each shepherd grazing her herd on a common green chooses to maximize the size of her herd, the green will be overgrazed and all the sheep will starve. Yet each shepherd is individually incentivized to do just that. Nevertheless, in reality, shepherds have faced this dilemma for several thousand years in innumerable locations, and have found ways to cooperate to the benefit of all.
A prisoner’s dilemma game is often played only once; but, in the real world, many types of competitive and cooperative human interaction play out over a long period and are repeated many times. As people see the results of their choices, they recalibrate and find ways to reward cooperation or punish defection among fellow players. They can create formal rules and institutions that alter the incentives faced by individual decision makers. There are also informal incentives to cooperate, such as the desire to have a favorable reputation among one’s peers, as well as disincentives such as social opprobrium.
Given time, groups of people tend to develop psychological and behavioral biases toward increased trust in one another, long-term future orientation, and inclination toward reinforcement of cooperative behavior and discouragement of anti-cooperative behavior. While these biases can be temporarily reversed during periods of intense social conflict, they otherwise tend to evolve through a selection process within a society, or by group selection across different competing societies. Overall, they often lead individuals to make personally “irrational” choices that produce the most beneficial outcome for the group as a whole.
Disarmament is frequently described as a prisoner’s dilemma: if one nation gives up military assets, it may leave itself vulnerable to attack by another country that hasn’t disarmed. Nevertheless, many arms treaties have been negotiated during the past century, limiting certain kinds of weapons (such as nuclear warheads) or whole classes of weapons (such as biological and chemical weapons). Typically, success is achieved through a sequence of carefully designed and monitored stages, so that no country is unacceptably exposed at any one stage.40
One nation, Costa Rica, has unilaterally disarmed completely: it decommissioned its armed forces in 1948. Similarly, South Africa unilaterally gave up its nuclear weapons in 1989 in an effort to create greater regional stability and to foster respect within the international community.
In 1979, game theorists began using computer programs to run the prisoner’s dilemma and similar games in tournaments; the winner was often a simple “tit-for-tat” program that cooperates on the first step, then, on subsequent steps, does whatever its opponent did on the previous step. Such programs, with only slight modification, have also been used to model natural selection and the evolution of cooperative behavior.
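A tiny iterated-game sketch (again my own illustration; strategy and payoff details are the conventional tournament values, not taken from the text) shows why tit-for-tat fared so well. Using the standard point scores of 3 for mutual cooperation, 1 for mutual defection, and 5/0 for a successful betrayal:

```python
# A minimal iterated prisoner's dilemma, in the spirit of the early
# computer tournaments. Payoffs are points gained (higher is better):
# mutual cooperation 3 each, mutual defection 1 each,
# lone defector 5, betrayed cooperator 0.
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tit_for_tat(my_history, their_history):
    """Cooperate on the first move; then copy the opponent's last move."""
    return "C" if not their_history else their_history[-1]

def always_defect(my_history, their_history):
    return "D"

def play(strategy_a, strategy_b, rounds=10):
    """Run an iterated game and return each player's total score."""
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        move_a = strategy_a(hist_a, hist_b)
        move_b = strategy_b(hist_b, hist_a)
        pay_a, pay_b = PAYOFF[(move_a, move_b)]
        score_a += pay_a
        score_b += pay_b
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))      # (30, 30): sustained cooperation
print(play(always_defect, always_defect))  # (10, 10): mutual punishment
print(play(tit_for_tat, always_defect))    # (9, 14): exploited once, then retaliates
```

Two tit-for-tat players earn the cooperative maximum every round, while chronic defectors grind along at the punishment payoff; tit-for-tat loses only the single opening round against a pure defector before matching its behavior. This is the mechanism by which repetition rewards cooperation and punishes defection.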
Game theory has obvious implications for society’s ability to reduce greenhouse gas emissions and halt climate change. Peter John Wood of Australian National University has published an overview of research along these lines.41 The best solutions, he finds, involve the use of carrots and sticks—rewards for compliance and punishments for non-compliance—as well as linkage with other issues, such as trade, so that nations will find it in their best interest to participate.
As we saw in Chapter 5, global problems such as climate change, pollution, resource depletion, and species extinctions can probably ultimately be addressed only by shrinking the total human enterprise. But degrowth presents yet another prisoner’s dilemma: if one politician or political party proposes to shrink the economy and reduce population over time, thus calling for collective belt-tightening and sacrifice, another politician or party is likely to respond by saying that there is no need for such effort: just vote for me and we can all enjoy more growth and prosperity! Similarly, if one nation degrows its economy but others grow so that total production and consumption remain unchanged or even increase, nothing has been gained. Ecological economists have been puzzling over this particular prisoner’s dilemma for many years; the best solution they’ve found so far is to call for increased societal focus on happiness and well-being: perhaps we can increase the social factors that create feelings of life satisfaction while we engage in the otherwise contentious work of reducing population and consumption.42 I’ll return to this point in the next chapter.
Unfortunately, game theory tends to gloss over the historically constituted, real-world power relations between participants and assumes that people (and nations) are entirely rational. Even within this idealized theoretical framework, the more actors that are involved and the fewer the mechanisms for monitoring and sanctioning free riders, the lower the chances of cooperation. On the whole, our current international system, driven by profit and power, is not altogether conducive to building cooperation and discouraging free-riding.43
Nevertheless, life is full of prisoner’s dilemmas. And, in principle, we are perfectly capable of playing these games so that everyone wins. It all comes down to trust—the basis of social capital and horizontal power. Trust isn’t always rational, and time and effort are required to build it. In difficult times, trust is far more valuable than the material wealth to which we often aspire or cling.
Denial, Optimism Bias, and Irrational Exuberance
In this chapter we have seen that the ability to limit power is rooted in biology and has a long history in all human societies; further, engineers routinely find ways to incorporate power-limiting mechanisms in technologies of all sorts. I have proposed the optimum power principle—the observation that organisms often curb their power in the present as a way to maximize power over the long run—as an addendum to the maximum power principle. Even prisoner’s dilemmas have solutions. If the existential crises we surveyed in Chapter 5 are indeed the result of too much power, there is no reason (in principle, at least) why humanity cannot power down to solve them. So, given the capacity and innumerable opportunities for power moderation, why do we humans still appear to be running headlong toward a global catastrophe driven by power excesses and abuses?
The adaptive cycle tells us that temporary imbalances in nature and human societies are natural and inevitable; over time, rebalancing occurs. However, those temporary imbalances are sometimes particularly large; that is, the growth phase of the cycle can sometimes swing to extremes. Humanity’s access to fossil fuels has propelled us on growth and conservation phases that are utterly unprecedented in terms of the levels of power and population size that we have achieved. With so many opportunities for growth, we’ve tended to set aside ancient cultural attitudes and practices promoting self-restraint, even if doing so blinds us to the overwhelming likelihood of a cyclical collapse/release phase of unprecedented magnitude, and makes it harder for us to do things that would reduce the scale of the impending calamity.
But there’s more. We are inherently subject to a set of collective and individual psychological mechanisms that make it easy for a large power imbalance to appear, but difficult for us to recognize and minimize that imbalance before serious problems occur. These mechanisms take the forms of denial, optimism bias, our tendency to lie to ourselves and others, the Overton Window, our genetically-based pursuit of status, our addiction to novelty, our tendency to discount the future, and the lottery winner’s syndrome. Let’s look briefly at each of these, and see how and why they all tend to keep us from limiting our power excesses.
The phrase “climate denial” may trigger thoughts of efforts by fossil fuel companies to sow public doubt about the reality of global warming. These efforts are certainly real, but they have been successful largely because denial itself is a deeply entrenched human capacity. In their book Denial: Self-Deception, False Beliefs, and the Origin of the Human Mind, Ajit Varki and Danny Brower suggest that the awareness of our own mortality (which arose along with the development of language sometime in the Pleistocene) might have stopped human evolution in its tracks. That is, our expectation of personal extinction would have made us so depressed and so cautious that we probably wouldn’t have been able to compete successfully with other species, or other members of our own species who were not so burdened, if not for the simultaneous appearance of a fortuitous adaptation—our ability to deny death and other unpleasant realities.44 As we became aware of the inevitability of our own death, we quickly learned to deny that awareness so we could get on with our day-to-day business. Denial thus served an evolutionary function as an essential tool of terror management. Over time, our denial muscle probably strengthened—and it has arguably done so especially in recent decades.
If individual mortality is terror-inducing, coming to terms emotionally with collective death is utterly beyond us. Scientists have been aware of species extinctions for the past couple of hundred years, since the beginnings of modern biology and paleontology.45 We have therefore also become aware of the possibility of our own species’ extinction. That awareness has become more acute during the past 70 years or so, since the start of the global nuclear arms race. But human extinction is a subject few people wish to consider, let alone bring up in polite conversation (although apocalyptic novels and movies are becoming increasingly popular). While we know that each of us will eventually die, we implicitly count on the persistence of future generations, and the survival of human culture, in order to maintain our psychological equilibrium. The thought that the entire human enterprise, supporting all our collective dreams and accomplishments, could disappear in a cloud of smoke or an endless stream of carbon emissions is unbearable. So, we psychically bury the prospect of human extinction, even as we go about creating the means for its occurrence.
Denial of climate change is therefore more than just a political tool for maintaining corporate profits (though this it certainly is). It is also a collective coping mechanism.
The most common form of denial—whether of death or climate change—is mental compartmenting. We create a mental compartment for death and another for climate change; but there are also compartments for favorite old movies, recipes, opinions about politics, and so on. We aren’t literally denying the reality of death or climate change; it’s right there, in its compartment. Every so often we look in that compartment and feel fearful. But the amount of time we spend looking in any particular compartment is typically proportional not to its actual importance, but to its ability to satisfy our interests and emotional needs. Sooner or later an event (perhaps a visit to the doctor’s office) forces us to look into the death compartment; but by then the damage from years of smoking or other unhealthful habits is already done. Much the same will likely be true with climate change.
The individual and cultural coping mechanism of denial has a flipside—an optimism bias that again leads us to believe that we are less likely to experience a negative event than we actually are. Neuroscientist Tali Sharot, in her book The Optimism Bias: A Tour of the Irrationally Positive Brain, cites surveys and experiments showing that the phenomenon is real and pervasive.46 Its mirror image, pessimism bias, affects people suffering from clinical depression, but is otherwise rare. This natural tendency toward optimism has served an evolutionary purpose—it encourages us to take risks in order to reap rewards. But it also steeply increases our vulnerability and hobbles our ability to respond in the era of climate change and other converging crises.
Sometimes collective optimism bias feeds back on itself, resulting in mania, bubbles, and booms. As Scottish journalist Charles MacKay put it in his still-relevant 1841 book Extraordinary Popular Delusions and the Madness of Crowds, “Men … think in herds; it will be seen that they go mad in herds, while they only recover their senses slowly, and one by one.” Federal Reserve Board chairman Alan Greenspan used the phrase “irrational exuberance” to describe the dot-com stock market bubble of the 1990s, but many bubbles preceded that one, and more bubbles have followed, including the fracking frenzy of the 2010s. Collective manias spread and intensify because no one wants to miss out on the “next big thing.” We may rationally know that the boom can’t last forever, but we don’t want to be the one who gets left behind. In a sense, the Great Acceleration can be thought of as the grandest popular delusion in the history of our species: from the outset it was obvious that fossil fuels would eventually deplete, but we treated them as though they would last forever.
Both denial and optimism bias depend on deception, including self-deception. Yuval Noah Harari, in his popular book Sapiens, makes the point that our development of language back in the Pleistocene gave us the capacity to create myths and useful fictions. Language enabled us to talk about things that don’t exist—which is an essential ability if you want to design a new machine from scratch or create a new company. But we have gotten so good at creating fictional entities that the real world has become easy to deny and ignore. As Harari puts it:
Ever since the Cognitive Revolution, Sapiens have thus been living in a dual reality. On the one hand, the objective reality of rivers, trees and lions; and on the other hand, the imagined reality of gods, nations and corporations. As time went by, the imagined reality became ever more powerful, so that today the very survival of rivers, trees and lions depends on the grace of imagined entities such as the United States and Google.47
Collective denial and optimism bias make it difficult to talk to friends and relatives about the looming climate crisis, resource depletion, and other trends that imperil our future. We are sensitive to one another’s subtle cues, and change the subject when it’s clear that the discussion has touched a nerve. The same is true on a national level: there are some things we just don’t want to talk about. The range of acceptable public discourse is called the Overton Window, named for Joseph P. Overton, who stated that an idea’s viability depends mainly on whether it falls within this range of acceptability, not on its inherent truth or usefulness. According to Overton, this window frames the range of policies that a politician can recommend, or ideas she can talk about, without appearing too extreme to gain or keep public office.
As a result of decades of sustained collective effort, the scientific community has brought climate change within the Overton Window—at least some of the time, and in most nations. However, one important and necessary response to climate change is still well beyond the window—a deliberate policy of degrowth. As discussed in Chapter 5, continued annual growth in population and consumption makes it ever harder for the world’s nations to reduce greenhouse gas emissions in order to minimize climate change. An obvious solution would be to reduce population and consumption. That would, of course, pose a challenge in a world that has come to see growth as essential to the economic health of nations. Nevertheless, the logical necessity of degrowth is inescapable, and a few economists have proposed ways of dealing with the difficulties it would pose.48 As a gauge of degrowth’s distance from the window, consider this fact: the United Nations’ Intergovernmental Panel on Climate Change (IPCC) periodically produces hundreds of models to show how government policies of various kinds would impact emissions and global warming. But degrowth has never been included among those policies.
Incidentally, the truth-telling ability of Greta Thunberg, the young Swedish climate activist who was Time magazine’s Person of the Year in 2019, is (in her view, and that of some psychologists as well) partly due to her autism—which makes her less aware of social cues and hence less prone to hypocrisy. If no one but Thunberg has had the courage to tell world leaders to their faces, “We are in the beginning of mass extinction, and all you can talk about is money and fairy tales of eternal economic growth,” that’s largely because people with autism are less aware of the Overton Window.49
Another psychological mechanism making it difficult for us to rein in our powers so as to avert global crisis is the pursuit of status. Throughout the evolution of complex organisms, notably vertebrates, status has served as a way of minimizing the costs of competition. Animals compete for mates and food, but competition carries costs. Signals of status within a species establish which individuals are more or less likely to be successfully challenged, so overall there is less energy wasted in competition. Tendencies among modern humans to acquire status symbols—expensive cars, clothes, and houses—are therefore deeply rooted in evolution.50 If we’re told that big, powerful automobiles and jet vacations are sealing the fate of future generations, that message has to overcome the lure of status in order to get our attention and change our behavior.
We humans are also wired to respond to novelty—to notice anything in our environment that is out of place or unexpected and that might signal a potential threat or reward. Most types of reward increase the level of the neurotransmitter dopamine within the brain. Experiments have found that if an animal’s dopamine receptor genes are removed, it explores less and takes fewer risks—and without some exploration and risk taking, individuals have reduced chances of survival. But the brain’s dopamine reward system, which evolved to serve this practical function, can be hijacked by addictive substances and behaviors. This is especially problematic in a culture full of novel stimuli specifically designed to attract our interest—such as the hundreds of advertising messages the average child sees each day.
Addictions to shopping or to acquiring status symbols are hard to overcome because they are reinforced by our innate brain chemistry. They can be as hard to defeat as a drug dependency. If our environment is filled with potential dopamine reward system hijackers (which it is, primarily due to cheap energy and profit-maximizing consumer capitalism, magnified by the reward systems built into social media), then it stands to reason that more of us are likely to end up spending much of our lives chasing after momentary feel-good experiences that soon turn sour. That’s why our society is overwhelmed with high levels of drug, gambling, sugar, television, social media, and pornography addiction.
As we’ve seen in this chapter, human societies have learned to tame biologically rooted reward-seeking behavior with culturally learned behaviors geared toward self-restraint and compassion for others. Prudence, thrift, and the willingness to sacrifice on behalf of the community are functions of the neocortex—the part of the brain unique to mammals; and even though they are rooted in evolutionary imperatives, they are also at least partly learned by way of example. Most pre-industrial human societies expended a great deal of effort to provide moral guidance, often through myths and stories, to foster pro-social behavior. When a culture ceases providing this needed educational effort—either because self-restraint and empathy are no longer seen as important, or because the society is so overcome with basic survival challenges that it simply doesn’t have the resources to devote toward educating the next generation—then these values can become seriously eroded.
Consumerism, the economic system that was invented to solve the problem of overproduction, hijacked our brains’ reward pathways for status and novelty, and it has also deliberately eroded our learned social adaptations for restraint and compassion. It reduced the perceived social value of thrift and sacrifice on behalf of community in order to promote the ideal of individual gratification through consumption. Again, this system was put in place with what industrial and governmental leaders regarded as the best of intentions—that is, with the hope of expanding markets, creating jobs, maximizing profits, and increasing tax revenues so that governments could provide more services. But consumerism makes it harder for us to address converging global crises.
We also have an innate tendency, when making decisions, to give more weight to present threats and opportunities than to future ones. This is called discounting the future—and it makes it hard to sacrifice now to overcome an enormous future risk such as climate change. The immediate reward of vacationing in another country, for example, is likely to overwhelm our concern about the greenhouse gas footprint of our airline flight. Multiply that tendency by billions of individual decisions with climate repercussions, and supercharge it with a systemic drive to maximize each company’s quarterly profit margins, and you can see why it’s difficult to actually reduce total greenhouse gas emissions.
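The economists’ formalization of this tendency is the discount rate, and a quick calculation shows how steeply it devalues the future. The function and figures below are my own illustrative assumptions (the text specifies no rate or dollar amounts), but they convey the arithmetic:

```python
# A minimal sketch of exponential future discounting. The 5% rate and
# the dollar figures are illustrative assumptions, not from the text.
def present_value(future_value, annual_rate, years):
    """How much a future cost or benefit is 'worth' in today's terms."""
    return future_value / (1 + annual_rate) ** years

# At a 5% annual discount rate, $1 trillion of climate damage 50 years
# from now carries the decision weight of roughly $87 billion today --
# one reason near-term rewards so easily outweigh distant catastrophes.
print(round(present_value(1e12, 0.05, 50) / 1e9))  # ~87 (billions of dollars)
```

The higher the discount rate a decision maker implicitly applies, the closer a distant catastrophe’s present weight falls toward zero, which is the formal shadow of the psychological bias described above.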
To make matters even worse, many of us in wealthy nations suffer from lottery winner’s syndrome.51 Sociological studies of lottery winners show that many actually experience a reduction in happiness and well-being: they’re overwhelmed by choice and excess, their relationships become clouded by jealousy and suspicion, and they often become more socially isolated and feel less empathy toward others. Some end up gambling their money away, divorcing, or turning to drugs or alcohol. In a sense, the people who benefitted most from the fossil-fueled Great Acceleration (i.e., middle and upper class citizens) are like lottery winners: they have collectively experienced a vast and rapid increase in wealth. They have been encouraged to think that they must somehow deserve this level of wealth, and their sense of empathy toward poorer communities—both domestically and globally—has shriveled. They may also feel more isolated, and are more likely to pursue high-risk behaviors with a high potential reward so as to extend and repeat the initial high they got when they realized they had the winning ticket.
Two final barriers to collective self-limitation in the modern world need to be mentioned, and they are perhaps the most obdurate of all; they are less psychological than structural in nature. First: nationalism and patriotism tend to hide our common humanity behind divisive notions of global rivalry and national chauvinism. Hence governments and peoples insist on mistrusting each other, and on refusing to cooperate in rationally facing our collective overshoot predicament.
Second: any collective effort to degrow the economy in order to halt climate change, or for any other reason, must confront the relentless logic of capital expansion. Interest must be paid on existing debt to avoid default; and expectations of higher incomes must be met if politicians are to maintain their approval ratings. In short, the maintenance and reproduction of the system require ever more accumulation of capital and social power.
These two factors support and reinforce each other (even if they occasionally come into conflict). In tandem, they create a cultural matrix that encourages feelings, attitudes, urges, beliefs, actions, and ideas that promote accumulation and growth, while discouraging those that undermine accumulation and growth. Anyone who wants to rein in the depredations of a system that has already grown too big to be maintained over the long term must swim upstream against this powerful current.
Of course, not everyone is in denial, not everyone suffers from optimism bias, and not everyone discounts the future and suffers from the lottery winner’s syndrome. Some of us, like Greta Thunberg, are even immune to the Overton Window. But in all, we have formidable barriers to overcome if we are collectively to understand and respond to the global crises that our way of life is provoking.
* * *
The maximum power principle would seem to suggest there’s little a species like ours can do if it faces problems created by the accumulation of too much power. It’s in our genes, after all, to gain and apply as much power as we can. If we don’t do it, the next organism (or person, or country, or company) will, and we will fall by the wayside in life’s evolutionary struggle.
As we’ve seen in this chapter, self-limitation is in fact widespread and essential in nature. The maximum power principle is a true and useful concept, but it requires a supplement—the optimum power principle—which adds the element of time. Organisms routinely limit power in the present so as to have and use more of it in the long run. There is plenty of precedent for self-regulation in nature and human history.
Nevertheless, as a result of the fossil-fueled Great Acceleration, we have gotten so used to growth in population and consumption (two self-reinforcing feedbacks) that we think such growth is normal and essential. Potential checks on power, which could stop or reverse not only climate change but economic inequality, resource depletion, and other deepening crises, have a lot of momentum to overcome.
Given our current levels of denial, and the vested interests of our elites, the overwhelming likelihood is that humanity is in for a release phase (in terms of the adaptive cycle) that will be unprecedented in its severity. However, that could ultimately help clear the way for a different way of being—the second solution to the Fermi Paradox discussed at the start of this chapter. We could learn to focus our attention on beauty and happiness rather than acquisition of material wealth. We could excel in self-control, rather than seeking to further control nature and other people.
Will we flame out or learn to live within limits? In the next and final chapter, we’ll look to the future of humanity, and the future of power.