Power in the Anthropocene: The Wonderful World of Fossil Fuels
Civilization is the economy of power, and our power is coal.
― Justus von Liebig
Every basket is power and civilization. For coal is a portable climate. It carries the heat of the tropics to Labrador and the polar circle; and it is the means of transporting itself whithersoever it is wanted. Watt and Stephenson whispered in the ear of mankind their secret, that a half-ounce of coal will draw two tons a mile, and coal carries coal, by rail and by boat, to make Canada as warm as Calcutta, and with its comfort brings its industrial power.
― Ralph Waldo Emerson
Power, speed, motion, standardization, mass production, quantification, experimentation, precision, uniformity, astronomical regularity, control, above all control—these became the passwords of modern society in the Western style.
― Lewis Mumford
Then the coal company came with the world’s largest shovel
And they tortured the timber and stripped all the land
Well, they dug for their coal till the land was forsaken
Then they wrote it all down as the progress of man.
― John Prine, “Paradise”
Even with the advantages of tools and language, it took Homo sapiens 300,000 years to grow its population to one billion (which it achieved around 1820). It took merely another 200 years (from 1820 to 2020) to reach almost eight billion. This last unprecedented growth spurt—let’s call it the Great Acceleration—also saw the overwhelming majority of all the environmental damage that humans have ever caused.1 Whereas other developments in social evolution took thousands of years to come to fruition, the Great Acceleration has occurred in a comparative eyeblink of time.
The consequences for the planet of the Great Acceleration have been so stark that, in the opinion of many scientists, we humans have initiated a new geological epoch—the Anthropocene. Geologists in the distant future (assuming there are any) who look back on rock strata that began forming since the start of the Great Acceleration will note a series of sudden, unmistakable shifts: signs of a substantial alteration of atmospheric and ocean chemistry, evidence of widespread extinctions of animals and plants, higher radioactivity (from atomic weapons tests in the 1950s, if not from nuclear war), and the presence of widely dispersed human artifact traces (notably plastics). The fact can no longer be disputed or ignored: humans are driving unmistakable changes to global natural systems. And those changes are speeding up.
In short, the past two centuries have seen the most rapid, wrenching cultural, ecological, and even geological reordering in the history of our species. And, if the current rate of acceleration continues, it will also constitute a turning point for all life on Earth.
There are two popular explanations for this whirlwind of transformation. One centers on science and technology—which together kicked off a self-reinforcing feedback of innovation, in which each new tool or discovery created conditions in which even more new tools and discoveries could emerge. The other explanation centers on capitalism, recounting the story of how a flood of investment capital and re-invested profits drove innovation upon innovation. Both explanations are partly right: without science, technology, and capital investment, the Great Acceleration would not have occurred. But both explanations are also largely wrong.
There is a third ingredient to the Great Acceleration that technologists and economists often miss or take for granted, and that is the energy derived from fossil fuels. If coal, oil, and natural gas had somehow not existed, we would be living very differently today; far fewer of us would be alive; and our impact on the biosphere, while likely significant, would be dramatically smaller than is currently the case. While I am often described as an advocate for reducing society’s reliance on fossil fuels, I believe we must give credit where it’s due, and I also believe that fossil fuels deserve most of the credit for the Great Acceleration.
Without energy we can do nothing; we are powerless. In early phases of human history, energy came from food, animal muscle, firewood, wind, and water. But there were limits to how much energy we could derive from these sources, and by the mid-19th century we were pressing against those limits.
Fossil fuels changed everything. They presented us with sources of power that were concentrated, storable, and portable. A single barrel of oil contains chemically stored energy that can do the same work as roughly 25,000 hours of human muscle-powered labor. The ability to obtain the equivalent of several years of labor-energy for $55 (the average 2019 per-barrel oil price) largely explains the widespread effort, during the past century or two, to mechanize nearly every productive process. Further, for many decades, finding and extracting fossil fuels required comparatively little energy and effort; the energy returned on the energy invested in oil exploration and production was often a spectacular 50:1 or higher (we’ll return to the subject of energy profitability shortly). With so much energy available so cheaply, the limits to what we could do seemed to flee before us.
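For a rough sense of the arithmetic behind that comparison, the sketch below recomputes it from two assumed inputs that are not in the text: about 6.1 gigajoules of chemical energy per barrel of crude, and a sustained human work output of about 75 watts. The result lands in the same range as the 25,000-hour figure cited above.

```python
# Back-of-the-envelope check of the barrel-to-labor comparison.
# Assumed figures (not from the text): ~6.1 GJ per barrel of crude,
# ~75 W of sustained useful mechanical output from a human laborer.

BARREL_JOULES = 6.1e9        # chemical energy in one barrel of oil
HUMAN_WATTS = 75             # sustained human work output
SECONDS_PER_HOUR = 3600

hours = BARREL_JOULES / (HUMAN_WATTS * SECONDS_PER_HOUR)
work_years = hours / 2000    # assuming 2,000 work-hours per year

print(f"{hours:,.0f} hours of human labor per barrel")  # ~22,600 hours
print(f"about {work_years:.0f} work-years for one barrel")  # ~11 years
```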
The end of slavery, the triumph of modern democracy, motorized transportation, the industrial food system, the growth economy, urbanization, consumerism, and a dramatic expansion of the middle class all resulted from the application of ever more fossil energy to resource extraction, manufacturing, and distribution. Population grew far faster than in any previous phase of human history, due to the greater availability of cheaper food, and to better public health enabled by pharmaceuticals and sanitation chemicals often made with fossil fuels. At the same time, warfare became more mechanized and deadlier. A proverbial genie was released from its bottle, and seemingly nothing could put it back.
In this chapter we will explore the extraordinary powers humanity has derived from fossil fuels, and how those powers have changed us and our circumstances. But before we do, we must set the stage with a short discussion of how humans obtained and used energy before the Great Acceleration, so that we can appreciate just how much of a difference fossil fuels have made.
It’s All Energy
As we saw in Chapter 1, every organism lives by harnessing a natural flow of energy. Also, recall that physicists define power as the rate of doing work or transferring energy. Without energy, there can be no power.
Ironically, obtaining energy requires energy. We have to expend energy to hunt prey, to grow and harvest crops, to cut firewood, or to drill an oil well. The biological success of any species, and of any individual within a species, depends on its ability to obtain more energy than it expends in its energy-gathering efforts. The relationship between energy obtained and the energy expended in an energy-harvesting activity can be expressed as a ratio—Energy Returned on Energy Invested, or EROEI, also known as the energy profit ratio. I’ve already noted that the energy profit ratio for oil exploration and extraction historically was 50:1 or higher. Anthropologists and physicists have calculated EROEI figures for a wide range of human and animal energy-obtaining activities. Unfortunately, while energy profitability is literally a life-or-death issue, and should play a role in public discussions about our energy and food choices, the subject is complex: it is easy to skew the numbers one way or the other by ignoring an important energy input or by overestimating one (output energy, whether barrels of oil or the caloric content of bushels of wheat, is usually easier to measure directly). Nevertheless, despite the inherent uncertainties in EROEI figures, I’ll reference a few of them below so that the reader can make rough mental comparisons.
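As a ratio, EROEI is trivial to compute; the difficulty lies entirely in deciding what counts as an input. The minimal sketch below uses invented, purely illustrative numbers to show how overlooking a single embodied-energy input can inflate the ratio substantially, which is the skewing problem just described.

```python
# Minimal EROEI (energy profit ratio) calculation. All figures here are
# hypothetical placeholders, not measured data.

def eroei(energy_out_gj, energy_inputs_gj):
    """Energy Returned on Energy Invested: output divided by total input."""
    return energy_out_gj / sum(energy_inputs_gj.values())

inputs_gj = {
    "drilling_fuel": 10.0,      # direct fuel burned at the well
    "steel_and_cement": 6.0,    # embodied energy in materials
    "transport": 4.0,
}
output_gj = 1000.0              # energy in the oil produced

print(f"all inputs counted: {eroei(output_gj, inputs_gj):.0f}:1")  # 50:1
inputs_gj.pop("steel_and_cement")   # quietly drop one embodied input...
print(f"one input ignored:  {eroei(output_gj, inputs_gj):.0f}:1")  # ~71:1
```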
For non-human animals, energy inputs other than ambient heat are nearly all in the form of food. Of course, humans likewise derive energy from food, but we have also found ways of obtaining and using non-food energy (recall the discussion in Chapter 2 of “The Fire Ape”). The expansion of humanity’s powers has thus entailed two fundamental strategies: intensifying food production, and increasing our harvesting and harnessing of non-food energy. Let’s consider pre-fossil-fuel food intensification first.
Hunting and gathering was often an easy way of life with good energy profitability. While EROEI estimates for foraging range widely due to the varied nature of the environments people inhabited and food sources they accessed, Vaclav Smil estimates that for gathering roots, “as many as 30-40 units of food energy were acquired for every unit expended.” For other plant gathering, he figures that typical returns were 10:1 to 20:1. Hunting large animals again yielded a ten-to-twenty-fold energy profit, while hunting small animals was often hardly worth the trouble from an energy standpoint.2 As a rough confirmation of these estimates, a study in the 1960s by anthropologist Richard Lee showed a 10:1 energy profitability for the overall food procurement efforts of the !Kung bushmen of the Kalahari.3
The growing of crops in early agricultural societies also yielded an energy profit, which was again highly variable. In some famine years, not enough grain could be produced even to provide seed for next year’s planting, so no net food energy yield resulted. However, in good years surplus food could be set aside. Once all energy inputs were accounted for, on average the EROEI of traditional agriculture was likely significantly less than 10:1, and possibly lower than 3:1.4 As we saw in Chapter 3, agriculture wasn’t adopted in order to make life easier, as it would have been if it had a high energy profit ratio; rather, it was taken up largely out of necessity in order to support a bigger population.
The route to further food production intensification branched into five interweaving pathways: working harder, using animal labor to supplement human labor, irrigation, fertilization, and growing a greater variety of crops. Let’s explore each of these briefly.
Hard work was inherent in agriculture itself: plowing, harrowing, planting, hoeing (i.e., weeding), harvesting, threshing, winnowing, hauling, and managing stored grain were seasonal activities that required long days of tiring effort. Intensification of food production typically meant cultivating more land, or adding new tasks, such as managing draft animals, to the already long list of farming chores.
Animal labor could supplement human labor for some of the most physically demanding farming activities, such as plowing and hauling. Species and breeds of animals varied in terms of how much power they could apply, how long they could work, and what kinds and quantities of food they ate. For example, a horse could provide up to twice as much useful power as an ox, but the ox could graze and eat straw, while the horse required grain that had to be grown and harvested. Human labor costs and the requirement to grow additional food to feed draft animals (in late-19th century America, a quarter of all agricultural land grew food for horses) reduced their net advantage, but, if managed well, draft animals more than paid for themselves in energy terms.
Irrigation increased yields, sometimes dramatically, in arid areas. However, lifting water from even the shallowest wells entailed work. Human-powered irrigation demanded especially tedious and tiring effort. Animal muscles were useful for this purpose, and were employed for moving water beginning in the earliest civilizations (though not in the Americas). For example, the ancient Greeks moved water by means of a system of clay pots on loops of ropes, powered by a blindfolded animal walking in a circle.5 Some places (e.g., China and Central America) had the problem of too much water, as the result of seasonal flooding. Growing crops under these circumstances required drainage—which again entailed energy inputs. But water-moving efforts could pay off: irrigation and drainage could return up to 30 times their food-energy cost through increased agricultural yields.6
Fertilization was needed due to the depletion of nitrogen, potassium, phosphorus, and trace minerals in soils that were continually cultivated. Nitrogen was the element most quickly lost, and since it determines grain size and protein content, most fertilization systems focused on it. Every imaginable sort of organic matter was used (food waste, straw, stalks, husks, chaff, and leaves were collected and composted), but manures were the primary traditional nitrogen fertilizers. In societies ranging from ancient China to 19th century France, efforts were made to divert and compost human urine and excrement for this purpose—as well as animal wastes, which tended to be more plentiful. Because nitrogen in manures and composts was easily lost to the atmosphere, large quantities of traditional fertilizers had to be applied in order to achieve significant benefits. Thus, significant amounts of farm and urban labor had to be devoted to the unpleasant activities of collecting, hauling, and applying human and animal wastes. It was only the pivotal importance of nitrogen, and the looming potential loss of productivity without it, that could have motivated such efforts.
The employment of greater crop variety provided another, and often more effective, solution to the nitrogen dilemma. The ancient Romans learned that rotating grain crops with legumes like lentils or vetch replenishes the soil. Native Americans similarly planted the “three sisters” (maize, beans, and squash) together, having long ago discovered the synergistic benefits of what modern organic gardeners call “companion planting.” However, it was only after 1750 that European farmers began systematic legume-grain crop rotation. The payoffs in terms of increased yields were dramatic, doubling and even tripling productivity and thus adding to Europe’s general affluence.
Pre-industrial sources of non-food energy included human muscle, draft animals, firewood (and other biomass), wind, and water power. Each of these was slowly adapted through better organization (finding ways to link many small power sources together, such as by harnessing teams of horses); and through technical innovations that focused power or made energy transfer processes more efficient (such as the invention and improvement of harnesses, gears, pulleys, wheels and axles, levers, and inclined planes). As time passed, these early power sources and machines were applied to an increasing variety of tasks.
Draft animals were useful not just for agriculture, but for a range of other activities as well, including hauling, human transport, moving water, and providing motive power for mills. As already mentioned, opportunities for the employment of animal power had to be balanced against the cost of managing the animals and providing them with food and living space. By 1900, Britain’s horse population stood at 4.5 million, and the nation was forced to import grain to feed so many animals. Also, effort was needed in collecting animal feces, not only to benefit agriculture, but also to manage pollution: at the turn of the 20th century, the clogging of the streets of cities like New York, London, and Paris with organic wastes from horses was considered one of the top urban problems.
From ancient times, wood was valued both as fuel and building material. It could be fashioned into tools—from scythe handles to musical instruments—as well as houses and ships. As a fuel, it had the advantage of relatively high energy density (a kilogram of seasoned firewood contains about 15 megajoules of energy; if translated directly to electrical energy without conversion losses, that would be enough to run a large television most of the day). Wood was also widely available—at least in some countries. And its heat, when it was burned, could be used for warming living spaces, heating water, cooking, and even for making glass.
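A quick unit-conversion check of that television comparison, assuming (a figure not in the text) that a large set draws about 200 watts:

```python
# Converting the stated energy density of firewood into viewing hours.
# The 15 MJ/kg figure is from the text; the 200 W television draw is an
# assumption for illustration.

WOOD_MJ_PER_KG = 15
TV_WATTS = 200

kwh_per_kg = WOOD_MJ_PER_KG / 3.6            # 1 kWh = 3.6 MJ
viewing_hours = kwh_per_kg / (TV_WATTS / 1000)

print(f"{kwh_per_kg:.1f} kWh per kilogram of wood")  # ~4.2 kWh
print(f"about {viewing_hours:.0f} hours of viewing")  # ~21 hours
```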
Wood could be partially combusted in an oxygen-poor chamber to produce charcoal, which burned much hotter. This was key for developments in metallurgy, leading to the use of more iron and steel, along with copper, lead, tin, silver, and gold. Superior iron and steel in turn drove improvements in armor, weapons, cooking implements, horseshoes, nails, plowshares, gears, pulleys, and other tools. The downside of charcoal was that a great deal of wood had to be used to produce a modest amount: under average production conditions, a ton of wood yields a quarter-ton of charcoal. In late medieval Europe, charcoal production was a major industry employing hundreds of thousands, and was a significant cause of deforestation, especially in and around the ancient forests near the center of the continent.
The very advantages of wood led to its overuse. Between the years 400 and 1600, Europe’s forest cover shrank from 95 percent of land to only 20 percent, and by the end of that period wood was so hard to come by that blast furnaces could only operate every third year. As wood became scarce, not only did its price rise, but its effective energy value declined: traveling by horse cart for many miles to obtain a load of wood might expend as much energy as the wood contained. Today, this tradeoff still limits wood-fired electricity power-generation plants: if they’re not located close to the forest from which they derive their fuel, transport energy costs reduce the operation’s net energy yield—and over-harvesting can cause the forest boundary to gradually retreat beyond the range of overall energy profitability.7
Other kinds of biomass were also burned for energy, including straw, dung, and peat for heat; and oil, wax, and tallow for illumination. By the early 19th century, parts of Europe faced fuel scarcity; however, eastern North America, endowed with enormous forests, avoided that problem: by the time deforestation was becoming an issue there in the late 19th century, fossil fuels (which the continent also possessed in abundance) were already coming into use.
The use of wind power to move ships via sails started at least 8,000 years ago in Mesopotamia. Sails and rigging went through a long process of refinement for greater efficiency and control, leading eventually to the tall and elegant five-masted clipper ships of the late 19th century. Due to the unpredictability of winds, sailing has always entailed skill and luck, and even the most sophisticated sailing ships can be stalled at sea for days or even weeks at a time for lack of a breeze.
On land, windmills began to be used for grinding grain and pumping water 1500 years ago, with widespread adoption of this technology occurring in the Middle East, China, Europe, India, and Central Asia. In North America, several million windmills dotted 19th-century farms and towns, where they were used mostly for pumping water. In England, Germany, and the Netherlands, during the same period, tens of thousands of windmills provided up to ten horsepower each when working at full capacity, powering mills and pumping water.
Water power was used for grinding grain from the time of ancient Greece, and was also adopted for this purpose in China and Japan. The Romans contributed gearing, which enabled a vertical water wheel to drive millstones turning up to five times faster than the wheel itself. Water was used as motive power in early textile factories both in Britain and in the American Northeast, where rivers provided both a means of running looms and a way of moving raw materials and finished goods.
While watermills and windmills were at first primarily employed for grinding grain, they were gradually adapted, via gears and belts, to the operation of a variety of tools such as saws and looms, and to the accomplishment of an array of tasks that included making paper and crushing ores. The economic transformation brought by these powered machines, starting in the late 17th century, is sometimes called the First Industrial Revolution. While its impacts were minor in comparison with those of the fossil-fueled industrialization process that would follow, the latter depended in large measure on the technical innovations that had previously harnessed the power of water and wind.
By the early 19th century, the limits of pre-fossil-fuel industrialism had not been reached, but were within view. Forests in Europe and North America were shrinking or disappearing. There were only so many rivers and streams that could provide power, and available materials and technology limited the height and efficiency of windmills, thus constraining the amount of power they could harness. Food production had expanded, but limits to soil fertility were confounding agricultural scientists (a temporary reprieve appeared with the discovery of huge deposits of guano—nitrogen-rich excreta of sea birds—on islands off the coasts of Chile and Peru, but these were soon mined and exhausted).
Through the centuries, the shape, size, composition, and operations of societies had adapted themselves to slowly diversifying energy sources. As world population arrived at one billion around the year 1820, there seemed little possibility that this number could be doubled. In most nations, 75 to 90 percent of people worked at farming, thus limiting the size of the manufacturing and managerial workforce. Peking, China, the world’s most populous city at the time, held 1.1 million people, while London’s population was roughly a million, Paris held 600,000, and New York encompassed just 100,000. Within these relatively advanced urban centers, most people lived in poverty. A tiny fraction of society enjoyed great wealth, and was able to fund the production of art, music, and literature of the highest quality, some of which could be enjoyed by the rest of society in public galleries and concerts. But for even the richest person, the pace of travel was limited to the speed of sailing ships and horses. Energy use per capita then stood at about 20 gigajoules per year—perhaps twelve times the energy captured and used (via food) by a theoretical pre-human primate of similar body weight, and roughly five times what could be captured and used by a hunter-gatherer using fire but having no agriculture or domesticated animals.8
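The multiples in that last sentence imply baseline figures that are easy to back-compute. The sketch below does so, and converts them to food-calorie equivalents for intuition; only the 20-gigajoule figure and the two multiples come from the text, and the kcal conversions are my own illustration.

```python
# Back-computing the energy baselines implied by the per-capita figures:
# ~20 GJ/person/year around 1820, said to be twelve times a pre-human
# primate's energy capture and five times a fire-using hunter-gatherer's.

KCAL_TO_J = 4184
GJ_1820 = 20.0

primate_gj = GJ_1820 / 12     # implied primate baseline
forager_gj = GJ_1820 / 5      # implied hunter-gatherer baseline

def kcal_per_day(gj_per_year):
    return gj_per_year * 1e9 / 365 / KCAL_TO_J

# ~1.7 GJ/yr (~1,100 kcal/day) and ~4 GJ/yr (~2,600 kcal/day)
print(f"primate:  {primate_gj:.1f} GJ/yr = {kcal_per_day(primate_gj):,.0f} kcal/day")
print(f"forager:  {forager_gj:.1f} GJ/yr = {kcal_per_day(forager_gj):,.0f} kcal/day")
```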
In short, the energy regime of the pre-fossil-fuel world was renewable, but it was not necessarily sustainable. Europe had already become crowded with humans, whose demands for food- and non-food energy were imperiling ecosystems. For North America, a similar quandary lay within sight, as Native peoples were herded onto reservations so their land could be divided, deforested, farmed, mined, and built upon.
The Coal Train
When the ancient Romans invaded Britain and began to survey their new colony, they found an unfamiliar velvety black stone that could easily be carved into attractive jewelry. It also had the odd characteristic of being flammable. There is no record from that time of coal being burned as a heat source by the Britons or Romans; however, it was used for that purpose in China at roughly the same period. Around the year 1300, Marco Polo conveyed to his Venetian readers some of the oddities of the exotic Chinese civilization, among which was “…a sort of black stone, which they dig out of the mountain, where it runs in veins. When lighted, it burns like charcoal, and retains fire much better than wood.” The Chinese were using coal to heat water for their numerous stoves and baths; and while the country also had an abundance of wood, that fuel “…must soon prove inadequate to such consumption; whereas these stones may be had in abundance, and at a cheap rate.”9 (See sidebar, “An Arrested Industrial Revolution in China.”)
However, let’s return our attention to Britain, because that’s where much of the subsequent story of coal would unfold in all its dark glory. By the 13th century, forests were retreating quickly from British towns as trees were cut for firewood, for construction materials, and for shipbuilding. “Sea coal” (so called because it was often found on beaches) began to be used as fuel instead of wood. But it was considered inferior to wood because it gave off foul, sulfurous smoke when burned, and there were only so many places where it could be found.
Coal was curious stuff. Early users even thought of it as living, assuming it would grow like a plant if given water and time. As 20th-century science would show, coal started forming toward the end of the Carboniferous period, roughly 320 million years ago. Phases of formation were intermittent, lasting until as recently as 50 million years ago. Climate change probably played a key role in the process: when glaciers melted, waters rose and buried fallen trees and giant horsetail plants, which, instead of decomposing, decayed only partly, leaving behind black carbon. Miners and geologists often found fossils in early coal mines, ranging from tree-like ferns to footprints of cow-sized amphibians.10 These added to the lore and lure of the fuel, conjuring visions of an ancient world of giants, along with speculation about how this lost world might fit into the Bible’s then-undisputed historical narrative.
Britain grew to be the epicenter of the coal revolution for two main reasons. First, coal was, as we have seen, adopted early on out of necessity, as domestic forests were being decimated, and as people needed an alternative to wood. Second, it came into wide use in Britain of all places by lucky accident, since the nation was endowed with significant coal resources, including coal that was close to the surface and that could easily be dug out with simple tools. When Britons started using coal for heat, in the early medieval period, their island was a cultural backwater in comparison with China, India, the Middle East, and some other areas of Europe. As the British gradually adapted themselves to using coal, their nation became not only a global economic power, but also a center for the development of science and industry.
Increasing demand for coal drove technological developments that in turn drove still more coal demand, in a self-reinforcing feedback loop. When surface coal resources were exhausted, coal mines were opened, and these gradually deepened. But as miners neared the water table, mines tended to fill with water, requiring bucket brigades to empty them temporarily so that mining could proceed. A pump was needed that could move large amounts of water quickly. In the 17th century, Denis Papin had experimented with a toy model of a steam engine, and Thomas Savery later built a small and extremely inefficient steam-driven pump. By 1712, Thomas Newcomen had introduced a 3.5 kilowatt engine to pump water from mines; however, less than one percent of the heat trapped in the coal that fired the engine was converted to work. In 1769, James Watt gained a patent for a new steam engine design that raised efficiency dramatically. Now the steam engine was a practical device that could be used not only to pump water from coal mines, but to provide motive power for a wide range of other projects.
Transporting coal presented another challenge. Mine operators experimented with systems of rails to hold horse-drawn wagons on tracks as they hauled coal from mines to canals or ports, where it could be loaded on barges or ships for the journey to London and other industrial centers. George Stephenson had the idea of combining a steam engine with rails, thus creating the first railroad. Now an engine running on coal could also haul coal across the country. Soon coal-burning steamships were plying the seas, and factories were employing coal to make steel, glass, and a burgeoning array of other materials and products. By the mid-19th century, Britain was burning over half the world’s annual coal output to power an industrial economy whose sourcing of raw materials and distribution of products stretched around the globe.
With the discovery of coal-tar dyes in 1856, coal seeded the beginnings of the chemical industry. Not only were nearly all early synthetic dyes derived from coal tar, but so were chemical compounds like carbolic acid, TNT, and saccharin. And new chemical corporations like IG Farben grew rich providing them.
Coal could be gasified by heating it in enclosed ovens with an oxygen-poor atmosphere. The result was a mixture of hydrogen, methane, carbon monoxide, and ethylene, which could be burned for heat and light. Called manufactured gas or town gas, the product was fed into a network of pipes to provide street lighting. The first gas utility, the London-based Gas Light and Coke Company, incorporated by royal charter in April 1812, would set the pattern for private utility monopolies that still dominate the distribution of gas and electricity in many nations.
Ironically, coal was now lighting cities at night while also cloaking them in smoky gloom for weeks or months at a time. The following passage from The Smoke of Great Cities by David Stradling and Peter Thorsheim conveys the atmosphere of coal towns:
One visitor to Pittsburgh during a temperature inversion in 1868 described the city as “hell with the lid taken off,” as he peered through a heavy, shifting blanket of smoke that hid everything but the bare flames of the coke furnaces that surrounded the town. During autumn and winter this smoke often mixed with fog to form an oily vapor, first called smog in the frequently afflicted London. In addition to darkening city skies, smoky chimneys deposited a fine layer of soot and sulfuric acid on every surface. “After a few days of dense fogs,” one Londoner observed in 1894, “the leaves and blossoms of some plants fall off, the blossoms of others are crimped, [and] others turn black.” In addition to harming flowers, trees, and food crops, air pollution disfigured and eroded stone and iron monuments, buildings, and bridges. Of greatest concern to many contemporaries, however, was the effect that smoke had on human health. Respiratory diseases, especially tuberculosis, bronchitis, pneumonia, and asthma, were serious public health problems in late-nineteenth-century Britain and the United States.11
Coal mining, especially in its early days, was grimly unhealthy. Many miners succumbed to accidents resulting from asphyxiation by accumulated gas, as well as from explosions, fires, and roof collapses, and most eventually suffered from respiratory ailments, including pneumoconiosis, or black lung disease. Meanwhile, mining often polluted water and air, and degraded forests, streams, and farmland.
Sidebar 18: An Arrested Industrial Revolution in China
America at first lagged behind Britain in coal usage. But the United States was cutting and burning its eastern hardwood forests at a furious pace, and it happened to have abundant underground coal supplies—even larger ones than Britain, as it turned out. By 1885, coal was America’s dominant energy source, and, by the early 20th century, the US was the world’s top coal producer and consumer.
The widespread use of coal in industry quickly swept aside or transformed not just existing technologies, but political and social relations that had been cemented in place centuries earlier. Surely the most salutary societal consequence of the adoption of coal in industry was the ending of slavery. As Charles Babbage, designer of the first mechanical computer, noted in his 1832 book On the Economy of Machinery and Manufactures, mechanical slaves were already surpassing human slaves in power and speed. For the first time in history, fossil “energy slaves” could supplant the forced labor of millions of human beings. Slavery was a brutal program of enforced social power relations, but its purpose was to organize and efficiently apply energy in order to produce wealth. And the simple fact was that coal-fed machines could produce and apply energy more effectively in a growing number of instances than human or animal muscles could, and thereby yield more wealth. As energy historian Earl Cook wrote in his 1976 book Man, Energy, Society, “The North defeated the South by a campaign of attrition supported by coal mines, steel mills, and railroads.”15
According to historian Lewis Mumford, wage labor first appeared on a mass scale in coal mines.16 Moreover, the first labor unions were created by coal miners, who were also responsible for the earliest and largest industrial strikes, beginning in the early 19th century. Increasingly, energy was emerging from small areas within nations (coal mines) and flowing out through narrow, humanly constructed channels (canals and railways) to operate powerful productive machinery. Unlike the previous agricultural economy, this new coal-powered industrial system employed specialized workers at key nodes along energy’s paths of power, and these workers were frequently abused, underpaid, and subjected to harsh, dangerous, and unhealthy conditions. The coal economy thus proved to be the perfect breeding ground for a new kind of political power. As Timothy Mitchell writes in Carbon Democracy, “The power derived not just from the organisations [that miners, steel workers, and railway workers] formed, the ideas they began to share or the political alliances they built, but from the extraordinary quantities of carbon energy that could be used to assemble political agency, by employing the ability to slow, disrupt, or cut off its supply.”17 The word sabotage was coined to refer to mass work stoppages meant to call attention to intolerable working conditions in coal-mining and related industries. Mitchell goes on:
Modern mass politics was made possible by the development of ways of living that used energy on a new scale. The exploitation of coal provided a thermodynamic force whose supply in the nineteenth century began to increase exponentially. Democracy is sometimes described as a consequence of this change, emerging as the rapid growth of industrial life destroyed older forms of authority and power. The ability to make democratic political claims, however, was not just a by-product of the rise of coal. People forged successful political demands by acquiring a power of action from within the new energy system. They assembled themselves into a new political machine using its processes of operation.18
The period from the 1870s to the First World War has been called both the age of democratization and the age of empire, and both trade union-propelled democracy and steamboat colonialism were shaped by coal. Britain and the European powers had been maintaining overseas colonies since the time of Columbus, but heightened industrial throughput and faster, more reliable seaborne transport opened the opportunity for new ways of exploiting hinterlands. Many raw materials, such as cotton, still depended on land and labor for their production; increasingly, whole nations were repurposed—via conquest, land seizure and privatization, and debt—to supply those materials for the mills of industrial nations.
But if coal triggered a social and economic revolution, it also marked a turning point in another sense: whereas previously the world’s energy regime was renewable though not necessarily sustainable, now it was neither. Every increment of coal extraction was an increment of depletion of a finite store of fuel. Even though coal resources initially appeared vast, dramatically increasing consumption levels would turn what were at first estimated to be thousands of years’ worth of coal into what will probably finally amount to about 300 years of supply, once production levels from the last mines dwindle to insignificance sometime in the next few decades. Britain’s coal industry is already extinct due to the depletion of its resources. The once-abundant coal of America’s northeast is mostly gone. And while China is now by far the world’s top coal producer and consumer, that nation’s extraction of coal is already declining as miners are having to dig deeper and expend more energy to access the fuel. In country after country, coal production is decreasing not just due to government policies to protect the climate, but also because of rising mining costs and declining resource quality.19 There will always be enormous amounts of residual coal left in the ground, but beyond a certain point the amount of energy that will be required to recover the curious black stone will exceed the energy it can provide when burned. With its reliance on coal, humanity began building an ever-expanding castle on an eroding sand bar.
Still, as enormous and unenduring as was the coal-fired societal revolution of the 19th century, an even greater transformation lay in store in the 20th century, due to yet another energy transition.
Oil, Cars, Airplanes, and the New Middle Class
Oil saw limited usage prior to the Industrial Revolution. The Babylonians caulked their ships with bitumen (the thickest of oils), and also used it to waterproof baths and pottery, and as an adhesive to secure weapon handles. Egyptians used it for embalming, and the Bible refers to bitumen being used as a coating for Moses’s basket and Noah’s Ark. The ancient Chinese used bamboo pipes to carry oil and natural gas into homes of the wealthy for heat and light. And ancient Persians and Native Americans used crude oil for purported medicinal benefits.
Petroleum (“rock oil”), bitumen, and natural gas were formed differently from coal during two long, intense periods of global warming, roughly 150 and 90 million years ago. Contrary to the popular notion that oil consists of dinosaur remains, in fact it started mostly as algae, which were buried so quickly as to halt decomposition. Slowly, heat and the pressure of sedimentation turned carbohydrates into hydrocarbons. Whereas coal is mostly carbon, petroleum molecules consist of chains of hydrogen and carbon atoms. These chains are of varying lengths—from methane, the simplest hydrocarbon molecule (which is the main constituent of natural gas), to ethane, propane, butane, pentane, hexane, heptane, octane (the main constituent of gasoline), and even larger molecules. Oil refineries and petrochemical plants use heat to separate these constituents and recombine them to form kerosene, gasoline, diesel fuel, lubricating oils, and the precursors of various plastics, among other chemicals.
The first modern, commercial use of oil was in the form of kerosene burned in lamps for illumination. This was a welcome replacement for whale oil, which was being used for the same purpose, and which was becoming scarce and expensive as a result of the decimation of whale populations worldwide by the whaling industry (as memorialized in Herman Melville’s masterpiece, Moby Dick). The first commercial oil wells were opened in Pennsylvania around 1860; soon, refining and distribution systems also appeared. Customers typically bought small amounts of kerosene and machine lubricating oil in refillable metal cans at the local general store.
As electric lighting became common in cities around the world, the petroleum industry faced a crisis of declining demand. The industry’s salvation came with the invention of the internal combustion engine and the automobile, for which gasoline proved to be an affordable and effective fuel. An explosive byproduct of kerosene refining, gasoline had previously been discarded as waste. But now, as thousands and soon millions of automobiles began to putter through city streets across the world, oil demand soared.
The automobile offered the power of a large team of horses, but without the requirement for land to grow food for them. As cars proliferated, horses disappeared; and as they did, less agricultural output had to be devoted to feeding draft animals, and more land could be used to grow food for people.
The romance and lure of the automobile were undeniable. It offered easy mobility, which effectively meant increasing power over space and time. Though early cars broke down frequently, their reliability gradually improved. Meanwhile, their cost declined significantly after Henry Ford developed the modern assembly line (the 1903 Ford Model A cost $800, but by 1925 a superior 20-horsepower Model T could be had for $260). While initially a toy for the wealthy, the automobile quickly became—in wealthy countries, at least—an affordable and common transportation appliance.
Cities were redesigned around the needs of the automobile. Whereas in the early 20th century most American cities featured streetcar systems and interurban railways, after World War II most of these were dismantled (partly as a result of purchases by shell companies set up by General Motors, Firestone tire company, and Standard Oil of California). While Europe and Japan kept and improved their train systems, the US concentrated on building highways. As new housing and shopping areas were designed for the convenience of drivers, cities became ever less walkable, thereby making car ownership a near-necessity. Even house designs were affected, as increasing amounts of indoor space were devoted to covering and protecting automobiles.
To gain an impression of the enormous social and economic changes wrought by the automobile, watch (on YouTube) the 1906 pre-earthquake movie taken from the front of a San Francisco streetcar as it made its way along Market Street toward the Embarcadero. Modes of transport (walkers, trolleys, automobiles, and horse carts) mix and jostle at a leisurely pace. In contrast, today’s Market Street is managed to maximize order, efficiency, and safety, with pedestrians allowed on the street only for brief intervals signaled by prominent, colored lights. We assume that automobiles have priority, and stay out of their way if we value our lives.
Diesel engines (which, of course, burn diesel fuel, a heavier blend than gasoline because it contains larger molecules) produce more torque and generally offer better fuel economy than gasoline engines, and were adopted for large trucks, tractors, and most locomotives—except where rail systems were electrified. Ships generally burned an even heavier oil, one that is yielded by refineries after lighter hydrocarbons, including gasoline and diesel fuel, have been removed. The use of oil in long-distance shipping led to the movement of ever-increasing quantities of raw materials and finished goods. Rapid freight movement by rail led to the proliferation of catalog shopping in the early 20th century, while the internet, container shipping, air freight, and diesel trucking support online retail shopping today.
Kerosene, once used primarily for lighting, found a new purpose after the Second World War as fuel for jet engines. In 2019, US airlines alone used over 17 billion gallons of jet fuel to carry nearly a billion passengers and 43 billion pounds of air freight.20
The speed, reliability, and affordability of modern transport modes have been chief enablers of economic expansion. But other oil-fueled activities played essential roles, including mechanized resource extraction (via powered shovels, drills, and other mining equipment), powered industrial processes (blast furnaces, cement kilns, foundries, mills, etc.), and powered, and increasingly automated, assembly lines. Altogether, the material throughput of civilization has grown from about seven billion metric tons per year in 1900 to approximately 90 billion metric tons in 2018.21
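Taken at face value, those two endpoints imply a compound growth rate that a few lines of arithmetic can recover:

```python
# Implied growth rate of material throughput, using only the two
# figures given in the text: ~7 Gt/yr in 1900 and ~90 Gt/yr in 2018.

m_1900, m_2018 = 7e9, 90e9     # metric tons per year
years = 2018 - 1900

growth_factor = m_2018 / m_1900
cagr = growth_factor ** (1 / years) - 1

print(f"{growth_factor:.1f}x over {years} years")  # ~12.9x
print(f"compound growth of {cagr:.1%} per year")   # ~2.2%
```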
By about 1930, as oil became the world’s primary energy source, the rapid increase in industrial throughput was leading to a crisis of overproduction: more goods were available than could be absorbed by buyers (the Great Depression was, in part, a result). Leaders of industry hit upon three general solutions: a dramatic expansion of advertising, planned obsolescence (deliberately making products that would quickly wear out or become aesthetically outmoded), and the deployment of consumer credit at an unprecedented scale so that people could consume now and pay later. “The economy” was now, for the first time, regarded as an independent and measurable entity whose maintenance was the business of government and industry. Following World War II, the US federal government, working in tandem with industry, introduced the term “consumer” as an alternative to the more customary terms “citizen” or “person.” Together, manufacturers and government regulators were engineering a new system of societal management, called “consumerism,” whose purpose was to maintain ever-rising levels of commercial activity and employment.
Consumerism was both necessitated and enabled by the growing middle class. The industrialization of agriculture—with tractors, powered mechanical seeders, harvesters, combines, and trucks—meant that fewer people were needed to work at producing food in order to feed the overall populace. As economic opportunity migrated from farms to cities, people followed. The result was a steady, continuing trend toward urbanization. By 2009, for the first time in history, the world’s urban population exceeded its rural numbers.22 Urbanization and mechanization in turn led to the emergence of a bewildering plethora of jobs and occupations, from retail salesperson to office clerk, lawyer, janitor, computer programmer, or registered nurse. Today, the website CareerPlanner.com lists 12,000 different occupations, but does not claim to be exhaustive.23
Whereas agricultural life favored a division of labor between women and men, the overwhelming majority of urban factory and office work could be done equally well by people of any gender. Thus fossil-fueled industrialism also contributed to the liberation of women—though decades of ongoing political activism would be required to gain legal guarantees of equal opportunity and equal pay.
Meanwhile, in the face of rapid continual growth of production and consumption (as well as population), political and commercial leaders came to assume that the growth of the economy (measured in terms of the amount of money changing hands, but actually representing increasing material and energy throughput, as well as population growth) was perfectly normal, indeed something to be maintained by all possible means. By the mid-20th century, government revenues, returns to investors, and stable employment were all understood to depend on economic growth, and all politicians promised more of it.
All of this happened first in the world’s core industrial nations (Western Europe including Britain, the US and Canada, Australia, and Japan; the Soviet bloc industrialized rapidly, but never caught up, pursuing a path separate from the capitalist system and relying to a far greater degree on centralized planning). Beyond these core industrial countries lay nations whose raw materials and cheap labor would be the foundation upon which the global economy would be built, starting in the mid-20th century. The project of managing the aspirations of people in these nations would acquire the name development. It was a project that provided a new rationale for colonialism, based less on the improvement of local economic conditions (its ostensible justification) than on the “development” (i.e., exploitation) of resources, a process managed by bankers eager to saddle impoverished regimes with mountains of debt.24
Also in the mid-20th century, in addition to coal and oil, a third fuel and major driver of industrialization entered the scene. Natural gas—initially a byproduct of oil production, but increasingly a target of specialized drilling and production efforts—had not only replaced town gas, but had become essential for home heating and as a cooking fuel (though some homes continued to heat or cook with coal until the 1960s, or with fuel oil up to the present). It also served as a heat source for industrial processes such as making glass, cement, bricks, and paper. Natural gas would also be used as a basis for the production of an enormous array of chemicals and plastics, and in many nations would largely replace coal for electrical power generation (which we will discuss shortly).
However, perhaps the most crucial use of natural gas, from the perspective of human power over nature, would be for the making of ammonia-based nitrogen fertilizers. Nitrogen had been the limiting factor for agricultural production prior to the advent of fossil fuels; now that limit could be pushed back ever further. As enormous quantities of artificial fertilizer began to be applied, crop yields soared. New hydrocarbon-based pesticides, and the development of high-yield crop varieties, along with nitrogen fertilizers, together comprised what came to be known as the “Green Revolution,” which, starting in the 1950s, more than tripled world agricultural output. As a result, hundreds of millions, if not billions, of human beings were saved from starvation. Together with better sanitation and medicines, the Green Revolution led to the most rapid sustained population growth in history.
The coal age had produced enormous fortunes for owners of mines, steel companies, and railroads, as well as the bankers who funded them. However, the economics of oil flowed differently. Because oil was much easier to transport via pipeline and tanker, petroleum revenue streams were often global in nature. Further, while the United States was the world’s superpower of oil production during the first half of the 20th century, pumping over half the world’s petroleum in most years, even larger amounts of oil and gas happened to be located in poor nations in the Middle East. Thus, the unfolding of the story of oil would hinge on global geopolitics.
Oil was discovered in the Middle East early in the century, but production was delayed, mostly in order to keep global oil prices from collapsing. After World War II, as global petroleum demand soared, the Middle Eastern oilfields quickly came online, but their output continued to be managed by American and British companies and their governments’ foreign policies. Increasingly, while the United States could no longer dominate the world in oil exports, it could still control the global oil market, and hence the global economy. Its primary means of doing so was through its currency, the dollar, which (after 1972) was effectively backed not by gold, but by oil—since nearly all oil trades were carried out in dollars, thus maintaining high demand for the currency and giving a subtle financial advantage to its issuer.
If coal inadvertently stoked democratic aspirations to socio-political power sharing in the late 19th and early 20th centuries, oil did the opposite. Oil production required a much smaller workforce than did coal mining; similarly, moving petroleum via pipelines and tankers required fewer human hands than it took to load and unload coal trains and barges. Further, oil production sites were often far from industrial and population centers. All of these factors made the oil industry less vulnerable to the kinds of industrial labor actions that had wracked the coal industry during the decades leading up to progressive reforms of the early 20th century (the eight-hour workday, the banning of child labor, the right to unionize, etc.). While industrial sabotage was a route to democratization and a principal impediment to elite power within wealthy nations during the coal age, consumerism and economic growth largely kept a lid on radical activism during the oil era. At the same time, oil companies were able to use their immense wealth and their pivotal role in the world’s energy economy to bend governments to their will, funding influential think tanks and industry-friendly candidates to shape policies on agriculture, defense, trade, energy, and the environment.
Instead, it was efforts by Middle Eastern oil-producing nations to take control of their own resources, and the wealth those resources generated, that provided the principal obstacle to elite Anglo-American control of the global economy in the post-WWII era. It would be the tension between the powers of the oil companies and global financial capital on one hand, and the people and their leaders in resource-rich but monetarily poor nations on the other, that would drive conflict and intervention during the oil age.
Meanwhile, the overall contours of the petroleum interval were shaped from the start by depletion. As with coal and other nonrenewable materials, the best-quality oil resources were generally identified, extracted, and burned first. This “low-hanging-fruit” pattern of depletion meant that world oil production would inevitably reach a maximum rate and then decline; indeed, all individual oilfields showed a waxing and waning curve of production over time. But virtually no policy makers planned for a future when the world’s most precious resource—which now supported not only the global economy but the survival of billions of humans through the food and medicines it produced—would inevitably become scarce. The immense power of oil would eventually prove self-limiting, but not before it utterly transformed society and the planet on which we all depend.
Oil-Age Wars and Weapons
In Chapter 3 we saw why warfare and the adoption of new food and energy sources played key roles in cultural evolution during the past 10,000 years. War and energy have continued to shape society right up to the present. During the 20th century, wars were fought with, and over, the world’s newly dominant source of energy, i.e., petroleum; at the same time, the use of oil dramatically increased the lethality of conflict. And wars—especially the two World Wars—shaped technologies and institutions that would in turn determine the contours of social power during longer intervals of peace and economic expansion.
Many historians agree that World War I resulted at least in part from Germany’s determination to expand its colonial holdings to rival those of Britain. Germany had developed science and industry to unparalleled heights, but lacked many resources, most critically petroleum. Britain had created a colonial system based on coal, but by 1910 it was clear that oil would be the energy source of the future. Yet Britain had very little oil, even in its colonial possessions (its largest reserves were in Burma). Germany understood the importance of oil, and was building a Berlin-to-Baghdad railway partly as a means of obtaining Middle Eastern crude.
For the first years of the Great War, both sides were hampered by lack of fuel. Britain converted its fleet of battleships to run on oil, but German submarines were sinking oil tankers from America on their way to England. Meanwhile, after Romania—Germany’s principal petroleum supplier, since the Iraq rail route never came to fruition—sided with the Allies, German industry struggled with shortages of lubricants and gasoline. When the US entered the war on the side of the Allies, its main contribution was not soldiers (welcome though they were), but fuel.
World War I was the first oil-powered war. The tank, essentially an armored and armed gasoline- or diesel-burning tractor, proved itself able to overwhelm infantry, while oil-fueled battleships could out-run and out-maneuver ships running on coal. The Great War saw the first use of airplanes—as well as dirigibles—serving as both weapons and reconnaissance vehicles. Meanwhile the Haber-Bosch chemical process, which would in later decades produce millions of tons of ammonia for nitrogen fertilizer, was initially used to produce ammonia-based explosives.
Each of these new oil- or gas-based technologies would prove pivotal from a strategic standpoint, and deadly from a human perspective. Germany, Britain, and France lost a generation of young men. Altogether, 18 million human beings died in the Great War, more than in any previous conflict.
For all the principal nations involved (Germany, Austria-Hungary, France, Britain, Russia, and to a lesser extent the US) the war required a near-total commitment of resources and personnel. Germany spent 59 percent of its GDP for military purposes, France 54 percent, and Britain 37 percent. Food and fuel were rationed for non-combatants, and all participating nations engaged in widespread pro-war government propaganda and suppression of dissent.25
Michael Mann, in The Sources of Social Power, describes the consequences of the war:
When it was finally over, the three dynastic empires that had started the war were all destroyed, and so was the Ottoman Empire. Nation-states, many of them embodying greater citizen rights, were established almost everywhere around Europe, but the overseas empires remained. British and French power were formally restored, although irreparably damaged, and only Americans and Japanese profited much. The United States had passed from being a major debtor to being the world’s banker, owed massive sums by all the European powers. Japan had acquired German colonies in the Far East, jumping-off posts for later expansion in China and across the Pacific.26
But key issues had not been settled. Germany still saw itself as deserving the status of global power, and still lacked sources of raw materials. And that country was now burdened with paying war reparations that even many observers in victorious nations regarded as ruinously excessive. Especially when seen in light of the scale of bloodshed it entailed, the Great War had been essentially pointless.
The aftermath of World War I saw communist revolution in Russia and the counter-revolutionary emergence of fascism—i.e., ultra-nationalist authoritarianism based in a corporate-state nexus—in Italy (where Mussolini took power in 1922), Germany (1933), and Spain (where a civil war beginning in 1936 ended in Franco's dictatorship). Extreme ideologies, promising a utopian future, had been unleashed by extreme economic inequality in the case of Russia, and by defeat and humiliation in that of Germany.
In some ways World War II was simply a continuation of WWI, following a generational gap. Hitler scapegoated the Jews for Germany's defeat and financial ruin, and called for "Lebensraum": space within which to obtain the resources necessary for further growth and industrialization. During WWII, the Nazis would pursue supplies of oil through control of Romania's oilfields, campaigns in North Africa, and the invasion of the Soviet Union, with its significant petroleum resources in the Caucasus. At the same time, they would seek to overwhelm Britain and France, which had kept Germany from attaining its destiny.
In the Far East, Japan had adopted a quasi-fascist style of government and was likewise pursuing empire and resources—again oil, though also food and cheap labor. The Japanese invaded China in 1937, committing brutal atrocities, with the Soviet Union and the US providing aid to China. The US was Japan's principal source of imported oil, but Franklin Roosevelt, despite his wariness of Japanese expansionism, delayed an embargo, knowing that it would be interpreted as an act of war. When an embargo on oil and gasoline was finally declared, on August 1, 1941, war became inevitable.
Prosecution of the war, again, largely hinged on oil. Germany initially sought swift, decisive victories through surprise motorized attacks ("Blitzkrieg"). Similarly, Japan's attack on Pearl Harbor relied on oil-fueled ships, gasoline-burning airplanes, and the element of surprise. And the outcome of the war was determined largely by access to oil supplies. The US was again in a favored position, with its large domestic reserves, and was able to choke off Japan's access to Royal Dutch Shell's oilfields and refineries in the Dutch East Indies (now Indonesia), largely by sinking the tankers that carried their oil. And, at an enormous cost in lives, the Soviet Union was able to frustrate Germany's drive toward Russian oil. Both Japan and Germany literally ran out of gas, left with insufficient fuel for their tanks, ships, and airplanes.
Altogether, World War II was the deadliest conflict in world history. More than 60 million people died—in combat, in the Holocaust (Germany's industrial-scale effort to exterminate European Jews), and from the famine and disease the war unleashed. While the phrase "total war" was first used in the Great War, its meaning was fully realized in the Second World War, as the indiscriminate bombing of cities (including London, Dresden, Hiroshima, and many others) killed civilians on a massive scale. Virtually all the available resources of the nations involved were directed toward military purposes, propaganda efforts were scientifically designed and coordinated, and the lives of survivors were indelibly stamped with the experiences of wartime—from privation and horror to extreme self-sacrifice and cooperation.
Sidebar 19: Geopolitics: Global Power
During and after both World Wars, mass mobilization and the destruction of capital resulted in a dramatic reduction of economic inequality in all industrial nations. Indeed, according to Walter Scheidel in his book The Great Leveler, the two World Wars did more to further economic equality than all peacetime government policies such as unemployment insurance, minimum wages, or progressive taxation.27 In 1938, Japan's "1%" made off with nearly 20 percent of all income; seven years later, their share had fallen to 6.4 percent. Similar leveling occurred in the victorious nations: students of economic inequality in the United States call the three decades from 1914 to 1945 the "Great Compression," due to the anomalous drop in income share for the highest earners, with the "1%" losing roughly 40 percent of their share of national income. There were many contributing factors, including rationing (more about that in Chapter 6) and the GI Bill in the US. Nations needing to pay for the war instituted high marginal tax rates on income and inherited wealth. The wealthy lost foreign assets, and some industries were nationalized. Altogether, the economic history of the 20th century was reshaped profoundly by these two intense, prolonged eruptions of violence.
New technologies continued to play key roles in the Second World War, which saw much greater and more effective use of bombers and fighter planes, now with longer ranges and far deadlier guns and bombs, and, late in the war, the first jet fighters (developed in Germany). Germany also introduced the V-1 flying bomb and the V-2 rocket, which rained terror on London; the V-2, arriving faster than sound, could not be countered by fighter aircraft at all. Computers were used in war for the first time, for code breaking (by the British) and for designing nuclear weapons—which the US developed and used against civilian populations in two Japanese cities. Each of these technologies would have enormous ramifications in the post-war economy, via civilian jet aviation, the space program (leading to satellites for telecommunications and global positioning systems, or GPS), digital computing, and nuclear electrical power generation.
The end of World War II saw many countries—Britain, Germany, France, Italy, most of the rest of Europe, Japan, China, and the Soviet Union—in ruins. The colonial systems of Britain and France were essentially finished. One nation, the United States, emerged stronger than ever, now in position to design and lead new global governance institutions, most notably the United Nations. The US was also in position to dominate the world’s new financial system. Henceforth the world’s currency of account would be the dollar, and new US-led institutions, principally the World Bank and International Monetary Fund, would maintain financial order among and between the “developed” (i.e., industrialized) nations and the “developing” nations tasked with supplying resources and cheap labor.
Immediately after the war, the US and USSR, allies during the conflict, divided Europe into spheres of influence. There followed a Cold War lasting roughly four decades, until the collapse of the Soviet Union in 1991. Though it entailed remarkably few direct fatalities (that is, if proxy conflicts like the Korean War are not counted), the Cold War spurred dramatic investments in new military technologies, including nuclear power and new generations of nuclear weapons, rapidly improving computers, and the internet. The microchip found its first major customers in guidance systems for intercontinental ballistic missiles (ICBMs) and in military aircraft, while GPS was initially developed for military navigation and positioning.
Nuclear weapons raised the stakes of conflict to the extinction level, and the continued existence of such weapons makes it likely that at some point they will again be used. During the Cold War, the threat of nuclear annihilation prevented direct confrontation between the superpowers. Instead, most conflicts took the form of guerrilla warfare and terrorism, along with efforts by powerful nations to suppress these asymmetrical uses of force. A principal example was the Vietnam War, which the US entered unwisely and lost bitterly to a much smaller nation using guerrilla tactics.
During the fossil-fuel age, the lethality of conflict increased steeply, even as mortality from violence during peacetime declined in most nations (as Steven Pinker documents in his book The Better Angels of Our Nature). Access to more energy created the opportunity for shared affluence (even if economic inequality persisted and sometimes worsened), and thus for greater social order. We are increasingly knit together by trade, transport, and communications; but when we fall into irreconcilable disagreement, our new powers enable us to lay waste to entire cities, even continents, with guns, bombs, and poisons.
With the fall of the Soviet bloc, the United States emerged as the world's sole superpower. However, America proceeded quickly to undermine that status with pointless and costly invasions of Afghanistan and Iraq. Meanwhile, China's growing economic might led it to form ad hoc alliances (with Iran and Russia, for example) aimed at undermining American hegemony. Corruption and authoritarianism in the Middle East sparked a series of revolutions (of which the most persistent was the Syrian uprising, which drew in Turkey, the US, and Russia), but these were eventually put down. In the US, weariness of foreign interventions, along with domestic political polarization fed by a new form of soft warfare based on social media technologies (pioneered by Russia), led to the surprising ascendancy of a right-wing populist federal administration of extraordinary incompetence and corruption.
Meanwhile, since 2005, world extraction of conventional petroleum has flatlined. Virtually all new production has come from unconventional sources—such as Canadian oil sands and US “tight oil” produced by hydrofracturing (“fracking”) and horizontal drilling. During the decade starting in 2010, the latter boosted American petroleum production by 6.5 million barrels per day, enough to forestall global shortages and to enable the US to approach energy self-sufficiency. However, tight oil wells deplete especially quickly, raising the prospect that the fracking boom will be not only unprofitable for oil companies and investors (as it has largely been so far), but also extremely short-lived. The all-time peak of the rate of world oil production appears to have been reached in late 2018, and is unlikely ever to be exceeded.
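How quickly is "especially quickly"? A standard way to describe a well's falling output is the Arps hyperbolic decline formula. The sketch below is purely illustrative: the initial rate, decline constant, and b exponent are assumed round numbers of the sort often reported for tight oil, not data from any actual well.

```python
# Arps hyperbolic decline: q(t) = q_i / (1 + b * D_i * t)^(1/b).
# All parameters below are assumed round numbers, not data from a real well.

def arps_rate(q_i, d_i, b, t_years):
    """Production rate t_years after first flow, in barrels per day."""
    return q_i / (1.0 + b * d_i * t_years) ** (1.0 / b)

q_i = 1000.0  # assumed initial rate, barrels per day
d_i = 2.0     # assumed initial decline constant per year (steep, as tight oil tends to be)
b = 1.0       # assumed hyperbolic exponent

for year in range(4):
    q = arps_rate(q_i, d_i, b, year)
    print(f"Year {year}: {q:6.0f} bbl/day ({100 * (1 - q / q_i):3.0f}% below initial)")
```

Under these assumptions, a well loses roughly two-thirds of its output in the first year alone, which is why tight-oil production can be sustained only by relentless drilling of new wells.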
And here we find ourselves. The era of US global dominance, and indeed the oil age itself, is coming to an end, heightening the risks of future economic turmoil, widespread anger provoked by increasing inequality, domestic political violence, and even war.
But we have not finished our exploration of the ways new energy sources have shaped physical and social power in recent history.
Electrifying!
Electricity is not a source of energy; it is a carrier of energy, a means of making it available for use in homes, offices, and factories. But as such, it has revolutionized society just as thoroughly as have coal and oil. Today, we are all plugged in on a near-constant basis, and an interruption in the supply of electrical power can bring society to a standstill.
After the first electricity generators were invented in the early 1830s, decades passed before electricity began to find widespread commercial applications. That's because, to be useful, electricity requires a system of machines—including, at the very minimum, a generator, transformer, and motor—and all of these had to be invented and perfected before electricity could be put to practical use. Starting in the 1870s, Thomas Edison, a former railroad telegrapher, created what amounted to an invention factory in order to develop new technologies or improve existing ones. It quickly became clear that electricity would ultimately power most of these technologies (including light bulbs, sound recording, and motion pictures), so Edison and his team of engineers also worked to perfect electrical generators, transformers, motors, and distribution systems. While his rival, the Serbian-American inventor Nikola Tesla (creator of the polyphase alternating-current motor and generator), ultimately produced more successful designs, in the popular mind the home-grown Edison was the technological hero, literally bringing light to the world (a pre-echo of the worshipful attention Bill Gates, Steve Jobs, and Silicon Valley would garner in the late 20th century).
Cities were electrified first, as the close proximity of structures made wiring connections relatively cheap. But gradually, over the course of the 20th century, all of rural and urban America (along with dozens of other countries) was connected to grids, which distributed electricity from generating plants to end users via transmission lines and transformers. The end result was a giant network of wires and machines, constantly coordinated and adjusted to make power uniformly available at any moment, day or night, in billions of locations simultaneously.
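Behind that constant coordination lies a simple imperative: generation must track demand from moment to moment, or the grid's voltage and frequency drift out of bounds. The toy sketch below (all numbers invented for illustration) shows the basic feedback loop; real grid control, such as automatic generation control, is vastly more elaborate.

```python
import random

# Toy sketch of grid balancing: each time step, demand wanders and a
# controller nudges dispatchable generation toward it. Illustrative only.

load = 1000.0        # megawatts of demand
generation = 1000.0  # megawatts being generated
GAIN = 0.5           # fraction of the mismatch corrected per step (assumed)

for step in range(10):
    load += random.uniform(-20.0, 20.0)   # demand fluctuates as loads switch on and off
    mismatch = load - generation          # positive: shortfall (frequency would sag)
    generation += GAIN * mismatch         # ramp generation up or down
    print(f"step {step}: load {load:7.1f} MW, "
          f"generation {generation:7.1f} MW, gap {load - generation:5.1f} MW")
```

Scale this loop up to thousands of plants and billions of loads, adjusting continuously, and you have the grid.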
Marshall McLuhan called electricity an extension of the nervous system. It has revolutionized communications and commerce, speeding up nearly every human activity, both knitting us together with mass communications and ultimately polarizing us through social media. One telling example of electricity’s impact concerns music. Prior to electrification, all music was acoustic, and the guitar was a quiet instrument suitable for the parlor. Today, a plugged-in guitar can shatter eardrums, guitar playing techniques have evolved to take advantage of this greater sonic power, and whole genres of music depend on it. Similar examples could be cited in fields ranging from medicine to gambling. As we saw in Chapter 3, most of the key developments in communication technology (the telegraph, telephone, radio, television, the internet-connected computer, and the smartphone) have occurred just in the last century and a half as a result of electrification.
Electricity must be generated using a primary energy source—whether coal, flowing water, oil, natural gas, nuclear fission, biomass (wood), geothermal heat, wind, or sunlight. When generation relies on heat, whether from burning coal, natural gas, oil, or biomass, or from the fission of uranium atoms, the energy conversion process tends to be highly inefficient: in a coal power plant, only about 40 percent of the heat energy in the coal is converted to electrical power. In addition, the transmission of electricity entails energy losses of up to ten percent. We accept such inefficiencies because the product, electricity, delivers energy in a highly convenient and versatile form. Generators that don't rely on heat engines, such as wind turbines and solar panels, don't entail the same conversion losses. Moreover, electric motors are highly efficient at converting energy to motive power, compared with fuel-burning engines. That's why electric cars tend to be less polluting, when all factors are considered, than equivalent-sized gasoline-burning cars, even if the electricity with which the e-car was charged was generated using coal; they are, of course, far less polluting if charged with power generated from sunlight, wind, or falling water.
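The back-of-envelope arithmetic behind that claim can be made explicit. In the sketch below, the coal-plant and transmission figures are the ones cited above; the charging-and-motor figure and the gasoline-drivetrain figure are rounded assumptions for illustration.

```python
# Rounded, illustrative well-to-wheel efficiencies. The first two figures
# are the ones cited in the text; the rest are assumptions for this sketch.

coal_plant       = 0.40  # heat in coal -> electricity
transmission     = 0.90  # ~10 percent line losses
charge_and_motor = 0.80  # assumed: battery charging plus electric drivetrain

ev_fuel_to_wheels = coal_plant * transmission * charge_and_motor
print(f"Coal-charged EV, fuel to wheels: {ev_fuel_to_wheels:.0%}")  # about 29%

gasoline_drivetrain = 0.25  # assumed: typical tank-to-wheels efficiency
print(f"Gasoline car, tank to wheels:    {gasoline_drivetrain:.0%}")
```

Even charged from coal, the electric car delivers a comparable or better share of the original fuel energy to its wheels, and the gasoline figure doesn't yet count the energy spent refining crude into gasoline.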
Nuclear electricity generation began in the 1950s as a result of the Atoms for Peace program (which was, in part, an effort by the nuclear weapons industry to create a more favorable public image for itself). Initial promises that nuclear electricity would be "too cheap to meter" proved unfounded. Nuclear power plants have been expensive to build, requiring government assistance in financing and insurance. The power generation industry typically measures costs as averaged over the full life cycle of a power plant; in these terms, the "levelized" cost of nuclear power is higher than that of coal, natural gas, or hydro. Most nuclear power plants are, in effect, high-tech steam kettles: heat from nuclear reactions creates steam, which turns turbines to generate power. The nuclear reactions do not create greenhouse gases; unfortunately, however, they do produce nuclear waste, which remains dangerously radioactive for millennia, creating storage problems that have yet to be solved. Thus, nuclear power has always been controversial, especially following widely publicized accidents such as those at Three Mile Island (Pennsylvania, 1979), Chernobyl (Ukraine, 1986), and Fukushima (Japan, 2011). The "World Nuclear Industry Status Report" of 2019 concluded: "Stabilizing the climate is urgent, nuclear power is slow. It meets no technical or operational need that these low-carbon competitors [i.e., solar and wind power] cannot meet better, cheaper, and faster."28
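"Levelized" cost can be made concrete with a little arithmetic: it is a plant's lifetime costs divided by its lifetime electricity output, with both streams discounted to present value. The sketch below uses round, assumed numbers purely for illustration, not actual plant data.

```python
# Minimal sketch of a levelized-cost-of-electricity (LCOE) calculation:
# discounted lifetime costs divided by discounted lifetime output.
# All inputs are round illustrative assumptions, not real plant figures.

def lcoe(capital, annual_om, annual_mwh, years, rate):
    """Cost per MWh, averaged over the plant's life with discounting."""
    costs = capital + sum(annual_om / (1 + rate) ** t for t in range(1, years + 1))
    output = sum(annual_mwh / (1 + rate) ** t for t in range(1, years + 1))
    return costs / output

# A nuclear-like plant: huge up-front cost, long life (assumed numbers).
print(f"${lcoe(capital=8e9, annual_om=1.5e8, annual_mwh=8e6, years=40, rate=0.07):.0f}/MWh")
# A gas-like plant: cheap to build, dearer to run (fuel folded into O&M here).
print(f"${lcoe(capital=1e9, annual_om=3e8, annual_mwh=8e6, years=30, rate=0.07):.0f}/MWh")
```

The point of the exercise: a plant that is very expensive to build must spread that capital over decades of output, and discounting makes distant output count for less, which is one reason nuclear's levelized cost comes out high.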
Hydro power generation costs tend to be extremely low—much lower than nuclear, natural gas, or coal; however, large hydro dams entail construction projects that require massive initial investment and often degrade or even destroy natural river ecosystems. And there are only so many rivers that can be dammed. A global survey suggests the available resources could provide for a doubling of world hydro power, but only by ignoring concerns about sensitive environments that would likely be impacted.
As a result of all these constraints and trade-offs, many energy analysts consider solar and wind power to be our best bets as future energy sources. But currently these sources supply only a few percent of world energy. (We’ll explore the prospects for renewable energy transition in more depth in the next chapter.)
Maintaining our reliance on fossil fuels over the long run is simply not an option, given the twin problems of climate change and resource depletion. Maintaining our current patterns of producing, distributing, and using electricity is also problematic. The grid has been called history’s biggest machine. As such, it is highly complex and vulnerable to an array of threats, several of which are foreseeable. The following are scenarios in which a general, persistent loss of electrical power could occur; some are near-certainties over the long run:
- Acts of war—including not just bombs and bullets, but electronic hacking of the software that keeps grids functioning.
- A Carrington event (i.e., a solar storm capable of knocking out electronic devices and transmission systems globally); the last one happened in 1859 and another could occur just about any time.
- Simple depletion of resources that keep grids up and running now: coal, natural gas, and uranium are all finite substances; supply problems are not anticipated immediately, but could occur within the next few decades. (Over the longer term, the depletion of copper, high-grade silicon, lithium, and other mineral resources will limit the build-out and repair of grids and the electronic technologies that depend on them.)
- Neglect of infrastructure and a deteriorating transmission grid (grid operators have been sounding the alarm on this score for the last couple of decades).
- Impacts of climate change, including but not limited to: disruption of seasonal rainfall that enables hydro power plants to operate; high summer heat that could warm lakes and rivers to the point where nuclear power plants couldn’t be cooled properly; and the need to preemptively shut down grids to avoid starting seasonal wildfires in fire-prone regions (as I write, nearby regions of my home state of California are experiencing blackouts planned for this reason).
Most readers have probably had the experience of living through a power blackout lasting for hours; some may recall one lasting for days. Everyone who has had such an experience understands that, while electrical power may have started out as a convenience or a luxury, it has become essential to modern life. Without electrical power, the gasoline pumps at service stations do not operate. Credit card readers do not work. If backup generators run out of fuel, municipal water plants cease to function. Yet we have created an electrical power system that is not just vulnerable, but unsustainable in its current form.
The Human Superorganism
As noted in Chapter 1, humans are ultrasocial, similar in this respect to ants or bees. In chapters 2 and 3, we saw how human beings have developed ever more ways of cooperating. We explored the ways in which language and complex social organization (driven by warfare and new energy sources) have enabled more coordination among individuals. As cultures evolved, writing, money, and Big God religions knitted us even closer together. With fossil fuels and electrification, and thus greater mobility and instant communications, we have indeed become a human “hive.”
Any one of us can walk into a store, a bank, or a hotel, and, assuming we have the appropriate symbolic tokens (cash or a credit card), we can immediately do business with complete strangers. We have faith that strangers will protect us in the case of physical threat, treat our injuries, and make sure we have water and electricity. When we enter a restaurant and sit at a table, we and the server don’t have to spend many tense minutes determining whether one of us has hostile intentions, or whether we are of the same clan. The general shape of the interaction is already agreed upon, as the result of centuries of server-patron encounters at millions of restaurants. We look at the menu, order food, and within minutes we’re eating. A tip is expected, and we leave one. That’s cooperation. And it makes us, as a species, collectively powerful.
Acting together, we have the ability to commandeer and direct flows of energy originating in millions of years of stored ancient sunlight. We have the power to extract thousands of different resources scattered across the planet, transport them, transform them with thermal and chemical processes, and assemble them into a numbing array of consumer products. Because we act together in so many ways and contexts, we as individuals can travel around the globe in hours—even minutes in the case of astronauts. We can see events happening far away as they occur, communicate instantly with co-workers, or press a key and have a product delivered to our doorstep in hours. And we can devastate entire ecosystems without even realizing what we are doing.
We take these levels and kinds of cooperative power (whether creative or destructive) for granted. Indeed, we often think that our biggest problem, as humans, is that we aren’t cooperative enough. After all, we squabble endlessly over politics—so much so that it’s often hard for democracies to get anything done. We bicker and fight over inequality and access to resources. We’re so fiercely competitive that we even create objectively meaningless opportunities for competition via sports teams and their ritualized clashes.
But even when we compete, squabble, bicker, and clash, it is within the context of extraordinary degrees of cooperation. Within our teams, we work in synchrony, and cheer on our teammates. Indeed, war—the most lethally competitive activity humans engage in—makes us even more cooperative than we are normally, willing to sacrifice even our lives for the sake of the “team.”
We evolved from primates that lived in small groups and that cooperated little more than hyenas, wolves, crows, or dozens of other social animals do. But gradually, bit by bit, over millions of years, we developed teams that were bigger and more coordinated, until they comprised hundreds, then thousands, then millions, and now billions of increasingly specialized humans. The end result is a global team that acts, at least to some degree, as a collective unity, together making up the human Superorganism—which wields vastly more power than any other single organism in Earth history.29
The idea that humanity currently comprises a Superorganism is propounded by sociobiologists like Edward O. Wilson and cultural evolution theorists such as Peter Turchin. Kevin Kelly, founder of Wired magazine, popularized the notion among techno-geeks with his 1994 book Out of Control.30 While there are skeptics, particularly among some traditional evolutionary theorists, the idea that ultrasociality can lead to the emergence of superorganisms explains how and why groups of individual organisms can exhibit coordinated collective action, collective homeostasis (in which the group as a whole maintains self-regulation of various parameters, such as defense of group boundaries), and emergent behaviors (that is, behaviors of the group that emerge from the relationships of individuals to one another, and that cannot be predicted on the basis of a thorough knowledge of the individuals themselves). The superorganism is more than the sum of individual decisions. Indeed, once a superorganism emerges, individual choices tend to be constrained by the demands of the collective entity.
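Emergence of this kind is easy to demonstrate in miniature. The toy model below is my own illustration (not drawn from Wilson, Turchin, or Kelly): agents on a ring repeatedly adopt the local majority opinion, and large-scale order appears that no individual rule mentions.

```python
import random

# Toy illustration of emergence: each agent repeatedly adopts the majority
# view of itself and its two neighbors on a ring. No agent knows the global
# pattern, yet coordinated blocs emerge at the group level.

N = 40
agents = [random.choice([0, 1]) for _ in range(N)]

def step(state):
    new = []
    for i in range(N):
        trio = state[i - 1] + state[i] + state[(i + 1) % N]
        new.append(1 if trio >= 2 else 0)  # local majority rule
    return new

for generation in range(10):
    print("".join("#" if a else "." for a in agents))
    agents = step(agents)
```

Run it, and blocs of aligned agents form and stabilize; the blocs are a property of the group, visible nowhere in any single agent's rule.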
The global human Superorganism (with a capital “S,” signifying the entity that now encompasses our entire species) is still quite young. We became ultrasocial during the last 10,000 years, especially as we began living among strangers in cities. For the next few millennia, nations acted as superorganisms, cooperating internally to accomplish their overarching goals. But the birth of the universal human Superorganism occurred much more recently, in the late 20th century, with the advent of the global economy and instant global communications. That certainly wasn’t long ago in terms of biological evolution, or even cultural evolution.
The sheer power of the Superorganism is staggering. Each year, it cuts and uses up to seven billion trees; it excavates 24 million metric tons of copper and nine billion short tons of coal; and it produces (and brews and drinks) roughly ten million tons of coffee. In order to mine resources and construct buildings and highways, it moves up to 80 billion tons of soil and rock—over twice the amount moved by all natural forces such as rivers, glaciers, oceans, and wind.31 The Superorganism is taking control of Earth, and the Anthropocene is its era of dominance.
Some still argue that the power of the Superorganism is just the sum of the powers of eight billion humans. But if not for the coordination and cooperation that enable the existence of the Superorganism, there wouldn't be eight billion humans, and each of the non-networked humans who did exist would be wielding far less power per capita than is the case today. On one level, the Superorganism may be merely a concept or metaphor, but it is a useful one, because it points to a reality that we must understand and grapple with if our species is to survive the 21st century. We'll return to the Superorganism in chapters 6 and 7.
* * *
In sum: coal, oil, and gas have increased our individual and collective powers enormously. They have given us previously unimaginable wealth and comfort, enabled us to explore other planets, and much more. Through the development and exercise of these greatly expanded powers, we have changed not only the nature of human existence, but also the way our planet functions. And we have inadvertently created a collective global entity, the most powerful in Earth history, which we are only beginning to understand. One thing is clear: our world-shaping power is bringing us to a crossroads.
Chapter commentary
Everything until now in the book is really just leading up to humanity’s unleashing of fossil fuels—when all hell breaks loose. Chapter 4 is about understanding how and why fossil fuels are at the heart of humanity’s survival dilemma.