Prof. Andrew Watson, University of Saskatchewan.
This is the third post in a collaborative series titled “Environmental Historians Debate: Can Nuclear Power Solve Climate Change?” hosted by the Network in Canadian History & Environment, the Climate History Network, and ActiveHistory.ca.
There is no longer any debate. Humanity sits at the precipice of catastrophic climate change caused by anthropogenic greenhouse gas (GHG) emissions. Recent reports from the Intergovernmental Panel on Climate Change (IPCC) and the U.S. Global Change Research Program (USGCRP) provide clear assessments: to limit global warming to 1.5ºC above pre-industrial levels, thereby avoiding the most harmful consequences, governments, communities, and individuals around the world must take immediate steps to decarbonize their societies and economies.
Change is coming regardless of how we proceed. Doing nothing guarantees large-scale resource conflicts, climate refugee migrations from the global south to the global north, and mass starvation. Dealing with the problem in the future will be far more difficult, not to mention more expensive, than making important changes immediately. The only question is what changes are necessary to address the scale of the problem facing humanity. Do we pursue strategies that allow us to maintain our current standard of living, consuming comparable amounts of (zero-carbon) energy? Or do we accept fundamental changes to humanity’s relationship to energy?
In his new book, The Wizard and the Prophet: Two Remarkable Scientists and Their Conflicting Visions of the Future of Our Planet, Charles C. Mann uses the life, work, and ideologies of Norman Borlaug (the Wizard) and William Vogt (the Prophet) to offer two typologies of twentieth century environmental science and thought. Borlaug represents the school of thought that believed technology could solve all of humanity’s environmental problems, which Mann refers to as “techno-optimism.” Vogt, by contrast, represents a fundamentally different attitude that saw only a drastic reduction in consumption as the key to solving environmental problems, which Mann (borrowing from demographer Betsy Hartmann) refers to as “apocalyptic environmentalism.”
In the industrialized countries of the world, the techno-optimist approach enjoys the greatest support. Amongst those who think “technology will save us,” decarbonizing the economy means replacing fossil fuel energy with “clean” energy (i.e., energy that does not emit GHGs). Hydropower has nearly reached its global potential and simply cannot replace fossil fuel energy. Solar, wind, and to some extent geothermal, are rapidly growing technological options for replacing fossil fuel energy. And as this series reveals, some debate exists over whether nuclear can ever play a meaningful role in a twenty-first century energy transition.
The quest for new clean energy pathways aims to rid the developed world of the blame for causing climate change without the need to fundamentally change the way of life responsible for climate change. In short, those advocating for clean energy hope to cleanse their moral culpability as much as the planet’s atmosphere. This is the crux of the climate change crisis and the challenge of how to respond to it. It is not a technical problem. It is a moral and ethical problem – the biggest the world has ever faced.
The USGCRP’s Fourth National Climate Assessment warns that the risks from climate change “are often highest for those that are already vulnerable, including low-income communities, some communities of color, children, and the elderly.” Similarly, the IPCC’s Global Warming of 1.5ºC report insists that “the worst impacts tend to fall on those least responsible for the problem, within states, between states, and between generations.” Furthermore, the USGCRP points out, “Marginalized populations may also be affected disproportionately by actions to address the underlying causes and impacts of climate change, if they are not implemented under policies that consider existing inequalities.” Indeed, the IPCC reports, “the worst-affected states, groups and individuals are not always well-represented” in the process of developing climate change strategies. The climate crisis has always been about the vulnerabilities created by energy inequalities. Decarbonizing the industrialized and industrializing parts of the world has the potential to avoid making things any worse for the most marginalized segments of the global population, but it wouldn’t necessarily make anything better for them either. At the same time, decarbonization strategies imagine an energy future in which people, communities, and countries with a high standard of living are under no obligation to make any significant sacrifices to their large energy footprints.
Over the last thirty years, industrialized countries, such as Germany, the United States, and Canada, have consistently consumed considerably more energy per capita than non-industrialized or industrializing countries (Figure 1). In 2016, industrialized countries in North America and Western Europe consumed three to four times as much energy per capita as the global average, while non-industrialized countries consumed considerably less than the average.
Most of the research that has modelled 1.5ºC-consistent energy pathways for the twenty-first century assumes that decarbonisation means continuing to use the same amount of, or only slightly less, energy (Figure 2). Most of these models project that solar and wind energy will comprise a major share of the energy budget by 2050 (nuclear, it should be noted, will not). Curiously, the models also project a major role for biofuels. Most alarmingly, however, most models assume major use of carbon capture and storage technology, both to divert emissions from biofuels and to actively pull carbon out of the atmosphere (known as carbon dioxide removal, or negative emissions). The important point here, however, is not the technological composition of these energy pathways, but the continuity of energy consumption over the course of the twenty-first century.
In case it is not already clear, I do not think technology will save us. Solar and wind energy technology has the potential to provide an abundance of energy, but it won’t be enough to replace the amount of fossil fuel energy we currently consume, and it certainly won’t happen quickly enough to avoid warming greater than 1.5ºC. Biofuels entail a land cost that in many cases involves competition with agriculture and places potentially unbearable pressure on fresh water resources. Carbon capture and storage assumes that pumping enormous amounts of carbon underground won’t have unintended and unacceptable consequences. Nuclear energy might provide a share of the global energy budget, but according to many models, it will always be a relatively small share. Techno-optimism is a desperate hope that the problem can be solved without fundamental changes to high-energy standards of living.
The current 1.5ºC-consistent energy pathways include no meaningful changes in the amount of overall energy consumed in industrialized and industrializing countries. The studies that do incorporate “lifestyle changes” into their models feature efficiencies, such as taking shorter showers, adjusting indoor air temperature, or reducing use of luxury appliances (e.g., clothes dryers). None of these presents a fundamental challenge to a western standard of living. Decarbonization models that replace fossil fuel energy with clean energy reflect a desire to avoid addressing the role of energy inequities in the climate change crisis.
Climate change is a problem of global inequality, not just carbon emissions. Those of us living in the developed and developing countries of the world would like to pretend that the problem can be solved with technology, and that we would not then need to change our lives all that much. In a decarbonized society, the wizards tell us, our economy could continue to operate with clean energy. But it can’t. Any ideas to the contrary are simply excuses for perpetuating a world of incredible energy inequality. We need to heed the prophets and use dramatically less energy. We need to accept extreme changes to our economy, our standard of living, and our culture.
Andrew Watson is an assistant professor of environmental history at the University of Saskatchewan.
 IPCC, 2018: Global warming of 1.5°C. An IPCC Special Report on the impacts of global warming of 1.5°C above pre-industrial levels and related global greenhouse gas emission pathways, in the context of strengthening the global response to the threat of climate change, sustainable development, and efforts to eradicate poverty [V. Masson-Delmotte, P. Zhai, H. O. Pörtner, D. Roberts, J. Skea, P.R. Shukla, A. Pirani, W. Moufouma-Okia, C. Péan, R. Pidcock, S. Connors, J. B. R. Matthews, Y. Chen, X. Zhou, M. I. Gomis, E. Lonnoy, T. Maycock, M. Tignor, T. Waterfield (eds.)]. In Press.
USGCRP, 2018: Impacts, Risks, and Adaptation in the United States: Fourth National Climate Assessment, Volume II [Reidmiller, D.R., C.W. Avery, D.R. Easterling, K.E. Kunkel, K.L.M. Lewis, T.K. Maycock, and B.C. Stewart (eds.)]. U.S. Global Change Research Program, Washington, DC, USA. doi: 10.7930/NCA4.2018.
 Charles C. Mann, The Wizard and the Prophet: Two Remarkable Scientists and Their Conflicting Visions of the Future of Our Planet (Picador, 2018), 5-6.
 USGCRP, Fourth National Climate Assessment, Volume II, Chapter 1: Overview.
 IPCC, Global warming of 1.5°C, Chapter 1.
IPCC, Global warming of 1.5°C; Detlef P. van Vuuren, et al., “Alternative pathways to the 1.5°C target reduce the need for negative emission technologies,” Nature Climate Change, Vol.8 (May 2018): 391-397; Joeri Rogelj, et al., “Scenarios towards limiting global mean temperature increase below 1.5°C,” Nature Climate Change, Vol.8 (April 2018): 325-332.
 Mariësse A.E. van Sluisveld, et al., “Exploring the implications of lifestyle change in 2°C mitigation scenarios using the IMAGE integrated assessment model,” Technological Forecasting and Social Change, Vol.102 (2016): 309-319.
Prof. Dagomar Degroot, Georgetown University.
Roughly 11,000 years ago, rising sea levels submerged Beringia, the vast land bridge that once connected the Old and New Worlds. Vikings and perhaps Polynesians briefly established a foothold in the Americas, but it was the voyage of Columbus in 1492 that firmly restored the ancient link between the world’s hemispheres. Plants, animals, and pathogens – the microscopic agents of disease – never before seen in the Americas now arrived in the very heart of the western hemisphere. It is commonly said that few organisms spread more quickly, or with more horrific consequences, than the microbes responsible for measles and smallpox. Since the original inhabitants of the Americas had never encountered them before, millions died.
The great environmental historian Alfred Crosby first popularized these ideas in 1972. It took over thirty years before a climatologist, William Ruddiman, added a disturbing new wrinkle. What if so many people died so quickly across the Americas that it changed Earth’s climate? Abandoned fields and woodlands, once carefully cultivated, must have been overrun by wild plants that would have drawn huge amounts of carbon dioxide out of the atmosphere. Perhaps that was the cause of a sixteenth-century drop in atmospheric carbon dioxide, which scientists had earlier uncovered by sampling ancient bubbles in polar ice sheets. By weakening the greenhouse effect, the drop might have exacerbated cooling already underway during the “Grindelwald Fluctuation”: an especially frigid stretch of a much older cold period called the “Little Ice Age.”
Last month, an extraordinary article by a team of scholars from University College London captured international headlines by uncovering new evidence for these apparent relationships. The authors calculate that nearly 56 million hectares previously used for food production must have been abandoned in just the century after 1492, when they estimate that epidemics killed 90% of the roughly 60 million people indigenous to the Americas. They conclude that roughly half of the simultaneous dip in atmospheric carbon dioxide cannot be accounted for unless wild plants grew rapidly across these vast territories.
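The article’s headline numbers hang together arithmetically. A rough back-of-envelope sketch of how the figures relate (the per-capita land figure of roughly one hectare of cultivated land per person is an assumption for illustration, not a number quoted in this post):

```python
# Back-of-envelope check relating the depopulation estimate to the
# land-abandonment estimate. All values are rough; the per-capita
# land figure (~1.04 ha per person) is an assumption for illustration.

population_1492 = 60e6       # estimated indigenous population of the Americas
mortality = 0.90             # estimated share killed in the century after 1492
land_per_person_ha = 1.04    # assumed cultivated land per person, in hectares

deaths = population_1492 * mortality
abandoned_ha = deaths * land_per_person_ha

print(f"Estimated deaths: {deaths / 1e6:.0f} million")
print(f"Abandoned land:   {abandoned_ha / 1e6:.1f} million hectares")
```

With these round numbers, the abandoned area works out to roughly 56 million hectares, consistent with the figure the authors report.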
On social media, the article went viral at a time when the Trump Administration’s wanton disregard for the lives of Latin American refugees seems matched only by its contempt for climate science. For many, the links between colonial violence and climate change never appeared clearer – or more firmly rooted in the history of white supremacy. Some may wonder whether it is wise to quibble with science that offers urgently-needed perspectives on very real, and very alarming, relationships in our present.
Yet bold claims naturally invite questions and criticism, and so it is with this new article. Historians – who were not among the co-authors – point out that the article relies on dated scholarship to calculate the size of pre-contact populations in the Americas, and the causes for their decline. Newer work has in fact found little evidence for pan-American pandemics before the seventeenth century.
More importantly, the article’s headline-grabbing conclusions depend on a chain of speculative relationships, each with enough uncertainties to call the entire chain into question. For example, some cores exhumed from Antarctic ice sheets appear to reveal a gradual decline in atmospheric carbon dioxide during the sixteenth century, while others apparently show an abrupt fall around 1590. Part of the reason may have to do with local atmospheric variations. Yet the difference cannot be dismissed, since it is hard to imagine how gradual depopulation could have led to an abrupt fall in 1590.
To take another example, the article leans on computer models and datasets that estimate the historical expansion of cropland and pasture. Models cited in the article actually suggest that the area under human cultivation steadily increased from 1500 until 1700: precisely the period when its decline supposedly cooled the Earth. Yet the increase makes sense, for the world’s human population likely increased by as many as 100 million people over the course of the sixteenth century. Meanwhile, merchants and governments across Eurasia depleted woods to power new industries and arm growing militaries.
Changes in the extent and distribution of historical cropland, 3000 BCE to the present, according to the HYDE 3.1 database of human-induced global land use change.
In any case, models and datasets may generate tidy numbers and figures, but they are by nature inexact tools for an era when few kept careful or reliable track of cultivated land. Models may differ enormously in their simulations of human land use; one, for example, shows 140 million more hectares of cropland than another for the year 1700. Bear in mind that, according to the new article, the abandonment of just 56 million hectares in the Americas supposedly cooled the planet just a century earlier!
While we can make educated guesses about land use changes across Asia or Europe, we know next to nothing about what might have happened in sixteenth-century Africa. Demographic changes in that vast and diverse continent may well have either amplified or diminished the climatic impact of depopulation in the Americas. And even in the Americas, we cannot easily model the relationship between human populations and land use. Surging populations of animals imported by Europeans, for example, may have chewed through enough plants to hold off advancing forests. The early death toll in the Americas was often also especially high in communities at high elevations, where the tropical trees that absorb the most carbon could not go.
In short, we cannot firmly establish that depopulation in the Americas cooled the Earth. For that reason, it is missing the point to think of the new article as either “wrong” or “right”; rather, we should view it as a particularly interesting contribution to an ongoing academic conversation. Journalists in particular should also avoid exaggerating the article’s conclusions. The co-authors never claim, for example, that depopulation “caused” the Little Ice Age, as some headlines announced, nor even the Grindelwald Fluctuation. At most, it worsened cooling already underway during that especially frigid stretch of the Little Ice Age.
For all the enduring questions it provokes, the new article draws welcome attention to the enormity of what it calls the “Great Dying” that accompanied European colonization, which was really more of a “Great Killing” given the deliberate role that many colonizers played in the disaster. It also highlights the momentous environmental changes that accompanied the European conquest. The so-called “Age of Exploration” linked not only the Americas but many previously isolated lands to the Old World, in complex ways that nevertheless reshaped entire continents to look more like Europe. We are still reckoning with and contributing to the resulting, massive decline in plant and animal biomass and diversity. Not for nothing do some date the “Anthropocene,” the proposed geological epoch distinguished by human dominion over the natural world, to the sixteenth century.
All of these issues also shed much-needed light on the Little Ice Age. Whatever its cause, we now know that climatic cooling had profound consequences for contemporary societies. Cooling and associated changes in atmospheric and oceanic circulation provoked harvest failures that all too often resulted in famines. In community after community, the malnourished repeatedly fell victim to outbreaks of epidemic disease, and mounting misery led many to take up arms against contemporary governments. Some communities and societies were resilient, even adaptive in the face of these calamities, but often partly by taking advantage of the less fortunate. Whether or not the New World genocide led to cooling, the sixteenth and seventeenth centuries offer plenty of warnings for our time.
My thanks to Georgetown environmental historians John McNeill and Timothy Newfield for their help with this article, to paleoclimatologist Jürg Luterbacher for answering my questions about ice cores, and to the many colleagues who responded to my initial reflections on social media.
Archer, S. "Colonialism and Other Afflictions: Rethinking Native American Health History." History Compass 14 (2016): 511-21.
Crosby, Alfred W. “Conquistador y pestilencia: the first New World pandemic and the fall of the great Indian empires.” The Hispanic American Historical Review 47:3 (1967): 321-337.
Crosby, Alfred W. The Columbian Exchange: Biological and Cultural Consequences of 1492. Westport: Greenwood Press, 1972.
Crosby, Alfred W. Ecological Imperialism: The Biological Expansion of Europe, 900-1900, 2nd Edition. Cambridge: Cambridge University Press, 2004.
Degroot, Dagomar. “Climate Change and Society from the Fifteenth Through the Eighteenth Centuries.” WIREs Climate Change Advanced Review. DOI:10.1002/wcc.518
Degroot, Dagomar. The Frigid Golden Age: Climate Change, the Little Ice Age, and the Dutch Republic, 1560-1720. New York: Cambridge University Press, 2018.
Gade, Daniel W. “Particularizing the Columbian exchange: Old World biota to Peru.” Journal of Historical Geography 48 (2015): 30.
Goldewijk, Kees Klein, Arthur Beusen, Gerard Van Drecht, and Martine De Vos, “The HYDE 3.1 spatially explicit database of human‐induced global land‐use change over the past 12,000 years.” Global Ecology and Biogeography 20:1 (2011): 73-86.
Jones, Emily Lena. “The ‘Columbian Exchange’ and landscapes of the Middle Rio Grande Valley, AD 1300– 1900.” The Holocene (2015): 1704.
Kelton, Paul. “The Great Southeastern Smallpox Epidemic, 1696-1700: The Region’s First Major Epidemic?” In R. Ethridge and C. Hudson, eds., The Transformation of Southeastern Indians, 1540-1760.
Koch, Alexander, Chris Brierley, Mark M. Maslin, and Simon L. Lewis. “Earth system impacts of the European arrival and Great Dying in the Americas after 1492.” Quaternary Science Reviews 207 (2019): 13-36.
McCook, Stuart. “The Neo-Columbian Exchange: The Second Conquest of the Greater Caribbean, 1720-1930.” Latin American Research Review 46: 4 (2011): 13.
McNeill, J. R. “Woods and Warfare in World History.” Environmental History, 9:3 (2004): 388-410.
Melville, Elinor G. K. A Plague of Sheep: Environmental Consequences of the Conquest of Mexico. Cambridge: Cambridge University Press, 1997.
PAGES2k Consortium, “A global multiproxy database for temperature reconstructions of the Common Era.” Scientific Data 4 (2017). doi:10.1038/sdata.2017.88.
Parker, Geoffrey. Global Crisis: War, Climate Change and Catastrophe in the Seventeenth Century. New Haven: Yale University Press, 2013.
Riley, James C. “Smallpox and American Indians Revisited.” Journal of the History of Medicine and Allied Sciences 65 (2010): 445-77.
Sigl, Michael et al. “Timing and climate forcing of volcanic eruptions for the past 2,500 years.” Nature 523:7562 (2015): 543.
Ruddiman, William. “The Anthropogenic Greenhouse Era Began Thousands of Years Ago.” Climatic Change 61 (2003): 261–93.
Ruddiman, William. Plows, Plagues, and Petroleum: How Humans Took Control of Climate. Princeton, NJ: Princeton University Press, 2005
Williams, Michael. Deforesting the Earth: From Prehistory to Global Crisis. Chicago: University of Chicago Press, 2002.
Prof. Kate Brown, MIT
This is the second post in a collaborative series titled “Environmental Historians Debate: Can Nuclear Power Solve Climate Change?” hosted by the Network in Canadian History & Environment, the Climate History Network, and ActiveHistory.ca.
Climate change is here to stay. So too, for the next several millennia, is radioactive fallout from nuclear accidents such as Chernobyl and Fukushima. Earthlings will also live with radioactive products from the production and testing of nuclear weapons. The question of whether next-generation nuclear power plants will be, as their promoters suggest, “perfectly safe” appears to decline in importance as we consider the catastrophic outcomes of continued use of carbon-based fuels: sea levels rising 10 feet, temperatures warming 3 degrees Celsius, tens of millions of climate refugees on the move. These predicted climate change catastrophes make nuclear accidents such as the 1986 Chernobyl accident look like a tiny blip in planetary time.
Or maybe not. It is hard to compare an event in the past to one in the future that has not yet occurred. Researching the medical and environmental history of the Chernobyl disaster over the past four years, I have found that the health consequences were far greater than has been generally acknowledged. Rather than the 35 to 54 fatalities recorded by UN agencies, the count in Ukraine alone (which received the least amount of radioactive fallout of the three affected Soviet republics) ranges between 35,000 and 150,000 fatalities from exposures to Chernobyl radioactivity. Instead of 200 people hospitalized after the accident, my tally from the declassified archives is at least 40,000 people in the three most affected republics just in the summer months following the disaster.
We don’t have to focus just on human health to worry about the future of humans on earth. Following biologists around the Chernobyl Zone over the past few years, I learned that in its most contaminated territories radioactivity has knocked out insects and microbes that are essential for the work of decomposition and pollination. Biologists Tim Mousseau and Anders Møller found radical decreases in pollinators in highly contaminated areas; the fruit flies, bees, butterflies and dragonflies were decimated by radioactivity in the soils where they lay their eggs. They found that fewer pollinators meant less productive fruit trees. With less fruit, fruit-eating birds like thrushes and warblers suffered demographically and declined in number. With fewer frugivores, fewer fruit trees and shrubs took root and grew. The team investigated 19 villages in a 15-kilometer circle around the ruined plant and found that just two apple trees had seeded in the two decades after the 1986 explosion.[1] The loss of insects, especially pollinators, we know, spells doom for humans on earth.[2] There are, apparently, many ways for our species to go extinct. Climate change is just one possibility.
Since Chernobyl, fewer corporations have been interested in building and maintaining nuclear power plants. In the past few decades, the cycle of nuclear power—building, maintaining, disposing of waste, and liability—has proven economically unfeasible and is winding down. Faced with intractable problems, regulations on classifying and cleaning up waste are being watered down. Westinghouse, the last U.S. builder of nuclear reactors, went bankrupt in 2017. It was bought out and struggles to complete orders for its AP1000 reactors. Now China and Russia are the main producers of reactors for civilian power. We don’t know much about China’s nuclear legacy. We know Russia’s safety record is dismal. Meanwhile, in most countries with nuclear reactors, an aging population of nuclear power operators, nuclear physicists, and radiation monitors is not being replaced by a younger generation.
Probably the greatest obstacle to backing nuclear power as an alternative fuel is that we have run out of time. The long-promised fusion reactors promoted with the billion-dollar might of the likes of Bill Gates and Jeff Bezos are still decades in the future. Roy Scranton estimates in Learning to Die in the Anthropocene that we would have to bring online 12,000 new conventional nuclear power reactors in order to replace petro-carbon fuels. It takes a decade or two to build a reactor. Conventional and fusion reactors would come online at a time when the major coastal cities they would power are predicted to be underwater.
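The scale of Scranton’s figure is easy to sanity-check. A rough sketch, assuming global fossil-fuel primary energy of about 470 EJ per year and a conventional 1-GW reactor running at a 90% capacity factor (both round-number assumptions for illustration, not figures from the post):

```python
# Order-of-magnitude check on how many conventional reactors it would
# take to replace fossil fuels. The energy figures below are assumed
# round numbers, not values from Scranton's book.

SECONDS_PER_YEAR = 365.25 * 24 * 3600

fossil_primary_energy_j = 470e18  # ~470 EJ/yr of fossil primary energy (assumed)
reactor_capacity_w = 1e9          # a conventional ~1 GW(e) reactor (assumed)
capacity_factor = 0.90            # typical nuclear availability (assumed)

# Average power delivered by fossil fuels, in watts
fossil_power_w = fossil_primary_energy_j / SECONDS_PER_YEAR

# Number of reactors needed to supply that average power
reactors_needed = fossil_power_w / (reactor_capacity_w * capacity_factor)

print(f"Average fossil power: {fossil_power_w / 1e12:.1f} TW")
print(f"Reactors needed:      {reactors_needed:,.0f}")
```

Counting primary energy somewhat overstates the need, since electricity displaces fossil heat engines more efficiently, but the result lands in the same order of magnitude as Scranton’s 12,000, and the build-time problem stands either way.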
In short, for a host of economic and infrastructure reasons, nuclear power as an alternative power source is not an option as a speedy and safe response to climate change. It makes more sense to take the billions invested in nuclear reactors and research and invest it in research for technologies that harvest energy from wind, sun, geothermal heat, biomass, tides and waves; solutions that depend on local conditions and local climates. Nuclear energy is seductive because it is a single fix-all to be plugged in anywhere by large entities, such as state ministries and corporations. This one-stop solution is the kind of modernist fix that got us into this mess in the first place. Instead, the far more plausible answer is multi-faceted, geographically-specific, and sensitive to micro-ecological conditions. It will involve not a few corporations led by billionaire visionaries, but a democratized energy grid organized by people in communities who have deep knowledge of historic and ecological conditions in their localities. As they work to power their community locally, they will see the value of conserving, saving, and living perhaps a little more quietly.
Kate Brown is a Professor of Science, Technology and Society at MIT. She is the award-winning author of A Biography of No Place: From Ethnic Borderland to Soviet Heartland; Plutopia: Nuclear Families in Atomic Cities and the Great Soviet and American Plutonium Disasters; and Dispatches from Dystopia: Histories of Places Not Yet Forgotten. She is currently finishing a book, A Manual for Survival, on the environmental and medical consequences of the Chernobyl disaster, to be published by Norton in 2019.
1 Anders Pape Møller, Florian Barnier, Timothy A. Mousseau, “Ecosystems effects 25 years after Chernobyl: pollinators, fruit set and recruitment,” Oecologia 170 (2012): 1155–1165.
2 Jarvis, Brooke, “The Insect Apocalypse Is Here,” The New York Times, November 27, 2018, sec. Magazine. https://www.nytimes.com/2018/11/27/magazine/insect-apocalypse.html.