
Did Colonialism Cause Global Cooling? Revisiting an Old Controversy

2/22/2019

 
Prof. Dagomar Degroot, Georgetown University.
A fanciful, seventeenth-century depiction of the fall of Tenochtitlan. "Conquista de México por Cortés". Unknown artist, second half of the 17th century. Library of Congress, Washington, DC.

Roughly 11,000 years ago, rising sea levels submerged Beringia, the vast land bridge that once connected the Old and New Worlds. Vikings and perhaps Polynesians briefly established a foothold in the Americas, but it was the voyage of Columbus in 1492 that firmly restored the ancient link between the world’s hemispheres. Plants, animals, and pathogens – the microscopic agents of disease – never before seen in the Americas now arrived in the very heart of the western hemisphere. It is commonly said that few organisms spread more quickly, or with more horrific consequences, than the microbes responsible for measles and smallpox. Since the original inhabitants of the Americas had never encountered them before, millions died.
 
The great environmental historian Alfred Crosby first popularized these ideas in 1972. It took over thirty years before a climatologist, William Ruddiman, added a disturbing new wrinkle. What if so many people died so quickly across the Americas that it changed Earth’s climate? Abandoned fields and woodlands, once carefully cultivated, must have been overrun by wild plants that would have drawn huge amounts of carbon dioxide out of the atmosphere. Perhaps that was the cause of a sixteenth-century drop in atmospheric carbon dioxide, which scientists had earlier uncovered by sampling ancient bubbles in polar ice sheets. By weakening the greenhouse effect, the drop might have exacerbated cooling already underway during the “Grindelwald Fluctuation”: an especially frigid stretch of a much older cold period called the “Little Ice Age.”
Tree growth anomalies relative to the 1000–1099 CE average (blue) with a 30-year running average (black), and European and Arctic summer temperature anomalies relative to the 1961–1990 average (red), from 1000–2000 CE. Adapted using data in Sigl et al. (2015).

Last month, an extraordinary article by a team of scholars at University College London captured international headlines by uncovering new evidence for these apparent relationships. The authors calculate that nearly 56 million hectares previously used for food production must have been abandoned in just the century after 1492, when they estimate that epidemics killed 90% of the roughly 60 million people indigenous to the Americas. They conclude that roughly half of the simultaneous dip in atmospheric carbon dioxide cannot be accounted for unless wild plants grew rapidly across these vast territories.
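For readers who want to check the arithmetic, here is a minimal back-of-the-envelope sketch in Python. It uses only the numbers cited above; the per-capita figure it prints is simply what those numbers imply, not an independent estimate.

```python
# Rough scale check of the figures cited from Koch et al. (2019).
population = 60e6      # estimated pre-contact population of the Americas
mortality = 0.90       # estimated share killed in the century after 1492
abandoned_ha = 56e6    # hectares of food-producing land abandoned

deaths = population * mortality          # ~54 million people
ha_per_person = abandoned_ha / deaths    # ~1.04 hectares per person

print(f"Implied deaths: {deaths / 1e6:.0f} million")
print(f"Implied abandoned land per person lost: {ha_per_person:.2f} ha")
```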
 
On social media, the article went viral at a time when the Trump Administration’s wanton disregard for the lives of Latin American refugees seems matched only by its contempt for climate science. For many, the links between colonial violence and climate change never appeared clearer – or more firmly rooted in the history of white supremacy. Some may wonder whether it is wise to quibble with science that offers urgently needed perspectives on very real, and very alarming, relationships in our present.
 
Yet bold claims naturally invite questions and criticism, and so it is with this new article. Historians – who were not among the co-authors – may point out that the article relies on dated scholarship to calculate the size of pre-contact populations in the Americas, and the causes for their decline. Newer work has in fact found little evidence for pan-American pandemics before the seventeenth century.

More importantly, the article’s headline-grabbing conclusions depend on a chain of speculative relationships, each with enough uncertainties to call the entire chain into question. For example, some cores exhumed from Antarctic ice sheets appear to reveal a gradual decline in atmospheric carbon dioxide during the sixteenth century, while others apparently show an abrupt fall around 1590. Part of the reason may have to do with local atmospheric variations. Yet the difference cannot be dismissed, since it is hard to imagine how gradual depopulation could have led to an abrupt fall in 1590.
 
To take another example, the article leans on computer models and datasets that estimate the historical expansion of cropland and pasture. Models cited in the article suggest that the area under human cultivation steadily increased from 1500 until 1700: precisely the period when its decline supposedly cooled the Earth. An increase would make sense, considering that the world’s human population likely rose by as many as 100 million people over the course of the sixteenth century. Meanwhile, merchants and governments across Eurasia depleted woodlands to power new industries and arm growing militaries.
Changes in the extent and distribution of historical cropland, 3000 BCE to the present, according to the HYDE 3.1 database of human-induced global land use change.

In any case, models and datasets may generate tidy numbers and figures, but they are by nature inexact tools for an era when few kept careful or reliable track of cultivated land. Models may differ enormously in their simulations of human land use; one, for example, shows 140 million more hectares of cropland than another for the year 1700. Remember that, according to the new article, the abandonment of just 56 million hectares in the Americas supposedly cooled the planet just a century earlier!

If we can make educated guesses about land use changes across Asia or Europe, we know next to nothing about what might have happened in sixteenth-century Africa. Demographic changes across that vast and diverse continent may well have either amplified or diminished the climatic impact of depopulation in the Americas. And even in the Americas, we cannot easily model the relationship between human populations and land use. Surging populations of animals imported by Europeans, for example, may have chewed through enough plants to hold off advancing forests. Moreover, the early death toll in the Americas was often especially high in communities at high elevations, where the tropical trees that absorb the most carbon could not grow.

In short, we cannot firmly establish that depopulation in the Americas cooled the Earth. For that reason, it is missing the point to think of the new article as either “wrong” or “right”; rather, we should view it as a particularly interesting contribution to an ongoing academic conversation. Journalists in particular should also avoid exaggerating the article’s conclusions. The co-authors never claim, for example, that depopulation “caused” the Little Ice Age, as some headlines announced, nor even the Grindelwald Fluctuation. At most, it worsened cooling already underway during that especially frigid stretch of the Little Ice Age.

For all the enduring questions it provokes, the new article draws welcome attention to the enormity of what it calls the “Great Dying” that accompanied European colonization, which was really more of a “Great Killing” given the deliberate role that many colonizers played in the disaster. It also highlights the momentous environmental changes that accompanied the European conquest. The so-called “Age of Exploration” linked not only the Americas but many previously isolated lands to the Old World, in complex ways that nevertheless reshaped entire continents to look more like Europe. We are still reckoning with and contributing to the resulting, massive decline in plant and animal biomass and diversity. Not for nothing do some date the “Anthropocene,” the proposed geological epoch distinguished by human dominion over the natural world, to the sixteenth century. 

All of these issues also shed much-needed light on the Little Ice Age. Whatever its cause, we now know that climatic cooling had profound consequences for contemporary societies. Cooling and associated changes in atmospheric and oceanic circulation provoked harvest failures that all too often resulted in famines. In community after community, the malnourished repeatedly fell victim to outbreaks of epidemic disease, and mounting misery led many to take up arms against their governments. Some communities and societies were resilient, even adaptive, in the face of these calamities, but often partly by taking advantage of the less fortunate. Whether or not the New World genocide led to cooling, the sixteenth and seventeenth centuries offer plenty of warnings for our time.

My thanks to Georgetown environmental historians John McNeill and Timothy Newfield for their help with this article, to paleoclimatologist Jürg Luterbacher for answering my questions about ice cores, and to colleagues who responded to my initial reflections on social media. 

Works Cited:

Archer, S. "Colonialism and Other Afflictions: Rethinking Native American Health History." History Compass 14 (2016): 511-21.

Crosby, Alfred W. “Conquistador y pestilencia: the first New World pandemic and the fall of the great Indian empires.” The Hispanic American Historical Review 47:3 (1967): 321-337.

Crosby, Alfred W. The Columbian Exchange: Biological and Cultural Consequences of 1492. Westport: Greenwood Press, 1972.

Crosby, Alfred W. Ecological Imperialism: The Biological Expansion of Europe, 900-1900, 2nd Edition. Cambridge: Cambridge University Press, 2004.

Degroot, Dagomar. “Climate Change and Society from the Fifteenth Through the Eighteenth Centuries.” WIREs Climate Change Advanced Review. DOI:10.1002/wcc.518

Degroot, Dagomar. The Frigid Golden Age: Climate Change, the Little Ice Age, and the Dutch Republic, 1560-1720. New York: Cambridge University Press, 2018.

Gade, Daniel W. “Particularizing the Columbian exchange: Old World biota to Peru.” Journal of Historical Geography 48 (2015): 30.

Goldewijk, Kees Klein, Arthur Beusen, Gerard Van Drecht, and Martine De Vos, “The HYDE 3.1 spatially explicit database of human‐induced global land‐use change over the past 12,000 years.” Global Ecology and Biogeography 20:1 (2011): 73-86.

Jones, Emily Lena. “The ‘Columbian Exchange’ and landscapes of the Middle Rio Grande Valley, AD 1300–1900.” The Holocene (2015): 1704.

Kelton, Paul. "The Great Southeastern Smallpox Epidemic, 1696-1700: The Region's First Major Epidemic?". In R. Ethridge and C. Hudson, eds., The Transformation of Southeastern Indians, 1540-1760. 

Koch, Alexander, Chris Brierley, Mark M. Maslin, and Simon L. Lewis. “Earth system impacts of the European arrival and Great Dying in the Americas after 1492.” Quaternary Science Reviews 207 (2019): 13-36.

McCook, Stuart. “The Neo-Columbian Exchange: The Second Conquest of the Greater Caribbean, 1720-1930.” Latin American Research Review 46: 4 (2011): 13.

McNeill, J. R. “Woods and Warfare in World History.” Environmental History, 9:3 (2004): 388-410.

Melville, Elinor G. K. ​A Plague of Sheep: Environmental Consequences of the Conquest of Mexico. Cambridge: Cambridge University Press, 1997.

PAGES2k Consortium, “A global multiproxy database for temperature reconstructions of the Common Era.” Scientific Data 4 (2017). doi:10.1038/sdata.2017.88.

Parker, Geoffrey. Global Crisis: War, Climate Change and Catastrophe in the Seventeenth Century. New Haven: Yale University Press, 2013. 

Riley, James C. "Smallpox and American Indians Revisited." Journal of the History of Medicine and Allied Sciences 65 (2010): 445-77.

Sigl, Michael et al., "Timing and climate forcing of volcanic eruptions for the past 2,500 years." Nature 523:7562 (2015): 543.

Ruddiman, William. “The Anthropogenic Greenhouse Era Began Thousands of Years Ago.” Climatic Change 61 (2003): 261–93.

Ruddiman, William. Plows, Plagues, and Petroleum: How Humans Took Control of Climate. Princeton, NJ: Princeton University Press, 2005.

Williams, Michael. Deforesting the Earth: From Prehistory to Global Crisis. Chicago: University of Chicago Press, 2002.

Next Generation Nuclear?

2/13/2019

 
Prof. Kate Brown, MIT

This is the second post in a collaborative series titled “Environmental Historians Debate: Can Nuclear Power Solve Climate Change?” hosted by the Network in Canadian History & Environment, the Climate History Network, and ActiveHistory.ca.
Climate change is here to stay. So too, for the next several millennia, is radioactive fallout from nuclear accidents such as Chernobyl and Fukushima. Earthlings will also live with radioactive products from the production and testing of nuclear weapons. The question of whether next-generation nuclear power plants will be, as their promoters suggest, “perfectly safe” appears to decline in importance as we consider the catastrophic outcomes of continued reliance on carbon-based fuels: sea levels rising 10 feet, temperatures warming 3 degrees Celsius, tens of millions of climate refugees on the move. These predicted climate change catastrophes make nuclear accidents such as the 1986 Chernobyl disaster look like a tiny blip in planetary time.

Or maybe not. It is hard to compare an event in the past to one in the future that has not yet occurred. Researching the medical and environmental history of the Chernobyl disaster for the past four years, I have found that its health consequences were far greater than has been generally acknowledged. Rather than the 35 to 54 fatalities recorded by UN agencies, the count in Ukraine alone (which received the least radioactive fallout of the three affected Soviet republics) ranges between 35,000 and 150,000 fatalities from exposure to Chernobyl radioactivity. Instead of 200 people hospitalized after the accident, my tally from the declassified archives is at least 40,000 people in the three most affected republics just in the summer months following the disaster.

We don’t have to focus just on human health to worry about the future of humans on earth. Following biologists around the Chernobyl Zone over the past few years, I learned that in the most contaminated territories of the Zone, radioactivity has knocked out insects and microbes that are essential for decomposition and pollination. Biologists Tim Mousseau and Anders Møller found radical decreases in pollinators in highly contaminated areas; the fruit flies, bees, butterflies, and dragonflies were decimated by radioactivity in the soils where they lay their eggs. They found that fewer pollinators meant less productive fruit trees. With less fruit, fruit-eating birds like thrushes and warblers suffered demographically and declined in number. With fewer frugivores, fewer fruit trees and shrubs took root and grew. The team investigated 19 villages in a 15-kilometer circle around the ruined plant and found that just two apple trees had seeded in the two decades after the 1986 explosion. [1] The loss of insects, especially pollinators, we know, spells doom for humans on earth. [2] There are, apparently, many ways for our species to go extinct. Climate change is just one possibility.

Since Chernobyl, fewer corporations have been interested in building and maintaining nuclear power plants. In the past few decades, the cycle of nuclear power—building, maintaining, disposing of waste, and liability—has proven economically unfeasible and is winding down. Faced with intractable problems, regulators are watering down rules for classifying and cleaning up waste. Westinghouse, the last U.S. builder of nuclear reactors, went bankrupt in 2017. It was bought out and struggles to complete orders for its AP1000 reactors. Now China and Russia are the main producers of reactors for civilian power. We don’t know much about China’s nuclear legacy. We know Russia’s safety record is dismal. Meanwhile, in most countries with nuclear reactors, an aging population of nuclear power operators, nuclear physicists, and radiation monitors is not being replaced by a younger generation.

Probably the greatest obstacle to backing nuclear power as an alternative fuel is that we have run out of time. The long-promised fusion reactors promoted with the billion-dollar might of the likes of Bill Gates and Jeff Bezos are still decades in the future. Roy Scranton estimates in Learning to Die that we would have to bring 12,000 new conventional nuclear power reactors online in order to replace petro-carbon fuels. It takes a decade or two to build a reactor. Conventional and fusion reactors would come online at a time when the major coastal cities they would power are predicted to be underwater.
​
In short, for a host of economic and infrastructure reasons, nuclear power is not an option as a speedy and safe response to climate change. It makes more sense to take the billions invested in nuclear reactors and invest them in research on technologies that harvest energy from wind, sun, thermal sources, biomass, tides, and waves: solutions that depend on local conditions and local climates. Nuclear energy is seductive because it is a single fix-all to be plugged in anywhere by large entities, such as state ministries and corporations. This one-stop solution is the kind of modernist fix that got us into this mess in the first place. Instead, the far more plausible answer is multi-faceted, geographically specific, and sensitive to micro-ecological conditions. It will involve not a few corporations led by billionaire visionaries, but a democratized energy grid organized by people in communities who have deep knowledge of the historic and ecological conditions of their localities. As they work to power their communities locally, they will see the value of conserving, saving, and living perhaps a little more quietly.

Kate Brown is a Professor of Science, Technology and Society at MIT. She is the award-winning author of A Biography of No Place: From Ethnic Borderland to Soviet Heartland; Plutopia: Nuclear Families in Atomic Cities and the Great Soviet and American Plutonium Disasters; and Dispatches from Dystopia: Histories of Places Not Yet Forgotten. She is currently finishing a book, A Manual for Survival, on the environmental and medical consequences of the Chernobyl disaster, to be published by Norton in 2019.

​
1 Anders Pape Møller, Florian Barnier, Timothy A. Mousseau, “Ecosystems effects 25 years after Chernobyl: pollinators, fruit set and recruitment,” Oecologia 170 (2012): 1155–1165.

2 Brooke Jarvis, “The Insect Apocalypse Is Here,” The New York Times, November 27, 2018, sec. Magazine. https://www.nytimes.com/2018/11/27/magazine/insect-apocalypse.html.

Closing Nuclear Plants Will Increase Climate Risks

1/30/2019

 
Prof. Nancy Langston, Michigan Tech
This is the first post in a collaborative series titled “Environmental Historians Debate: Can Nuclear Power Solve Climate Change?” hosted by the Network in Canadian History & Environment, the Climate History Network, and ActiveHistory.ca.

On March 28, 1979, I woke up late and rushed to catch the bus to my suburban high school in Rockville, MD. So it wasn't until I found my friends clustered around the radio in the cafeteria that I learned that, seventy-seven miles upwind of us, Three Mile Island Reactor Unit 2 was in partial meltdown.
Three Mile Island, Reactor Unit 2. Credit: Rowen’s Photography, (Creative Commons CC BY-ND 2.0).

Two months after the disaster, when the containment of its radioactivity was still in dispute, I was chosen as a finalist for a National Science Foundation (NSF)-sponsored competition to showcase emerging young scientists. The prize was a tour of Australia, where we were expected to promote the stellar safety record and wondrous technology of the U.S. nuclear program. The timing wasn't perfect, to put it mildly. At the finalists' interview, I ended up in a lively argument with the NSF judges when they told me that the public's nuclear anxieties were irrational, and I replied that NSF's certainties of safety were even more irrational, given the measurable risks of a meltdown and the failure of the U.S. to promote energy conservation as an alternative.
 
To no one's surprise, I was not chosen to represent America in that summer's nuclear wonders tour. Instead, I marched against nuclear power. When I watched The China Syndrome, the nuclear disaster movie that had premiered just days before the accident, all my worst suspicions about nuclear risks found fictional confirmation.
 
Four decades later, I now teach the problematic history of nuclear power. Students use the emerging field of discard studies to explore the structural context of a society that creates vast volumes of toxic waste, designating certain landscapes as sacrifice zones. We turn to Traci Voyles' insights in Wastelanding to understand the appalling history of uranium mining, exploring how the Diné (Navajo) were made into disposable peoples by the uranium mining industry. [1] We watch a few of the "Duck and Cover" movies from the 1950s to show how an enormous gap developed between potential nuclear hazards and possible individual responses. [2] When we examine the three major disasters in the history of nuclear energy—Three Mile Island, Chernobyl, and Fukushima—we use Diane Vaughan's concept of "the normalization of deviance" to explore the ways "disasters are socially organized and systematically produced by social structures” in high-risk industries. [3] After glancing at the risks of nuclear proliferation and terrorism, we finally turn to the challenges of high-level waste transport and storage.
 
This is hardly an eco-modernist paean to the promise of nuclear power. I sound less like Robert Stone in his 2013 pro-nuclear documentary Pandora's Promise and much more like the younger Robert Stone in his 1988 documentary Radio Bikini, which focuses on the horrors of nuclear weapons testing and fallout. [4]
Mushroom cloud, Ivy Mike. U.S. nuclear weapon test MIKE of Operation Ivy, 31 Oct 1952, the first test of a thermonuclear weapon (hydrogen bomb). Credit: National Nuclear Security Administration Nevada Site Office Photo Library IVY-52-05.

By the end of the segments on nuclear, my students fully expect me to call for an end to nuclear power. But I do the opposite: I call for continuing, not shuttering, nuclear power plants. Why? Because the risks of climate change are overwhelmingly greater than the risks of all stages of the nuclear cycle combined. I am convinced that to have a chance of avoiding the existential threat of runaway climate change, we must keep the globe's clunky, aging, awkwardly designed 451 nuclear reactors limping along for the foreseeable future. Until renewables have replaced all existing fossil fuels, closing aging nuclear plants would mean game over for keeping warming to less than 2°C. [5] To paraphrase Winston Churchill's comments on democracy: existing forms of nuclear power are the worst form of non-renewable energy—except for all the other forms ever yet tried.

To meet the objectives of the Paris Agreement, global CO₂ emissions need to decline as rapidly as possible, reaching net-zero emissions sometime after 2050. We also need to remove CO₂ from the atmosphere at scale. The problem? We are accelerating in the wrong direction. A recent boom in coal and natural gas, and a recent shuttering of nuclear plants, means that while carbon emissions leveled off briefly in the mid-2010s, they are increasing again. [6]
Global carbon budget 2018. Credit: Graphic by Nigel Hawtin, www.globalcarbonbudget.org, (Creative Commons CC BY).

Yes, there's some good news in solar and wind, which are growing exponentially as prices drop. Energy prices from utility-scale solar plants have dropped 86% in the past decade, and new solar now costs $50/MWh, less than half the cost of coal. [7] But renewables are not scaling up quickly enough for the globe to reach zero emissions by 2030. Remarkable as their growth has been, it has not offset the growth in coal, oil, and gas use over the same time—much less replaced existing fossil fuels. Microgrid and battery technologies may be advanced enough within several decades to supply 100% of our energy needs, but right now we need more than renewables in our zero-emissions energy portfolio to control climate change. When renewables have replaced all existing fossil fuels in power production, that's the time to consider closing existing nuclear plants.

In the U.S. right now, nuclear plants are our largest source of zero-emissions power, “producing about 60% of zero-emission electricity and approximately 20% of total electricity.” [8] Globally, if nuclear were shut down, we would emit an additional 2.5 billion metric tons of CO2 each year. [9] That's a lot of CO2.

Since 2013, competition from cheap natural gas—and the lack of an effective price on carbon—has led to the closure of five nuclear plants in the US. Six more plants are scheduled for closure by 2025 (although they could operate for decades longer, and they would be cost-effective if we priced the negative externalities of fossil fuel pollution with a carbon fee). These six plants generated nearly 60 million megawatt hours in 2017. That's more than all U.S. solar panels combined produced. If those six plants close, domestic CO2 emissions will increase nearly 5%, erasing all recent climate gains from last decade's decline of coal. Here's another way to think about it: if we close just one aging nuclear plant, Pennsylvania's notorious Three Mile Island, that will mean losing more zero-carbon power than all of the state’s renewable resources—solar, wind, geothermal, and hydro—put together.

Retiring nuclear plants in the U.S. without increasing carbon emissions would require a massive transformation of the transport sector, for example. If we were to retire U.S. nuclear plants, engineer Elizabeth Ervin estimates that 98.5% of the passenger cars on the road—134 million of them—would have to be eliminated to keep U.S. carbon dioxide emissions from increasing. [10] As the editorial staff of the Boston Globe calculated for Massachusetts, in 2017 "solar panels and wind turbines generated less than 5% of the utility-scale electricity" generated in the state. If the 680-megawatt Pilgrim reactor is closed as scheduled in 2019, that would "remove in one day more zero-emission electricity production than all the new windmills and solar panels Massachusetts has added over the last 20 years." [11]

When nuclear plants have shut down in recent years, fossil fuel emissions have increased. After Southern California Edison retired two reactors at the San Onofre nuclear plant in 2013, California electricity sector emissions rose 24% the next year (Plummer 2016; Kern 2016). When Vermont Yankee closed in 2014, CO2 emissions for the state electricity sector rose 5%. After Fukushima, when Japan began shutting down some reactors, its carbon emissions increased nearly 10%. Germany retired 8 of its 17 reactors after Fukushima, and the decline in its emissions quickly came to a halt: emissions increased from 2012 to 2013, fell in 2014, but increased again in 2015. Even with a sustained commitment to bringing new solar and wind online, Germany's decision to shut its nuclear plants undermined its climate efforts. [12]

Some states, such as California, have negotiated agreements to ensure that nuclear energy is replaced only with renewables. But that does not eliminate the climate hit from closing nuclear plants. In California, "Pacific Gas and Electric has announced that it plans to replace Diablo Canyon with zero-emitting resources, primarily renewables and energy efficiency. The utility has about eight years to prepare for these replacements.” [13] Electricity sector emissions won't go up, but because those renewables are replacing another zero-emissions energy source rather than high-emissions energy sources, California will still be further from meeting its essential goal of zero-emissions energy. Substituting one zero-emissions source for another does nothing to slow climate change.

Comparative Risks

Radiation is indeed frightening. In ordinary operation, coal plants release 100 times more radiation than the equivalent nuclear reactor—but it’s not ordinary operation that folks are concerned about; it's the risk of a meltdown. Risk is worth interrogating more closely, however. Risk is not just how scary something is. It's defined as hazard (the harm from something) times probability (the chance of that something happening). For example, the hazard of mutant zombies chewing our faces off is vast, but the probability (one trusts) is zero, meaning that the zombie risk is zero.
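For readers who like their definitions explicit, here is a minimal sketch of that formula in Python. The numbers are illustrative only; the zombie case is the one from the paragraph above.

```python
# Risk as defined above: hazard (the harm if the event occurs)
# times probability (the chance of it occurring).
def risk(hazard: float, probability: float) -> float:
    return hazard * probability

# The zombie example: an enormous hazard times a zero probability
# is still zero risk.
print(risk(hazard=1e9, probability=0.0))  # -> 0.0
```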

The hazard of a full nuclear meltdown—defined as core damage from overheating—is indeed very high, but not as high as most of my students imagine. When I ask my students what would happen if one of our three nuclear power stations here in Michigan went into full meltdown, they make wild guesses: "the complete eradication of all biodiversity in North America? Our state uninhabitable for the next 100,000 years?" These are way off. To better evaluate the hazard from a meltdown, we look at the worst-case scenarios calculated for Fukushima. If TEPCO had entirely evacuated the Daiichi plant during the disaster, full meltdowns at all 4 reactors would have occurred, and if that had necessitated the evacuation of personnel from neighboring nuclear plants, those could have melted down as well. The U.S. military and State Department modeled these worst-case scenarios during the crisis, because they had to figure out which people should be evacuated if the worst happened. Doubting TEPCO and the Japanese government's figures, they modeled even worse possibilities, calculating how far radiation could have travelled, given prevailing winds. While media speculation at the time centered on the potential evacuation of Tokyo, the U.S. modelers calculated that even if the worst-case scenario had come true, "there was no plausible scenario in which Tokyo, Yokosuka, or Yokota could be subject to dangerous levels of airborne radiation." As Jeffrey Bader explains in Foreign Affairs, Lawrence Livermore National Laboratory modeled "simultaneous meltdowns at one or more reactors and complete drainage of the spent fuel pools at two reactors. The results for such worst-case scenarios, assuming unfavorable wind patterns from the reactor site and a lack of precipitation, suggested that radioactive plumes in excess of EPA standards would not reach within 75 to 100 miles of Tokyo." [14]
Chernobyl radiation map 1996. Credit: CIA handbook, 1996, (Creative Commons CC BY-SA 2.5), via Wikimedia Commons.

Yes, that represents an enormous hazard, even if it doesn't mean that entire continents would be rendered uninhabitable. Chernobyl was an even worse disaster, magnified by poor Soviet nuclear design and worse maintenance. The 18-mile-radius exclusion zone around Chernobyl is still off limits for permanent residents, and the Zone of Alienation covers roughly 1,000 square miles. 70% of the fallout landed on Belarus, contaminating 25% of the country. People who work in the exclusion zone must rotate in and out to limit radiation exposure, and extremely toxic hot spots persist. [15] Entire continents may not be rendered uninhabitable by a meltdown, but the hazards are terrible for those exposed, cast out from their homes, suffering possible radiation-related illnesses.

But while these hazards are high, the probability of them recurring is much lower than the probability of climate change. The nuclear industry has experienced 3 partial meltdowns in 17,000 cumulative reactor-years of commercial operation, which translates to a 0.018% probability of any given reactor having a serious accident in any given year. That's a significant probability for something with such a high hazard, which means the risk is real. The industry is very good at responding to historical disasters and designing new safety systems to lessen the risk of the same accident occurring twice, but probably less good at anticipating new things that can and will go wrong in such complex systems. These are real risks, and dismissing them as negligible is not persuasive to most people—certainly not to the 81% of Mexicans who oppose nuclear energy production, or the 63% of Canadians who oppose it, or the 48% of Americans who oppose it. [16]
Opposition to Nuclear Energy. Credit: Hannah Ritchie, Our World in Data, 2017 https://ourworldindata.org/what-is-the-safest-form-of-energy, (Creative Commons CC BY-SA).

Concerns about meltdowns are matched, for many people who oppose nuclear, by anxiety about waste storage. Long-term storage of high-level nuclear waste is a huge cost issue. Technical solutions exist for the containment of high-level waste, as Finland's current project to build the world's first high-level, long-term waste storage facility shows—but most countries have been unwilling to pay the necessary costs. Finland's project costs more than $5 billion. But Finland has a negative-externalities law for industries, so the company pays, not the public. In comparison, the environmental and health costs of coal mining in the U.S. alone—which the companies do not have to pay for—are at least $345 billion/year, dwarfing the costs of long-term nuclear storage. [17]

To make intelligent decisions about energy portfolios in the context of climate change, we need to compare nuclear's risks to the risks of coal pollution and climate deaths. Consider Michigan, I tell my students, most of whom are from the state. In 2015, 30% of our electricity came from nuclear and 50% from coal. Do these coal plants present risks as great as our nuclear plants? Students typically assume not. They guess that historically, global deaths from nuclear disasters and radiation exposure have been much higher than global deaths from coal. But they are off by 400-fold. For every person who has ever died in a nuclear accident or from long-term radiation exposure, more than 400 have died from coal. [18]
Death rates from nuclear energy production. Credit: Hannah Ritchie, Our World in Data, 2017 https://ourworldindata.org/what-is-the-safest-form-of-energy, (Creative Commons CC BY-SA).

Coal represents a kind of "slow violence," in Rob Nixon's evocative phrase, so it's largely underestimated. [19] Because there's no single crippling accident that captures the world's attention, the hazards of coal combustion are often invisible to most Americans. But they are enormous. Nine million people died premature deaths from pollution in 2015, mostly from air pollution. 85% of that airborne pollution comes from fossil fuel and biomass combustion, mostly from coal. Coal kills millions each and every year—even ignoring the risks of climate change. [20] One way to run these numbers is to compare mortality rates per trillion kWh of energy production (this includes deaths, both direct and indirect, from Chernobyl and Fukushima, using the highest estimates of deaths from radiation exposure). James Conca's analysis compares the figures for different countries, and shows that coal from China (75% of China's electricity) has a mortality rate of 170,000 deaths per trillion kWh. Rooftop solar sees 440 deaths (installers fall off roofs) and wind turbines cause 150 deaths. In the United States, nuclear has led to 0.1 deaths per trillion kWh. Worldwide, counting direct deaths at Chernobyl and Fukushima as well as indirect deaths from radiation exposure and increased cancer risk, nuclear's history has witnessed 90 deaths per trillion kWh of energy production.
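Collected in one place, the mortality figures cited above compare as follows. This is a sketch only; the numbers are exactly those given in the paragraph, drawn from Conca's analysis.

```python
# Deaths per trillion kWh of energy produced, as cited above (Conca 2012).
deaths_per_trillion_kwh = {
    "coal (China)": 170_000,
    "rooftop solar (falls during installation)": 440,
    "wind": 150,
    "nuclear (worldwide, incl. Chernobyl and Fukushima)": 90,
    "nuclear (United States)": 0.1,
}

# Print the sources from deadliest to safest.
for source, rate in sorted(deaths_per_trillion_kwh.items(),
                           key=lambda item: item[1], reverse=True):
    print(f"{source:<50} {rate:>10,}")
```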

Indigenous peoples have disproportionately borne the risks from the mining and processing of uranium. But shutting down nuclear wouldn't solve the environmental justice problem, because continued reliance on fossil fuels also disproportionately hurts the poor. Globally, 92% of the 9 million people who died from pollution in 2015 lived in lower-income countries or communities. [21] Senator Cory Booker noted: "My city [Newark] has asthma rates for our children that are epidemic, about three to four times what they are in other communities. I know what the urgencies are here in the immediate, right now … I also know we can't get there unless we substantially support and even embolden the nuclear energy sector. We've got to support the existing fleet." [22] When comparing risks from fossil fuels versus other energy sources, we also need to factor in the risks from runaway climate change. These are harder to measure with certainty. One estimate figures 14 million additional deaths from heat-related illness alone if temperatures rise 4°C—which is what will happen if we continue business as usual. [23] Another study estimates more than half a million excess deaths by 2050 from reduced food production alone. [24]

The worst-case scenario? Take a look at the end-Permian mass extinctions 252 million years ago. Emissions of large amounts of CO2 led to the extinction of 90% of life in the ocean and 75% on land. As Peter Brannen writes, "Today the consequence of quickly injecting huge pulses of carbon dioxide into the air is discussed as if the threat exists only in the speculative output of computer models. But, as scientists have discovered, this has happened many times before, and sometimes the results were catastrophic." We are now releasing CO2 at 10 times the rate that sparked the end-Permian. The hazard of runaway climate change? Existential. The probability? Unfortunately, it's higher than the probability of another nuclear reactor meltdown, given the fact that our greenhouse gas emissions are exponentially increasing.
​
In the future, we hope that conservation programs will lead to a significant reduction in power demands, allowing renewables with batteries for backup and microgrids for resilience to supply the globe's power needs. But even that hopeful vision isn't fully sustainable, because renewables involve significant mining of non-renewable resources. Every energy source—including solar, wind, and geothermal—creates mining waste and greenhouse gas emissions over its full life cycle (which includes mining, processing, transport, energy production, and waste storage). Life cycle analyses show that coal generates 1000 grams of CO2 per kWh. Solar generates 58 grams—much less than coal, but more than wind and nuclear at 5 grams of CO2 per kWh. [25] Even conservation, laudable as it is, has a mining and greenhouse gas footprint, because it typically involves the production of foam or cellulose insulation, which includes some polystyrene with all of its associated plastic ills.
Comparison of Lifecycle Greenhouse Gas Emissions of Various Electricity Generation Sources. Credit: World Nuclear Association.

Nuclear is not classified as renewable because its energy source is mined; but does it really require more mining per kWh of energy produced than solar or wind? I didn't have time to track down these figures, but it's worth considering the question. If we use the carbon footprint of mining as a rough proxy for disturbance from mining, then nuclear is on par with wind, and less problematic than coal and natural gas. My broader point is that assigning simple categories to energy sources such as “renewable vs. non-renewable” or “sustainable vs. non-sustainable” is problematic. Every energy source involves the mining of non-renewable resources such as copper, nickel, and zinc. Every energy source creates some greenhouse gas emissions during its full life cycle. But wind, solar, geothermal, and nuclear are orders of magnitude below fossil fuels, and controlling runaway climate change requires all of them right now, with renewables scaling up as quickly as possible and nuclear giving us time for that to happen.

Thorium and Next-Gen Plant Designs

While I'm no eco-modernist, I am intrigued by emerging technologies such as thorium and next-gen plants that use a cradle-to-cradle design philosophy to re-imagine used fuel not as waste, but rather as a generative source of power for new energy. Yes, light water reactors (LWR) can be made much safer than the older designs now in use. But no amount of tweaking will overcome the fact that light water reactors, which rely on water to prevent meltdowns, are inherently poor designs for commercial energy production. They were designed for nuclear submarines, where a water-cooled design had a fail-safe backup in case of power failure.

Far better, safer fuels exist for nuclear plants, such as thorium. Thorium is an element abundant in the U.S. and Canada, and while radioactive like uranium, it presents far fewer mining, power generation, and waste storage risks. 99% of mined thorium can produce energy, compared to uranium, of which only 1% creates energy and the other 99% becomes radioactive waste. One ton of thorium can produce as much energy as 35 tons of uranium. What all this means is that there are two orders of magnitude less radioactive waste from thorium than from uranium. Meltdown risks are negligible, because thorium-fueled molten salt reactors are self-cooling in case of disaster, not relying on water to stop a meltdown. [26] Spent fuel cannot be weaponized easily or seized by terrorists. Thorium, in other words, eliminates many of the potential hazards from conventional nuclear.
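A quick check of the "two orders of magnitude" claim, using only the percentages given above:

```python
# Waste fractions implied by the figures above: ~1% of mined thorium
# becomes waste, versus ~99% of mined uranium.
thorium_waste = 1 - 0.99    # 0.01
uranium_waste = 1 - 0.01    # 0.99

ratio = uranium_waste / thorium_waste
print(f"Uranium yields ~{ratio:.0f}x more waste per ton mined")  # ~99x
```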

So why aren't we using this miracle fuel for nuclear plants? Uranium is a legacy of the Cold War. We once had a functional thorium reactor, developed at the Oak Ridge National Laboratory during the late 1960s. It ran for five years "before being axed by the Nixon administration. The reason for its cancellation: it produced too little plutonium for making nuclear weapons. Today, that would be seen as a distinct advantage. Without the Cold War, the thorium reactor might well have been the power plant of choice for utilities everywhere." [27] At the time, one key purpose of our nuclear energy program was to supply our nuclear weapons program. Now, of course, thorium's Cold War bug has become its feature, but we are woefully behind in thorium research. The Netherlands started up a proof-of-concept thorium reactor in 2017, the first one in several decades, and both India and China are busy researching new designs. Significant research and regulatory hurdles remain before thorium reactors can replace uranium reactors, so like scaled-up solar and wind storage, it's not going to be online soon enough to prevent 2°C of warming.
The molten salt reactor experiment fueled by thorium, 1964. Credit: Oak Ridge National Laboratory, ORNL Photo 67051-64.

Over the longer haul, thorium might give us a little breathing space to make fundamental political and structural changes, but it doesn't obviate the need for those changes. To illustrate this, I show my students the opening clips of Okkupert, a Norwegian TV series that starts with a hurricane powered by climate change that kills thousands of Norwegians. Shocked, the citizens elect as Prime Minister the leader of the Green Party, and he promptly shuts down the flow of North Sea oil. To provide Norway's domestic energy needs, he opens a nuclear plant powered by thorium. Cutting off the flow of oil destabilizes the power relations that have developed around North Sea oil. Russia invades Norway to turn the oil back on, and the EU and the U.S. simply watch, eager for their share. [28] New energy sources don't erase generations of political balancing acts around fossil fuels, in other words. Politics don't vanish when the fossil fuels get turned off.  

Conservation has enormous potential to help us reduce fossil fuel emissions, but on its own, it won't get us to where we need to be fast enough. As recently as 2012, more than 1 billion people lacked access to electricity—15% of the world's population. 41% of the world's population, or 2.9 billion people, lacked access to safe cooking fuels. [29] Electricity provision helps ensure a host of other goals: education for girls, independent incomes, safer indoor air, tangible health benefits, etc. Even with a 50% reduction of energy use in more-developed nations such as the U.S., where roughly two-thirds of energy is wasted, global energy demands will continue to rise. In the US, according to the Sankey diagrams of energy flows produced by Lawrence Livermore National Laboratory, there were 66.7 quads (a quad is one quadrillion BTU) of "rejected energy" in 2017, compared to 31.1 quads of energy services. This means that we're wasting 68% of the energy produced, particularly in transportation (where 79% of energy is wasted) and electricity generation (where 66.4% of energy is wasted). [30]
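The waste-share arithmetic from the LLNL figures is easy to reproduce. This sketch uses only the quad figures quoted above:

```python
# Share of U.S. primary energy wasted in 2017, from the LLNL quad figures above.
rejected = 66.7   # quads of "rejected energy"
services = 31.1   # quads of delivered energy services

waste_share = rejected / (rejected + services)
print(f"Wasted share of primary energy: {waste_share:.1%}")  # ~68.2%
```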

​The second law of thermodynamics means that some energy will always be wasted—but nonetheless, there's enormous scope for effective conservation in the US. And again, we are heading in the wrong direction: in 1970, Americans wasted 49% of energy, far less than we waste today. [31] The problem is structural, not personal behavior. It’s not just that we're doing a poor job insulating our houses or changing to LED lightbulbs. As David Roberts points out, "at a deeper level, waste is all about system design. The decline in overall efficiency in the U.S. economy mainly has to do with the increasing role of inefficient energy systems. Specifically, the years since 1970 have seen a substantial increase in electricity consumption and private vehicles for transportation, two energy services that are particularly inefficient." [32]
Sankey Diagram of energy flows and rejected energy, 1970 and 2017. Credit: Lawrence Livermore National Laboratory.

Better-designed urban systems, better electricity systems, better transport systems: all will go a long way toward decreasing carbon emissions. New solar mega-farms, like the one being constructed in Morocco that will eventually power a million households, will someday help us meet the globe's new energy needs, possibly with the help of thorium. Until that day, maintaining existing nuclear plants, problematic as they are, is essential.

Prof. Nancy Langston is an environmental historian who explores the connections between toxics, environmental health, and industrial changes in Lake Superior and other boreal watersheds.


[1] Traci Brynne Voyles, Wastelanding: Legacies of Uranium Mining in Navajo Country,  (Minneapolis: Univ Of Minnesota Press, 2015).

[2] Nuclear Vault, “Duck and Cover (1951), Bert the Turtle,” https://www.youtube.com/watch?v=IKqXu-5jw60.

[3] Diane Vaughan, The Challenger Launch Decision: Risky Technology, Culture, and Deviance at NASA,  (Chicago: University of Chicago Press, 2016).

[4] Rens Van Munster and Casper Sylvest, “Pro-Nuclear Environmentalism: Should We Learn to Stop Worrying and Love Nuclear Energy?,” Technology and Culture 56, no. 4 (2015): 789–811.

[5] Eric Holthaus, “It’s Time to Go Nuclear in the Fight against Climate Change,” Grist, January 12, 2018, https://grist.org/article/its-time-to-go-nuclear-in-the-fight-against-climate-change/.

[6] Bready Dennis and Chris Mooney, “‘We Are in Trouble.’ Global Carbon Emissions Reached a Record High in 2018,” Washington Post, December 5, 2018, https://www.washingtonpost.com/energy-environment/2018/12/05/we-are-trouble-global-carbon-emissions-reached-new-record-high/.

[7] Jeremy Berke, “One Simple Chart Shows Why an Energy Revolution Is Coming — and Who Is Likely to Come out on Top,” Business Insider, May 8, 2018, https://www.businessinsider.com/solar-power-cost-decrease-2018-5.

[8] Rebecca Kern, “As U.S. Nuclear Plants Close, Carbon Emissions Could Go Up” (Bloomberg Environment & Energy Report, 2016), https://www.bna.com/us-nuclear-plant-n73014445640/.

[9] “Environmental Impacts,” accessed December 3, 2018, https://sites.psu.edu/jensci/2014/02/26/environmental-impacts/.

[10] Elizabeth Ervin, “Nuclear Energy Statistics,” accessed December 3, 2018, https://www.google.com/search?q=mining+required+per+MW+of+energy+produced+by+nuclear&oq=mining+required+per+MW+of+energy+produced+by+nuclear&aqs=chrome..69i57.8179j0j4&sourceid=chrome&ie=UTF-8.

[11] Editorial staff, “Retiring More Nuclear Plants Could Hurt Mass. Climate Goals - The Boston Globe,” Boston Globe, June 2, 2018, https://www.bostonglobe.com/opinion/editorials/2018/06/02/retiring-more-nuclear-plants-could-hurt-mass-climate-goals/z0PRjeQPr0TIVBtsYB7rpI/story.html.

[12] Kern, “As U.S. Nuclear Plants Close, Carbon Emissions Could Go Up.”

[13] Kern.

[14] Jeffrey A. Bader, “Inside the White House During Fukushima,” Foreign Affairs, March 8, 2012, https://www.foreignaffairs.com/articles/americas/2012-03-08/inside-white-house-during-fukushima.

[15] Luke Spencer, “12 Facts About Chernobyl’s Exclusion Zone 30 Years After the Disaster,” Mental Floss, April 26, 2016, http://mentalfloss.com/article/78779/12-facts-about-chernobyls-exclusion-zone-30-years-after-disaster.

[16] Hannah Ritchie, “It Goes Completely against What Most Believe, but out of All Major Energy Sources, Nuclear Is the Safest,” Our World in Data (blog), July 24, 2017, https://ourworldindata.org/what-is-the-safest-form-of-energy.

[17] Henry Fountain, “Finland Works, Quietly, to Bury Its Nuclear Reactor Waste,” The New York Times, June 9, 2017, https://www.nytimes.com/2017/06/09/science/nuclear-reactor-waste-finland.html.

[18] Ritchie, “It Goes Completely against What Most Believe, but out of All Major Energy Sources, Nuclear Is the Safest.”

[19] Rob Nixon, Slow Violence and the Environmentalism of the Poor (Cambridge, Mass: Harvard University Press, 2011).

[20] Philip J. Landrigan et al., “The Lancet Commission on Pollution and Health,” The Lancet 391, no. 10119 (February 3, 2018): 462–512, https://doi.org/10.1016/S0140-6736(17)32345-0.

[21] Landrigan et al.

[22] James Conca, “How Deadly Is Your Kilowatt? We Rank The Killer Energy Sources,” Forbes, June 10, 2012, https://www.forbes.com/sites/jamesconca/2012/06/10/energys-deathprint-a-price-always-paid/.

[23] Greg Ip, “Adding Up the Cost of Climate Change in Lost Lives,” Wall Street Journal, August 1, 2018, sec. Economy, https://www.wsj.com/articles/adding-up-the-cost-of-climate-change-in-lost-lives-1533121201.

[24] Marco Springmann et al., “Global and Regional Health Effects of Future Food Production under Climate Change: A Modelling Study,” The Lancet 387, no. 10031 (May 7, 2016): 1937–46, https://doi.org/10.1016/S0140-6736(15)01156-3.

[25] MIT Energy Initiative, “The Future of Nuclear Energy in a Carbon-Constrained World,” 2018.

[26] Nicolas Cooper et al., “Should We Consider Using Liquid Fluoride Thorium Reactors for Power Generation?,” Environmental Science & Technology 45, no. 15 (August 1, 2011): 6237–38, https://doi.org/10.1021/es2021318.

[27] Babbage, “The Nuke That Might Have Been,” The Economist, November 11, 2013, https://www.economist.com/babbage/2013/11/11/the-nuke-that-might-have-been.

[28] Erik Skjoldbjærg, “Okkupert/Occupied,” 2015, https://www.netflix.com/title/80092654.

[29] World Bank, “Sustainable Development Goal on Energy (SDG7) and the World Bank Group,” World Bank, May 26, 2016, http://www.worldbank.org/en/topic/energy/brief/sustainable-development-goal-on-energy-sdg7-and-the-world-bank-group.

[30] Lawrence Livermore National Laboratory, “LLNL Flow Chart 2017 and 1970,” https://flowcharts.llnl.gov/.

[31] David Roberts, “American Energy Use, in One Diagram,” Vox, April 17, 2017, https://www.vox.com/energy-and-environment/2017/4/13/15268604/american-energy-one-diagram.

[32] Roberts.

Environmental Historians Debate: Can Nuclear Power Solve Climate Change?

1/25/2019

 
Professors Jim Clifford, Dagomar Degroot, and Daniel Macfarlane.

This is the introductory post to a collaborative series titled “Environmental Historians Debate: Can Nuclear Power Solve Climate Change?”. It is hosted by the Network in Canadian History and Environment, the Climate History Network, and ActiveHistory.ca.
The first light bulbs ever lit by electricity generated by nuclear power at EBR-1 at Argonne National Laboratory-West, December 20, 1951.

Is nuclear power a saving grace - or the next step in humanity’s proverbial fall from grace?

This series focuses on what environmental and energy historians can bring to discussions about nuclear power. It is a tripartite effort between Active History, the Climate History Network (CHN), and the Network in Canadian History and Environment (NiCHE), and will be cross-posted across all three platforms. Reflecting this hydra-headed approach, this series is co-edited by a member of each of those websites: Jim Clifford (Active History), Dagomar Degroot (CHN/HistoricalClimatology.com), and Daniel Macfarlane (NiCHE).

Why a series on historians, nuclear power, and the future? After all, predicting the future is pretty much a fool’s errand, and one that historians tend to avoid. But this isn’t so much about prognosticating what is to come as using the knowledge and wisdom of history to inform dialogue about the present and future.

It all started on Twitter, as these things often do. Daniel Macfarlane was tweeting back and forth with Sean Kheraj about some energy history books they had recently read. Daniel was lamenting that one ended with an arrogant screed about how nuclear energy was the only hope for the future, and anyone who didn’t think so was deluded. This led them to wonder - on Twitter, mind you - what environmental historians, and those who studied energy history in particular, thought of nuclear energy’s prospects.

Some other scholars, many of whom will be represented in this series (Dagomar Degroot, Andrew Watson, Nancy Langston, Robynne Mellor), began chiming in online. The exchanges remained very collegial, but it was clear that there were some sharply diverging positions. This mirrored the stark divides one often finds among environmentalists and environmental studies students. To some, nuclear energy is just another dead end, like fossil fuels; to others, it offers humanity its only real hope of addressing climate change.

The three editors of this series fall at different points along a spectrum running from anti-nuclear to pro-nuclear, with an in-between that might best be called anti-anti-nuclear. Daniel Macfarlane is decidedly a nuclear pessimist, Dagomar Degroot sees an enduring role for nuclear fission on a limited scale, and Jim Clifford is not sure how to engage the nuclear debate within the context of continued inaction on carbon emissions. Each of the three will explain their basic positions below (in the first-person voice for the sake of coherence).

Daniel Macfarlane: In my opinion, any energy and economic system that continues to foster our consumption and lifestyles is part of the problem. As Nancy Langston will show in this series, nuclear power is undoubtedly better than coal. But the belief that we can continue with the same standards of living is dangerous - the addiction to growth and consumption is the major driver of ecological problems (I’m firmly in the Prophet, rather than the Wizard, camp, a dyad which Andrew Watson will explicate in his contribution to this series). And nuclear energy fosters that addiction, on top of the threats from nuclear waste (which Robynne Mellor will discuss), nuclear fallout (the focus of Toshihiro Higuchi’s contribution), and nuclear accidents (which Kate Brown will address).

I don’t think fully switching to nuclear power would solve our climate change problems. Nuclear power isn’t likely to stop us from driving cars and flying jets; from covering the earth in concrete, overconsuming meat, and having children. The only real energy solution is a drastic, huge reduction in energy consumption, primarily by the industrial and commercial sectors as well as those of us in the middle class and above within the “developed” world. Nuclear power is pitched as a panacea, a magical silver bullet that will allow us to have our cake and eat it too (in this case, I literally mean cake, as well as all the other consumer products we want). It would mean that we wouldn’t have to change our lifestyles. In that sense, switching to nuclear power is kind of the energy systems equivalent of banning plastic straws. If it is the first step in a long, long line of progressively harder steps, then great; but if it becomes the end in itself, a panacea that leads us to think that we’re doing enough and we can rest on our laurels, then it is an obstacle (for the record, I use metal straws - but getting rid of plastic straws alone isn’t going to make a noticeable dent in our plastics problem). The problem is our current systems - economic, political, and social - and nuclear energy is just going to prop up the problem.

Today’s nuclear advocates sound an awful lot to me like the advocates of coal, petroleum, and hydropower from the past that I’ve researched - whose ranks often featured well-intentioned, educated, progressive, and preservationist-inclined folks. The history of energy transitions suggests that proponents and boosters of new energy forms are generally wrong about the hidden costs - why would nuclear be any different? Historians of all people should know about humanity’s propensity to irrationally stress the positives and downplay the negatives. If all of our other modern energy forms have been booby-trapped with major drawbacks once scaled up, why would nuclear be free of similar problems? I’m wary of any large-scale energy system because of the rule of unintended consequences. I mean, when we started burning coal in the 19th century, who the heck thought that it could change the climate?! Who thought that hydropower reservoirs would concentrate mercury and emit methane? And yet none of those energy sources can be used as weapons of unimaginable mass destruction, or remain hazardous to the health of living organisms for eons as they decay. All technologies bite back, and the bigger the technological system, the bigger the bite.

Dagomar Degroot: I’ll begin by admitting that I’m deeply sympathetic to Daniel’s point of view. Clearly, efforts should be made on every level – individual, municipal, national – to conserve energy and reduce consumption. Yet I am more skeptical than Daniel about the desirability, morality, and practicality of slashing our use of energy, and in turn our standards of living.

My view is that most of our environmental challenges stem from inefficient and often immoral political and economic systems: systems that promote inequality and ignore environmental costs. Engines of consumption and exploitation that privilege the whims of the privileged over the needs of the majority now promote the wholesale destruction of tropical forests, the exhaustion of the oceans, the pollution of the atmosphere – all the myriad interconnected environmental perils of the Anthropocene. In these processes, the core problem is not precisely how much energy we use as a species, but rather how we generate energy and then how we use it for industry, agriculture, and transportation. Today, governments choose to promote fossil fuels over cleaner alternatives, not only because industries that produce fossil fuels have disproportionate political power, but also because our built environment – from cars to sprawling cities – has historically reflected and demanded the use of fossil fuel technologies.

It doesn’t have to be this way. What we need are revolutionary policies and technologies that lead us to use energy more efficiently, to build with minimal carbon emissions, to generate energy cleanly, to promote social equality, and to privilege – above all else – environmental sustainability. Even in the developed world, policies that sharply reduce standards of living would to my mind be unnecessary, even if they were politically viable (they aren’t). In the developing world, energy consumption will actually need to go up, lest millions remain consigned to desperate poverty. An important question for me is: how can we increase our consumption of energy while sharply reducing the environmental impact of energy consumption?

Renewable energy is booming, and nuclear fission reactors are a far less appealing energy source than, for example, solar power plants or wind farms. If truly transformative technologies – such as controlled fusion – ever get off the ground, nuclear fission will be even less competitive. Fission reactors are costly and time-consuming to build, and some designs at least have turned out to be unsafe. As you will read, they also come with a host of unique problems. Yet at present, renewable energy alternatives cannot generate power on sufficient scales – with sufficient consistency – for every community. In all probability, we will need to construct new nuclear fission plants in order to reduce our carbon emissions quickly enough to avert truly catastrophic climate change. And we should be especially wary of decommissioning older nuclear fission reactors. Given the present limitations of renewable energy, those reactors are too often replaced by coal or natural gas.

Yes, an enduring role for nuclear fission power is an unsavory prospect. But climate change on a scale that makes large parts of the Earth uninhabitable is considerably worse.

Jim Clifford: I find a lot to agree with in both Daniel and Dagomar’s contributions. I will use my space to build on Dagomar’s quip that political, social, and economic change on the scale necessary to dramatically reduce global energy consumption during the 2020s is not possible. As a historian who has studied social and political change in the face of deep environmental challenges during the nineteenth century, and who has taught early twentieth-century European history for the past six years, I think the socio-economic and cultural optimism of the Prophets is perhaps more unrealistic than the techno-optimism of the Wizards. Wizards and engineers can point to significant developments in the electrification of transportation, the various ways to dramatically increase solar and wind capture and storage, along with the plans for saltwater biofuels and promising new nuclear technologies. What evidence do we have of rapid progress towards an empowered grassroots democracy that suggests we can upend our culture and convince people to accept a significant reduction in their standards of living in the next decade? The riots in France are in part a reaction to a relatively minor effort to reduce people’s diesel fuel consumption; the Ontario Liberals’ tepid embrace of green energy helped bring a populist into office to dismantle much of what they accomplished; Australia has yet another prime minister after a government collapsed for trying to bring about small changes; and Justin “Canada is back” Trudeau bought a pipeline.

So how do we achieve social, political, and cultural change on a global scale in a very short period of time? Individuals electing not to fly, having fewer children, or biking to work are not going to make a significant dent. We need societal change across the industrialized West. How, in a democracy, do we achieve this goal? I don’t see the power of the wealthy elite diminishing significantly, or capitalism somehow being transcended on a global scale while maintaining enough stability to transition rapidly and peacefully to a lower standard of living. We might end up with a global war that devastates the global economy and our standard of living, but obviously this is not the pathway we want to follow to solve the crisis, as it would just accelerate ecological destruction and human suffering. All of this is to say: when presented with the binary, I think techno-optimism, and the growing momentum for a Green New Deal to build this infrastructure, create jobs, and maintain middle-class standards of living, is the most viable path in the short to medium term.

I hope our culture will start to shift quickly in response to the populist moment we are living through, and that we can aim for a hybrid between the two approaches by mid-century, when fear sets in and the generation who come of age watching California burn reject the culture of their parents and grandparents. We also need Prophets to imagine a transition away from today’s consumption-focused culture and to provide alternatives for people to embrace at some point in the future. In the meantime, if there is the local political will to invest billions of dollars in a few more nuclear energy plants, I don’t expect this will go very far toward solving the problem - but nor will it dramatically increase the scale of the environmental risk our children face.

Series Calendar

All of our contributors fall somewhere on this spectrum. We have purposefully sought out scholars with various viewpoints, and attempted to feature a diverse set of contributors. Below is the schedule for our first five posts, which have already been written. We are, however, leaving the series open-ended - that is, we hope the posts will spark conversations and debates, and should any reader feel inclined to contribute a post in response to the series, we are open to adding more.

January 30: Nancy Langston, “Closing Nuclear Plants Will Increase Climate Risks.”
February 13: Kate Brown, “Next generation nuclear?”
February 27: Andrew Watson, “Only Dramatic Reductions in Energy Use Will Save the World From Climate Catastrophe: A Prophecy.”
March 13: Robynne Mellor, "The Cold War Constraints of the Nuclear Energy Option."
March 27: Toshihiro Higuchi, "The Nuclear Renaissance in a World of Nuclear Apartheid."

Reconstructing Africa's Climate: Solving the Riddle of Rainfall

11/12/2018

 
Prof. David J. Nash, University of Brighton, UK, and University of the Witwatersrand, South Africa

To grasp the significance of global warming, and to confirm its connection to human activity, you have to know how climate has changed in the past. Scholars of past climate change know that understanding how climate has varied over historical timescales requires access to robust long-term datasets. This is not a problem for regions such as Europe and North America, which have a centuries-long tradition of recording meteorological data using weather instruments (thermometers, for example). However, for large areas of the world the ‘instrumental period’ begins, at best, in the late 19th or early 20th century. This includes Africa, where, with the exception of Algeria and South Africa, instrumental data for periods earlier than 1850 are sparse. To overcome such data scarcity, other approaches are used to reconstruct past climates, most notably through analyses of accounts of weather events and their impacts in historical documents.

Compared to the wealth of documentary evidence available for areas such as Europe and China, there are relatively few collections of written materials that allow us to explore the historical climatology of Africa. Documents in Dutch exist from the area around Cape Town that date back to the earliest European settlers in 1652, and Arabic- and Portuguese-language documents from northern and southern Africa, respectively, are likely to include climate perspectives from even further back in time. However, the bulk of written evidence for Africa stems from the late 18th century onwards, with a proliferation of materials for the 19th century following the expansion of European colonial activity.

These documents are increasingly used by historical climatologists to reconstruct sequences of rainfall variability for the African continent. This focus on rainfall isn’t surprising, given that rainfall was – and is – critical for human survival. As a result, people tended to write about its presence or absence in diaries, letters, and reports. In turn, these rainfall reconstructions are now used by historians as a backdrop when exploring climate-society relationships for specific time periods. It is therefore critical that we understand any issues with rainfall reconstructions in case they mislead or misinform.

This article will take you under the hood of the practice of reconstructing past climate change. Its aim is to: (a) provide an overview of historical climatology research in Africa at continental to regional scales; and (b) point out how distinct approaches to rainfall reconstruction in different studies can potentially produce very different rainfall chronologies, even for the same geographical area (which of course alters the kinds of environmental histories that can be written about Africa). The article concludes with some personal reflections on how we might move towards a common approach to rainfall reconstruction for the African continent.
 
Different approaches to rainfall reconstruction in Africa

Most historical rainfall reconstructions for Africa use evidence from one or more source types (Figure 1). A small number of studies are based exclusively upon early instrumental meteorological data. Of these, some (the continent-wide analysis by Nicholson et al. in 2018, for example) combine rain gauge data published in 19th-century newspapers and reports with more systematically collected precipitation data from the 19th to 21st centuries, to produce quantitative or semi-quantitative time series. Others, such as Hannaford et al. (2015), for southeast Africa, use data digitized from ship logbooks to generate quantitative regional rainfall chronologies.
Fig. 1. Combinations of different types of narrative evidence and early instrumental data (circles) used in selected historical rainfall reconstructions for Africa (grey boxes).

Most reconstructions, however, draw on European traditions by using narrative accounts of weather and related phenomena contained within documentary sources (such as personal letters, diaries/journals, reports, newspapers, monographs and travelogues) to develop semi-quantitative relative rainfall chronologies. Some of the most widely available materials are those written by early explorers, missionaries, and figures of colonial authority. The use of such evidence permits the reconstruction of rainfall for periods well before the advent of meteorological data collection.
​
The greatest number of regional documentary-based reconstructions is available for southern Africa, which forms the focus of this article. These draw on documentary evidence from a combination of published and unpublished sources, often using available instrumental data for verification and calibration, and span much of the 19th century. Where information density permits, it has been possible to reconstruct rainfall variability down to seasonal scales (see, for example, a study by Nash et al. in 2016). There are, in addition, continent-wide series that integrate narrative information from mainly published sources with available rainfall data (Nicholson et al., 2012, for 90 homogenous rainfall regions across mainland Africa).

An important point to note is that the various reconstructions adopt slightly different methodologies for analyzing documentary evidence. For example, all of the regional studies in southern Africa noted above use a five-point scale to classify annual rainfall (from –2 to +2; extremely dry to extremely wet). Scholars decide how to classify a specific rainy season in a region through qualitative analysis of the collective documentary evidence for that season. In other words, they take into account all quotations describing weather and related conditions. This contrasts with the approach used by Nicholson and colleagues in a 2012 continent-wide rainfall series. In that reconstruction, scholars attributed a numerical score on a seven-point scale (–3 to 3) to each individual quotation according to how wet or dry conditions appear to have been. They then summed and averaged the scores for each item of evidence for a specific region and year. As we will see, these distinct analytical approaches, which may draw on different documentary evidence, may introduce significant discrepancies between rainfall series.
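To make the contrast between the two approaches concrete, here is a minimal sketch in Python. The quotations, scores, and function names are all invented for illustration; neither research group has published code of this form.

```python
# Hypothetical quotations for one rainy season, each hand-scored on the
# seven-point (-3 to +3) scale used in the continent-wide approach.
quotations = [
    {"text": "the river has not flowed for months", "score": -3},
    {"text": "gardens parched; cattle dying",       "score": -2},
    {"text": "light showers fell in November",      "score":  0},
]

def classify_season_holistically(quotes):
    """Regional five-point approach (-2 to +2): a scholar weighs the whole
    body of evidence for the season and assigns a single class. Represented
    here by a placeholder judgement rather than a computation."""
    return -2  # "extremely dry", in the judgement of our imaginary scholar

def classify_season_summatively(quotes):
    """Continent-wide approach: score each quotation individually, then
    sum and average the scores for the region and year."""
    scores = [q["score"] for q in quotes]
    return sum(scores) / len(scores)

print(classify_season_holistically(quotations))  # -2
print(classify_season_summatively(quotations))   # -1.67 (approximately)
```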
 
Comparisons between rainfall series

A compilation of all the available annually-resolved rainfall series for mainland southern Africa is shown in Figure 2. This includes seven series (g-m) based exclusively on documentary evidence, four regional series (c-f) from Nicholson et al. (2012) based on combined documentary evidence and rain gauge data, the 19th-century portion of the ships’ logbook reconstruction series (b) by Hannaford et al. (2015), and, for comparison, the 19th-century section of a width-based tree ring rainfall reconstruction (a) for western Zimbabwe by Therrell et al. (2006). With the exception of the Cape Winter Rains series, all are for areas of southern Africa that receive rainfall predominantly during the summer months.
Fig. 2. Annually-resolved rainfall reconstructions for southern Africa, spanning the 19th century. (a) Tree-ring width series by Therrell et al. (2006); (b) Ships’ logbook-based reconstructions by Hannaford et al. (2015); (c-f). Combined documentary and rain-gauge reconstructions by Nicholson et al. (2012); (g-m) Documentary-based reconstructions by (g) Nash et al. (2018), (h) Grab and Zumthurm (2018), (i) Kelso and Vogel (2007), (j) Nash and Endfield (2002, 2008), (k) Nash and Grab (2010), (l) Nash et al. (2016), (m) Vogel (1989).

This compilation shows that, in the 19th century, rainfall varied from place to place across southern Africa. However, we can identify a number of droughts that affected large areas of the subcontinent. Droughts, for example, stretched across southern Africa in the mid-1820s, mid-1830s, around 1850, early-mid-1860s, late-1870s, early-mid-1880s and mid-late-1890s. We can also pinpoint a smaller number of coherent wetter years: in, for example, the rainy seasons of 1863-1864 and 1890-1891. Analyses that use many different climate "proxies" - that is, sources that register but do not directly measure past climate change - indicate that the early-mid 1860s drought was the most severe of the 19th century, and that of the mid-late-1890s the most protracted (see, for example, studies by Neukom et al., 2014, and Nash, 2017).
​
The inset map in Figure 2 reveals that a number of rainfall series overlap in their geographical coverage, which allows a direct comparison of results. In some cases, the overlap is between series created using very different methodologies. For the most part, there is good agreement between these overlapping series, but there are some significant differences. The rest of this article will focus on two of these periods of difference: the 1810s in southeast Africa, and the 1890s in Malawi.
 
How dry was the first decade of the 19th century in southeast Africa?

Four rainfall series are available for southeast Africa for the first decade of the 19th century (Figure 3) – documentary series for South Central Africa and the Kalahari (by Nicholson et al., 2012), a tree-ring series for Zimbabwe (Therrell et al., 2006), and a ships’ log series for KwaZulu-Natal (Hannaford et al., 2015). Collectively, these series suggest that there was at least one major drought that potentially affected much of the region.

This was a very important time in the history of southeast Africa. The multi-year drought is remembered vividly in Zulu oral traditions as the ‘mahlatule’ famine (translated as the time we were obliged to eat grass). Scholars have seen it as a trigger for political revolution and reorganization, one that ultimately led to the dominance of the Zulu polity. 
Fig. 3. Comparison of three annually-resolved rainfall reconstructions for southeast Africa for the first half of the 19th century, including the tree ring series for Zimbabwe by Therrell et al. (2006), the combined documentary and rain-gauge reconstructions for South Central Africa and the Kalahari by Nicholson et al. (2012), and the ships’ logbook reconstructions for southeast South Africa by Hannaford et al. (2015). The inset map shows the location of each series.

Yet there are some discrepancies between the overlapping records, which have important implications for our understanding of relationships between climate change and society. For example, while the documentary-based South Central Africa series in Figure 3 suggests protracted drought from 1800 to 1811, the overlapping tree ring series for Zimbabwe indicates periods of average or above-average rainfall alternating with drought. A similar contrast appears between the documentary-based Kalahari series (which encompasses the southern Kalahari but extends to the east coast of South Africa) and the overlapping ships’ logbook-based reconstruction for Royal National Park, KwaZulu-Natal.
​
Since these series are based on different evidence, it is impossible to tell which is more likely to be ‘right’. However, the rainfall series based on documentary evidence are clearly less sensitive to interannual rainfall variability than those based on ships’ log data or tree rings, at least for the early 19th century. This is surprising, as a major strength of documentary evidence is normally the way that it captures extreme events.

The reasons for these discrepancies are unclear, but are likely to be methodological. The Africa-wide rainfall series by Nicholson and colleagues, from which the South Central Africa and Kalahari series in Figure 3 are derived, is a model of research transparency – it identifies the evidence base for every year of the reconstruction, with all documentary and other data made available via the NOAA National Climatic Data Center. Inspection of this dataset indicates that the reconstructions for the early 1800s in southern Africa are based on a limited number of published monographs and travelogues, written mainly by explorers. While these are likely to include eyewitness testimonies, there is potential for bias towards drier conditions. The majority of authors were western European by birth and, in some cases, their writings reflected their first travels in the subcontinent. It wouldn’t be at all surprising if they found southern Africa significantly drier than home.
 
How dry was the last decade of the 19th century in Malawi?

The collective evidence for rainfall variability around present-day Malawi during the mid-late 19th century is shown in Figure 4. Here, two rainfall reconstructions overlap: the first a reconstruction for three regions of the country, based primarily on unpublished documentary evidence, by Nash et al. (2018); the second the South Central Africa series and adjacent rainfall zones of Nicholson et al. (2012).
Fig. 4. Comparison of two annually-resolved rainfall reconstructions for southeast Africa for the second half of the 19th century, including a documentary-based reconstruction for three regions of Malawi (Nash et al., 2018), and the combined documentary and rain-gauge reconstruction for South Central Africa by Nicholson et al. (2012). The inset map shows the location of each series.

Extreme events, such as the droughts of the early-1860s, mid-late-1870s, and mid-late-1880s, and a wetter period centred around 1890-91, are visible in both reconstructions. However, there are discrepancies in other decades, most notably during the 1890s, when the Nicholson series indicates mainly normal to dry conditions, and the Nash series a run of very wet years.
​
Delving deeper into the documentary evidence underpinning the Nicholson series suggests that the discrepancies may again be methodological, and strongly influenced by source materials. As with the other regional reconstructions for southern Africa, the Nash study bases annual classifications on average conditions across a large body of mainly unpublished primary documentary materials. Nicholson, by contrast, uses smaller numbers of mainly published documentary materials, combined with rain gauge data. An over-emphasis on references to dry conditions in these documents, combined with an absence of gauge data for specific regions and years, could therefore skew the results.
 
The way forward?

There are two main take-home messages from this article. First, on the basis of a comparison of annually-resolved southern African rainfall series, documentary data appear less sensitive to precipitation variability than other types of proxy evidence, even for some extreme events. Discrepancies are most apparent for periods of the early 19th century, where documentary evidence is relatively sparse.

Second, different approaches to reconstruction may produce different results, especially where documentary evidence is combined with gauge data. The summative approach used by Nicholson and colleagues, for example, where individual quotations are classified, summed and averaged, may be much more sensitive to bias from individual sources when data are sparse.
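A toy example, with invented scores, shows how the summative approach can tip an evidence-poor year toward the preoccupations of a single observer:

```python
# Two hypothetical years. In the sparse year, both quotations come from one
# drought-preoccupied traveller; in the dense year, many observers report
# mixed conditions.
sparse_year = [-3, -3]
dense_year = [-3, -3, 0, 1, 0, -1, 1, 0]

def summative_index(scores):
    """Sum and average per-quotation scores, as in the summative approach."""
    return sum(scores) / len(scores)

print(summative_index(sparse_year))  # -3.0: the year looks extremely dry
print(summative_index(dense_year))   # -0.625: closer to normal
```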

Having identified these potential issues, one way forward might be to run some experimental studies using different approaches on the same collections of documentary evidence to assess the impact of methodological variability on rainfall reconstructions. This would be no small task, as it would mean re-analyzing some large datasets. However, it would confirm or dismiss the suggestions made here about the relative effectiveness of different methodologies.
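In outline, such an experiment might look something like the sketch below, in which two alternative classification rules, both hypothetical stand-ins for the published methodologies, are run over the same invented corpus and their chronologies compared year by year:

```python
from statistics import mean

# Hypothetical corpus: year -> quotation scores on the -3..+3 scale.
corpus = {
    1840: [-3, -2, -3],
    1841: [0, 1, 0, -1],
    1842: [2, 3, 2, 1],
    1843: [-1, 0, -2],
}

def method_summative(scores):
    """Average all quotation scores for the year."""
    return mean(scores)

def method_extremes_weighted(scores):
    """An alternative rule that privileges extreme reports."""
    extremes = [s for s in scores if abs(s) >= 2]
    return mean(extremes) if extremes else mean(scores)

chronology_a = {y: method_summative(s) for y, s in corpus.items()}
chronology_b = {y: method_extremes_weighted(s) for y, s in corpus.items()}

for year in corpus:
    print(year, round(chronology_a[year], 2), round(chronology_b[year], 2))
```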

These experimental studies would help us to identify the "best practice" for reconstructing African rainfall. They would allow us to improve the robustness of the baseline data available for understanding historical rainfall variability in the continent likely to be most severely impacted by future climate change. They would also permit us to refine our understanding of past relationships between climatic fluctuations and the history of African communities. These relationships may offer some of our best perspectives on the future of African societies in a warming planet. 

Works Cited: 

Brázdil, R. et al. 2005. "Historical climatology in Europe – the state of the art." Climatic Change 70: 363-430.
​
Grab, S.W. and Zumthurm, T. 2018. "The land and its climate knows no transition, no middle ground, everywhere too much or too little: a documentary-based climate chronology for central Namibia, 1845–1900." International Journal of Climatology 38 (Suppl. 1): e643-e659.

Hannaford, M.J. and Nash, D.J. 2016. "Climate, history, society over the last millennium in southeast Africa." Wiley Interdisciplinary Reviews-Climate Change 7: 370-392.

Hannaford, M.J. et al. 2015. "Early-nineteenth-century southern African precipitation reconstructions from ships' logbooks." The Holocene 25: 379-390.

Kelso, C. and Vogel, C.H. 2007. "The climate of Namaqualand in the nineteenth century." Climatic Change 83: 357-380.

Nash, D.J. 2017. "Changes in precipitation over southern Africa during recent centuries." Oxford Research Encyclopedia of Climate Science. doi: 10.1093/acrefore/9780190228620.013.539.

Nash, D.J. and Endfield, G.H. 2002. "A 19th century climate chronology for the Kalahari region of central southern Africa derived from missionary correspondence." International Journal of Climatology 22: 821-841.

Nash, D.J. and Endfield, G.H. 2008. "'Splendid rains have fallen': links between El Nino and rainfall variability in the Kalahari, 1840-1900." Climatic Change 86: 257-290.

Nash, D.J. and Grab, S.W. 2010. "'A sky of brass and burning winds': documentary evidence of rainfall variability in the Kingdom of Lesotho, Southern Africa, 1824-1900." Climatic Change 101: 617-653.

Nash, D.J. et al. 2018. "Rainfall variability over Malawi during the late 19th century." International Journal of Climatology 38 (Suppl. 1): e629-e642.

Nash, D.J. et al. 2016. "Seasonal rainfall variability in southeast Africa during the nineteenth century reconstructed from documentary sources." Climatic Change 134: 605-619.

Neukom, R. et al. 2014. "Multi-proxy summer and winter precipitation reconstruction for southern Africa over the last 200 years." Climate Dynamics 42: 2713-2726.

Nicholson, S.E. et al. 2012. "Spatial reconstruction of semi-quantitative precipitation fields over Africa during the nineteenth century from documentary evidence and gauge data." Quaternary Research 78: 13-23.

Nicholson, S.E. et al. 2018. "Rainfall over the African continent from the 19th through the 21st century." Global and Planetary Change 165: 114-127.

Pfister, C. 2018. "Evidence from the archives of societies: Documentary evidence - overview". In: White, S., Pfister, C., Mauelshagen, F. (eds.) The Palgrave Handbook of Climate History. Palgrave Macmillan, London, pp. 37-47.

Therrell, M.D. et al. 2006. "Tree-ring reconstructed rainfall variability in Zimbabwe." Climate Dynamics 26: 677-685.

Vogel, C.H. 1989. "A documentary-derived climatic chronology for South Africa, 1820–1900." Climatic Change 14: 291-307.

Two Decades from Disaster? The IPCC's "Global Warming of 1.5° C."

10/11/2018

 
Dr. Dagomar Degroot, Georgetown University
Seasonal air temperatures across the Arctic are warming fast. NSIDC

International climate change agreements have long aimed at limiting anthropogenic global warming to 2° C, relative to “pre-industrial” averages. Yet in early 2015, more than 70 scientists contributed to a report that warned about then-poorly understood dangers of warming short of 2° C. Several months later, Parties to the United Nations Framework Convention on Climate Change (UNFCCC) met in Paris and reached what seemed to be a promising agreement that aimed at keeping global warming to “well below” 2° C.
 
The Paris Agreement invited the Intergovernmental Panel on Climate Change (IPCC) to prepare a special Assessment Report on the consequences of, and means of avoiding, warming on the order of 1.5° C. The IPCC accepted the invitation in 2016. This week, it published its report, together with a short summary for policymakers. In it, 91 scholars from 40 countries summarize the results of more than 6,000 peer-reviewed articles on climate change.
 
The published report still captured headlines and provoked justified alarm. Here’s what struck me as I read through it this week.   

An IPCC graphic shows that if greenhouse gas emissions continue unabated, Earth's temperature will likely cross the 1.5° C threshold by 2040. Just over twenty years from now.

The Importance of 0.5° C

The new assessment report is really all about half a degree Celsius. Since the world has now warmed by roughly one degree Celsius relative to an 1850-1900 baseline, only half a degree Celsius separates us from 1.5° C of warming, which in turn is of course only half a degree removed from the infamous 2° C threshold previously emphasized by the IPCC and the UNFCCC.
 
So, what difference does 0.5° C make?
 
It depends on the perspective you take. The new report shocked many by predicting that some of the profound environmental transformations long anticipated with 2° C warming, relative to the nineteenth-century baseline, would be well underway with just 1.5° C warming. By 2100, for example, sea levels could rise by up to 0.77 meters if temperatures increase by no more than 1.5° C, which may only be around 0.1 meters lower than the level they would reach with 2° C warming.
 
Worse, it now seems that marine ice cliff instability in Antarctica and the irreversible collapse of the Greenland ice sheet – two frightening scenarios that could each raise global sea levels by many more meters and set off additional tipping points in the climate system (see below) – could be triggered by just 1.5° C warming. Perhaps 90% of the world’s coral reefs could be lost with 1.5° C warming, compared to every reef in a 2° C world. In one sense, there seems to be a huge gulf between where we are now and where we’d be with another half-degree of warming, and comparatively little difference between a 1.5° C and a 2° C world.
 
Yet in other, critically important ways, there could be an equally big gap between the 1.5° C and 2° C scenarios. Of 105,000 species considered in the report, three times as many insects, and twice as many plants and vertebrates, would endure profound climatically-determined changes in geographic range in a 2° C world, relative to a 1.5° C world. A threefold increase in the terrestrial land area projected to change from one ecosystem to another also separates the 1.5° C and 2° C worlds.
 
The importance of just half a degree Celsius is particularly clear at local and regional scales. Warming is greater on land than at sea, and much greater across the Arctic. In cities, urban heat islands double or even treble global warming trends. In many regions, extreme weather events – droughts, torrential rains, heat waves and even cold snaps – are now much more likely to occur than they once were. Superficially modest trends on a global scale can mask tremendous shifts in local or regional weather. 
 
The critical importance of warming on the order of just half a degree Celsius for ecosystems around the world invites us to revisit some of the more controversial claims made by climate historians about the environmental impacts of past climate change. There is, of course, a range of natural climatic variability that most ecosystems can accommodate, and we are on the verge of leaving that range across much of the Earth, if we have not left it already. Yet it now seems painfully clear that even small fluctuations in Earth’s average annual temperature can have truly profound ramifications for the regional or local ranges and life cycles of plants, animals, and microbes.  
 
More on this later.

Even today, older ice is all but gone in the Arctic. NSIDC

Risk, Uncertainty, and Scale

Back to the difference between 1°, 1.5°, and 2° C warming. Perhaps the most alarming part of the new assessment report – one seemingly lost on many environmental journalists – is what it says about the value of the whole project of establishing numbers that become touchstones in climate change discourse.
 
Beginning well before 2015, climate scholars pointed out that attempts to emphasize the danger of 2° C warming risked creating the false impression, among policymakers and the public, that the worst impacts of climate change would suddenly unfold only after Earth passed that threshold. I’d wager that for most people, the 2° C limit still seems like a distant threat, one we will eventually face only if we don’t gradually reduce greenhouse gas emissions. Yet the new report confirms that the 2° C threshold was all along an arbitrary standard, one that did not really convey the nature of the threat we face.
 
It is now plain that we will not only trigger vast and irreversible changes to Earth’s climate system with 1.5° C warming, but worse: we have already triggered some of them, and we may unlock more in coming years. Nature, unfortunately, cares little for our nicely rounded numbers.
 
There’s more. Even before its release, some climate scientists and activists criticized the new report for understating the extent of present-day warming, and more importantly: ignoring the so-called “fat tail” threat of runaway warming triggered by positive feedbacks, or “tipping points,” buried within the climate system. Of these feedbacks, the best known is probably the ice-albedo feedback, in which modest warming melts bright sea ice that normally bounces sunlight back into space, replacing it with dark water that absorbs solar radiation, contributing to more warming, more melting, and so on. Another well-known feedback involves methane that now lies buried in Arctic permafrost and frozen seabed sediment. Warming is already thawing these deposits and releasing tons of methane into the atmosphere, where it traps far more heat than carbon dioxide. Again: warming will lead to more thawing, more warming, and so on.
 
These feedbacks have historically converted relatively minor sources of warming or cooling into profound climatic trends. Yet scientists don’t know precisely how all such feedbacks work, or when exactly they are triggered. If we’re lucky, we won’t trigger most of them at all, not even if we reach that 2° C threshold. Yet it’s also possible that we have already set some of them in motion, in ways that will irreversibly lead us to runaway warming. In that case, our children – or our children’s children – will have to survive a “hothouse” world, one fundamentally different from our own. 
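The basic arithmetic of such feedbacks is easy to sketch. In the toy model below, each increment of warming induces some fraction f of further warming; for f below 1 the process converges, while values near or above 1 produce the runaway behaviour described above. The numbers are purely illustrative, not measured climate parameters:

```python
def amplified_warming(initial, f, steps=50):
    """Total warming when each increment of warming induces f times as much
    additional warming. Converges to initial / (1 - f) for f < 1; runs away
    for f >= 1."""
    total, increment = 0.0, initial
    for _ in range(steps):
        total += increment
        increment *= f
    return total

print(amplified_warming(1.0, 0.5))   # ~2.0: the feedback doubles the warming
print(amplified_warming(1.0, 0.99))  # ~39.5 after 50 steps, still climbing toward 100
```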
 
This brings us back to the issue of risk and probability, which the IPCC’s scientists stress in every assessment report. The IPCC expresses uncertainty using the terms “confidence” and “likelihood,” and it is very important to decode these terms in order to understand the new assessment report.
 
In it, the terms “very low,” “low,” “medium,” “high,” and “very high” confidence all refer to the confidence the report authors have in key findings. Their level of confidence reflects both the quality, quantity, and type of evidence used to support those findings, and the extent to which different lines of evidence agree with the findings. In the report, the terms “exceptionally unlikely,” “extremely unlikely,” “very unlikely,” “unlikely,” “about as likely as not,” “likely,” “very likely,” and “extremely likely” all refer to the statistical probability of outcomes actually happening, based in part on the findings.
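For readers who want the decoder ring, the approximate probability ranges behind the likelihood terms can be written out explicitly. These are drawn from the IPCC’s published guidance on calibrated uncertainty language; consult the report itself for the authoritative definitions:

```python
# Approximate percent-probability ranges behind the IPCC's likelihood terms.
LIKELIHOOD = {
    "exceptionally unlikely": (0, 1),
    "extremely unlikely":     (0, 5),
    "very unlikely":          (0, 10),
    "unlikely":               (0, 33),
    "about as likely as not": (33, 66),
    "likely":                 (66, 100),
    "very likely":            (90, 100),
    "extremely likely":       (95, 100),
}

def decode(term):
    lo, hi = LIKELIHOOD[term.lower()]
    return f'"{term}" signals a {lo}-{hi}% assessed probability'

print(decode("likely"))  # "likely" signals a 66-100% assessed probability
```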
 
Unfortunately, these definitions are buried on page 40 of the first chapter of the new assessment report. Yet they make it impossible to conclude, as so many journalists have written, that the new report predicts what will happen in the future. Using such language plays into the hands of climate change deniers, who rightly point out that nobody can predict the future with certainty. Climate scientists, of course, know that better than most, which is why they always attempt to qualify and quantify their predictions.
 
Climate historians also deal with probability. As I point out in a forthcoming article, we can never really reconstruct the local or regional manifestations of climate change with perfect certainty, nor can we be completely sure of local connections between climatic trends and human or animal behavior. Some of the most interesting relationships we can discern are among the least documented, especially once we get away from the usual European or Chinese focus of most climate historians. At its best, climate history therefore deals explicitly with risk and uncertainty by qualifying its major findings.
 
Other historians tend to rebel against such qualifiers. More than one peer reviewer, for example, has told me to be more authoritative, to forcefully express that climate change directly caused humans to do something in the past. To these historians, qualifiers communicate the kind of weak uncertainty that suggests an argument is not well grounded in solid scholarship.

These criticisms reflect the practice of traditional historical scholarship, where documents seem to communicate exactly what happened to whom, and when. Yet when we work with different kinds of sources, including those drawn from natural archives, those relationships cannot always be clearly or simply established. The more, and more diverse, information you have, the more uncertain the past can become. Still, abundant information from many natural and human sources can also provoke questions, and suggest relationships, that traditional historians have not imagined. The best documented and seemingly most certain version of the past, in other words, isn’t always the most accurate.

Global temperature anomalies in August, relative to the 1951-1980 mean. NASA GISS

A Determined Future (and Past?)

​The new IPCC report also abounds with exactly the kind of sweeping statements that historians – myself included – have attacked in the past. In a brilliant 2011 article I often ask my students to read, Mike Hulme criticizes how the predictive natural sciences have promoted a new variant of the climate determinism many Europeans once used to explain the expansion of western empires. Because climate change can be predicted more easily than social change, Hulme argues, climate science has promoted a kind of climate reductionism that “downgrades human agency and constrains the human imagination.” Surely, the human future will not be crudely determined by climatic trends.

And yet, the IPCC concludes that climate change will probably exacerbate poverty, provoke catastrophic migration, impede or annihilate economic growth, amplify the risk of disease, and, in short, sharply undermine human wellbeing, especially for “disadvantaged and vulnerable populations, some indigenous peoples, and local communities dependent on agricultural or coastal livelihoods.” While there will be ways for communities and societies to adapt, the IPCC’s summary for policymakers finds that “there are limits to adaptation and adaptive capacity for some human and natural systems” in even a 1.5° C world. Warming, in short, will likely provoke human suffering.
 
Should we be skeptical of such claims? As usual: it’s complicated. While historians like to emphasize complexity and contingency in narratives of the past, scientists search for patterns in complex systems that permit predictive models. Many historians therefore feel uncomfortable when asked to anticipate the future in light of the past, and that discomfort is partly to blame for the unfortunate absence of historical scholarship in the new assessment report. Of course, scientists have no such problem. Their models can indeed be deterministic – by necessity, they only consider so many variables – yet they still provide some of our best perspectives on what the future might have in store.
 
With that said, it's important to again stress that many of these models should be taken as estimates of what might happen in our future. They cannot tell us what will actually unfold. History does teach us that the human story involves both steady trends and sudden leaps: technological breakthroughs, revolutions, and the like. It is simply wrong to conclude that even 2° C warming will make the world a more impoverished, more violent place, as some assume. Older predictions of “peak oil,” for example, or a “population bomb” have not (yet) come to pass, partly because individuals and institutions responded creatively, on many different scales, to menacing trends.
 
The new assessment report actually describes the kind of action that would help state and non-state actors confront the challenge of climate change (spoiler alert: it’s not geo-engineering). People are not passive, static victims in the IPCC’s assessment, as they tend to be in reductionist literature. Yet even if the wise policies recommended by the IPCC are ignored, we cannot predict or quantify exactly what the future has in store for humanity.
 
But now, back to those controversial claims made by climate historians. Critics have accused some of the more ambitious books in climate history – authored by the likes of John Brooke and Geoffrey Parker, for example – of crude determinism for suggesting that past climatic fluctuations on the order of just half a degree Celsius unleashed disaster for societies around the pre-modern world. Indeed, some calamities that climate historians have blamed on climate change had many alternate or additional causes, and most coincided with examples of communities and societies successfully weathering climate change.
 
Yet the scale of social disruption predicted by the IPCC in a world just a little warmer than our own does invite us to consider whether the fates of pre-industrial societies were not more closely connected to climatic trends than most historians and archaeologists have allowed. Both environments and societies, in other words, seem more vulnerable to even slight climatic fluctuations than we had imagined. 

Does hope drive action on global warming? Numbers represent the percentage of those who were hopeful (purple) and not hopeful (grey) about global warming. Yale Program on Climate Change Communication.

The Popular Response

In the wake of the new report, articles in popular media and discussions in social media have predictably focused on how its findings should be communicated. Should climate communicators try to drum up fear, or should we instead inspire hope?
 
Hope does seem to be a more useful emotion for motivating public engagement on global warming. Yet recently, many scholars have actually moved beyond this question. A 2017 article in the journal Nature, for example, concludes that climate change messages should be carefully calibrated to their audience. Neither hope nor fear will motivate everyone; in fact, a catchall message that relies on either emotion will likely provoke the opposite of the desired response in a sizable part of the population. This is hardly surprising: political operatives and advertisers have known for years that the best messages are highly targeted.
 
Some news articles have stressed what individuals can do in order to lower their personal carbon emissions. Many scientists and environmentalists have responded by arguing that only government policy can begin to address climate change on the scale we need. By stressing personal accountability, some argue, journalists shift attention from the real climate culprits: the big corporations and well-funded political interests with a stake in the fossil fuel economy.
 
As in most such debates, both sides have valid points. Clearly, we should all aim to limit our emissions while at the same time becoming politically motivated as never before. We will need to fundamentally transform our economy and our politics – quickly! – if we are to confront the challenge of climate change. How we do this, and what it will mean for us personally, is something all of us will need to sort out soon. Those of us who have chosen to remain above the political fray will need to re-evaluate that decision.
 
Some of the first news articles about the IPCC's new report included commentaries on the need for journalists to make climate change the biggest story they cover. I wrote an article to that effect during the 2016 election, and sent it to the editors of the New York Times. Predictably, there was no response. And now, even in the wake of Hurricane Michael’s rapid intensification and calamitous landfall, climate change has already moved off the front pages of many newspapers. Politicians in the United States and elsewhere have already brushed off the IPCC’s urgent warnings.
 
One wonders: will the new report really change anything? Or will the capitalist dynamics behind our media outlets and political processes derail the changes we so urgently need to make? Ultimately, it’s clear that nobody and nothing will rise to save us from our fate. Those of us who understand the science behind climate change need to do more than communicate. Now, we need to actively be part of the solution. We need to act. 

Recommended Reading

The IPCC Assessment Report and Summary for Policymakers

The Impacts of Climate Change at 1.5C, 2C and Beyond

Analysis: Why the IPCC 1.5C Report Expanded the Carbon Budget

How to Write About a Vanishing World

Is There a Better Way To Do Climate History? Testing a Quantitative Approach.

8/31/2018

 
Dr. Dagomar Degroot, Georgetown University
Can correlating climatic and social variables tell us more about the human costs of climate change?

​Until recently, it was notoriously difficult to connect today’s extreme weather with the gradual trends of climate change. Scientists shied away from saying, for example, that catastrophic droughts or severe hurricanes reflected the influence of anthropogenic global warming. Yet today, scientists use big data from satellites and weather stations to inform supercomputer simulations that reveal the extent to which warming trends have raised the odds for previously unusual weather. Scientists now report, for instance, that the drought that crippled Syria between 2006 and 2009 was between two and three times more likely in today’s climate than it would have been a century earlier. They feel comfortable concluding that the rains of Hurricane Harvey were perhaps 20 times more likely now than they once were. Armed with these statistics, many scholars and journalists now conclude that events like the Syrian Civil War, which unfolded in the wake of that devastating drought, can be convincingly connected to climate change.
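The arithmetic behind statements like “two to three times more likely” is worth spelling out. Attribution studies compare the probability of an extreme event in today’s climate with its probability in a counterfactual climate without anthropogenic warming. A minimal sketch, with illustrative numbers not taken from any study:

```python
p0 = 0.01   # assumed chance per year of a drought this severe, a century ago
p1 = 0.025  # assumed chance per year in today's climate

probability_ratio = p1 / p0          # 2.5: "two to three times more likely"
attributable_fraction = 1 - p0 / p1  # 0.6: share of the risk attributable to warming

print(probability_ratio, attributable_fraction)
```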
 
Yet how can we link past climate change – change that happened before the advent of big weather data – to human affairs? Many historians and archaeologists favor qualitative methods. They identify weather events in surviving documents, or in paleoclimatic proxy data (such as tree rings, ice cores, or lakebed sediments) that register the influence of temperature or precipitation. Next, they carefully study texts or ruins to determine how these weather events influenced human activities – such as farming, hunting, or sailing – that clearly depended on favorable weather. By looking at enough of these relationships, over a long enough timeframe, they ultimately reach conclusions about the influence of weather trends – that is, climate change – on the human past.
 
Environmental historians might be most familiar with these qualitative methods. They inform a raft of new books and articles in climate history, on diverse topics that range from the fall of Rome to the colonization of Australia; from the origins of apocalyptic Norse mythology to the travails of Arctic whalers.
A selection of recent books on climate history, all written by historians.

​But these qualitative methods are much less influential beyond the historical profession. Today, there is a large and rapidly growing “quantitative” school of climate history that instead relies on statistical means to discern the impact of climate change on human history. Papers in this school are cited more frequently in the latest IPCC assessment report, for example, than publications written by historians who prefer more qualitative means of doing history. 

Natural scientists, economists, and historical geographers in the quantitative school of climate history quantify diverse social variables in particular regions, then graph their highs and lows over decades, centuries, even millennia. Next, they develop or make use of temperature or precipitation reconstructions for those same regions across identical timescales. Finally, they use statistical methods to find covariance between their graphs of social and climatic trends.
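At its simplest, the statistical move looks something like the sketch below, which computes a Pearson correlation between an invented temperature reconstruction and an invented count of conflicts. Real studies work with far longer series, detrend the data, and test significance more carefully:

```python
import numpy as np
from scipy import stats

# Invented decadal series, for illustration only.
temperature_anomaly = np.array([-0.4, -0.6, -0.3, -0.7, -0.2, -0.5, -0.8, -0.1])
conflicts_per_decade = np.array([5, 8, 4, 9, 3, 6, 11, 2])

# Pearson correlation and its p-value.
r, p_value = stats.pearsonr(temperature_anomaly, conflicts_per_decade)
print(f"r = {r:.2f}, p = {p_value:.4f}")  # strongly negative: cooler decades, more conflict
```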
 
Most published work in this vein finds statistically significant correlations between these trends. In study after study, Chinese historical geographers have found striking correlations between climatic cooling and the wars, rebellions, and dynastic transitions of Imperial China. European scholars have found equally impressive correlations between cool, wet conditions and conflict in northwestern Europe over the past five centuries. In southeastern and central Europe, by contrast, correlations exist between conflict and warm, dry weather. Another, even more ambitious study finds strong correlation between European wars and climate changes over 2,600 years.
 
Quantitative climate historians often focus on China and Europe, and not only because most of them live in these regions. People across much of China and Europe have long relied on rain-fed agriculture, which should have been especially sensitive to fluctuations in temperature or precipitation. They also kept unusually detailed, and unusually continuous, records of their activities. Yet a growing group of quantitative researchers now concentrates on the much more recent history of sub-Saharan Africa, where millions continue to rely on rain-fed agriculture. Many studies correlate warming, drying trends across Africa to twentieth-century civil wars, although some emphasize that these correlations only existed under the right socioeconomic conditions.
 
The great appeal of quantitative approaches to climate history is that they seem to replace the messiness of the historian’s craft, and the subjectivity of the qualitative findings, with scientific objectivity and certainty. Quantitative historians have used statistical correlation not only to confidently explain the past, but also to predict the future. Already in 2007, historical geographers concluded, for example, that Chinese “war-peace, population, and price cycles in recent centuries have been driven mainly by long-term climate change.” Two years later, another group controversially concluded that the frequency of civil wars in sub-Saharan Africa would likely increase as the continent warmed, since a regional correlation between temperature and violence existed in the past.
An IPCC projection of average global temperatures in 2081-2100, relative to late twentieth-century averages, under a moderate emissions scenario. Will a warmer Earth be a more violent place?

​But have quantitative scholars really found a better way of doing climate history, one that at last permits predictions of the kind that always remain frustratingly out of reach for historians and archaeologists? Well, not quite. On close examination, the soaring claims made by many quantitative scholars in fact rest on assumptions that remain frustratingly subjective . . . and at times, simply misguided.
 
Most importantly, the correlations identified in quantitative work are meaningless unless their trends reflect the right data. Some studies of this kind use weather observations in surviving documents to graph centuries-long trends in temperature or precipitation. Decades ago, the great meteorologist Hubert Lamb relied on much the same method to identify a hot climatic regime that he called the “Medieval Warm Period.” The medieval centuries, Lamb concluded, were at least as warm as the late twentieth century.
 
While some deniers of anthropogenic global warming still use Lamb’s graph, scholars have changed the name of his period to the “Medieval Climate Anomaly.” It turns out that Lamb, unversed in the art of reading historical sources, simply took medieval references to weather at face value. When the legitimate weather observations examined by Lamb are read more carefully, and used alongside climate reconstructions compiled with more reliable tree ring or ice core data, they reveal a period of modest but erratic warming in the high medieval centuries. Nothing comparable, in other words, with late twentieth-century warming.
 
The lesson here is that references to weather in ancient documents do not always simply reveal the state of the atmosphere in a particular time and place. The problem is much more acute when considering very long timescales. Before the instrumental era, even seemingly reliable weather observations over decades or centuries are really the product of many observers, some of whom might use different methods to record weather. Moreover, sources that may seem especially dependable at a glance – such as many European chronicles – in fact refer to weather metaphorically, or use fabricated weather events to justify the course of human affairs.
 
Researchers should therefore strive to use weather observations in historical documents as a starting point – only a starting point! – in a long process of reconstructing a region’s climate. Where possible, documentary evidence should be used alongside climate reconstructions compiled with tree rings, ice cores, lakebed sediments, and the many other proxy sources in natural archives. The best climate reconstructions often use the most proxies. Of course, many excellent reconstructions have now been published for most parts of the world. There is often little need to develop a regional climate reconstruction from scratch.
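The mechanics of combining proxies are not mysterious. One common recipe standardizes each series and averages them into a composite (the first half of the “composite-plus-scale” method; the final scaling against instrumental data is omitted here). The arrays below are invented, and real reconstructions weight and validate their proxies far more carefully:

```python
import numpy as np

# Three hypothetical proxy series covering the same five intervals.
proxies = np.array([
    [1.2, 0.8, -0.5, -1.1, 0.3],   # e.g. a tree-ring width index
    [0.9, 1.1, -0.2, -0.9, 0.1],   # e.g. an ice-core isotope series
    [1.0, 0.5, -0.7, -1.3, 0.6],   # e.g. a documentary wetness index
])

# Standardize each proxy (zero mean, unit variance), then average across proxies.
standardized = (proxies - proxies.mean(axis=1, keepdims=True)) / proxies.std(axis=1, keepdims=True)
composite = standardized.mean(axis=0)
print(np.round(composite, 2))
```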
 
In quantitative climate history, multi-proxy climate reconstructions should also reveal climatic trends on the same spatial scale as the social variables under consideration. Even some scholars who do use so-called “multiproxy” climate reconstructions to find their correlations go on to match trends of global or hemispheric temperature or precipitation with trends of local or regional historical events. Yet before the onset of anthropogenic global warming, climatic trends rarely unfolded at the same time in every part of the globe. A general cooling trend across the northern hemisphere, for example, did not always lead to colder temperatures in China.  
 
If quantitative climate historians face problems when choosing which climate reconstructions to use – or how to make them – these pale in comparison to those that bedevil their attempts to quantify social variables. Quantitative studies of war, for example, have used makeshift and now defunct websites to determine when wars began and ended. Others have relied on historical scholarship that is well over a century old. It is as though the historical geographers, political scientists, natural scientists, and economists who typically write quantitative climate history do not recognize that the disciplines of history and archaeology are as rigorous and dynamic as their own. Naturally, correlations that rely on obsolete or untrustworthy data about the human past can tell us little about the influence of climate change on human history.
 
Jan de Vries, Philip Slavin, and I have also flagged a second big problem faced by quantitative approaches to climate history. Studies that find correlations over centuries, let alone millennia, rarely appreciate that social variables change through time. A statistically significant correlation between warming and economic growth in the high medieval centuries, for example, does not necessarily hint at the same kind of relationship between climate change and human affairs as a similar correlation several hundred years later. Over the course of those centuries, the cultural, economic, social, and political pathways by which climate change affects human life may have fundamentally changed, and the individuals who control those pathways will have obviously died. The question becomes: what are quantitative approaches to climate change really measuring?
 
That gets us into a third, and related, problem of quantifying the human past. It is one thing to quantify a particular kind of agricultural production over long timespans. Though agricultural practices can change dramatically over those timespans, even in pre-modern societies, scholars may still find correlations between agricultural yields and climatic trends that can suggest something new about the human past. Yet it is quite another matter to quantify the number or intensity of major social events, such as wars.
 
Attempts to link the number of wars per decade to decadal temperature or precipitation, for example, face the challenge of quantifying long and complex wars: precisely the kind of wars that often placed the greatest strain on the agricultural resources also affected by climate change. Scholars might consider the Thirty Years’ War, for instance, as either a single war or a series of wars, and that subjective choice would determine the correlation a study identifies between seventeenth-century climate change and European conflict. In some of these studies, the early seventeenth century may look like a time of relative peace in Germany!
 
Scientists have also used arbitrary numbers to decide when violence amounts to a war. Does violence rise to the status of war when at least 1,000 people have died, as some studies assume? Presumably the standard would be higher in very populous societies and lower in less populated ones, but this distinction is never made in quantitative studies. Graphing wars by quantity can also lead scholars to misrepresent changes in quality. Scholars might easily count the First and Second World Wars as only two wars, for example, yet of course their material and human costs dwarfed those of any previous conflict. If problems of this nature plague the superficially simple task of correlating the number of wars to temperature trends, imagine the challenges of determining similar correlations with, for example, economic development or cultural efflorescence! 
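A minimal sketch in Python shows how much these coding choices matter. The decadal temperature anomalies and war dates below are invented for illustration, and are not drawn from any of the studies cited here; the point is simply that coding one long mid-century conflict as a single war, or as four successive wars, yields very different decadal counts, and therefore very different correlations.

import numpy as np

decades = np.arange(1600, 1700, 10)
# Invented decadal temperature anomalies (degrees C), coldest at mid-century.
temps = np.array([-0.1, -0.2, -0.4, -0.5, -0.3, -0.2, -0.1, 0.0, -0.1, -0.2])

# Option A: one long conflict (1618-1648), counted once in its start decade.
wars_single = {1610: 1}
# Option B: the same conflict coded as four successive wars, one beginning in
# each decade it spans, as a scholar might plausibly code the Thirty Years' War.
wars_phases = {1610: 1, 1620: 1, 1630: 1, 1640: 1}

def count_series(wars):
    return np.array([wars.get(int(d), 0) for d in decades])

for label, wars in [("one war    ", wars_single), ("four phases", wars_phases)]:
    counts = count_series(wars)
    r = np.corrcoef(temps, counts)[0, 1]
    print(f"{label}: counts={counts.tolist()}, corr with temperature={r:+.2f}")

Under the first coding, the cold decades of mid-century look almost entirely peaceful and the correlation all but vanishes; under the second, a strong association between cold and conflict appears. Same history, opposite findings.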
Picture
A nineteenth-century imagining of the death of Gustavus Adolphus, a turning point in the Thirty Years' War. How should quantitative climate historians make sense of particularly long, bloody wars? Carl Wahlbom, "Death of King Gustav II Adolf of Sweden at the Battle of Lützen." Nationalmuseum, 1854.

​It turns out that quantitative approaches to climate history often obscure more than they reveal. Far from providing a more objective, “scientific” way of understanding the impact of climate change on the human past, they really rely on assumptions that are every bit as subjective as those made in more qualitative work. Yet unlike many qualitative climate historians, they leave those assumptions unacknowledged.
 
I am convinced that quantitative climate historians could fruitfully address at least some of these problems by interacting more with qualitative scholars, most of whom work in the humanities. Unfortunately, many historians, at least, have not heard of quantitative approaches to climate history, while most quantitative scholars have little inkling of qualitative approaches to their subject. Remarkably, I have never seen a work of qualitative climate history cited within a paper that aims to identify correlations. Part of the problem is that quantitative and qualitative scholars often work in different media. While historians prioritize books, most scientists, economists, and geographers value short, multi-authored studies.
 
Yet collaboration is surely possible, and it would undoubtedly prove productive. Quantitative scholars have recently used statistical means to identify not only how climate change might be correlated with human activities, but also how it might have partly accounted for – that is, caused – those activities. Such studies have yielded models that are really variants of models that qualitative scholars had already developed. What if they had worked with qualitative scholars from the start? Meanwhile, qualitative scholars often use statistics to support their conclusions, without always understanding what those statistics actually reveal (or what they don’t). What if qualitative scholars consulted colleagues in more quantitative disciplines while developing these statistics?
 
At the Climate History Network, we will strive to incorporate more quantitative scholars within our ranks. Perhaps we will be able to build a shared community in the coming years, one that will yield a more comprehensive kind of climate history.

Works Cited:
​
Buhaug, Halvard. “Climate not to blame for African civil wars.” Proceedings of the National Academy of Sciences 107:38 (2010): 16477-16482.

Büntgen, Ulf, et al. “2500 years of European climate variability and human susceptibility.” Science 331:6017 (2011): 578-582.

Burke, Marshall B., et al. “Climate robustly linked to African civil war.” Proceedings of the National Academy of Sciences 107:51 (2010): E185.

Burke, Marshall B., et al. “Warming increases the risk of civil war in Africa.” Proceedings of the National Academy of Sciences 106:49 (2009): 20670-20674.

Degroot, Dagomar. “Climate Change and Conflict.” In The Palgrave Handbook of Climate History, edited by Christian Pfister, Franz Mauelshagen, and Sam White. Basingstoke: Palgrave Macmillan, 2018.

Slavin, Philip. “Climate and famines: a historical reassessment.” Wiley Interdisciplinary Reviews: Climate Change 7:3 (2016).

Theisen, Ole Magnus. “Climate clashes? Weather variability, land pressure, and organized violence in Kenya, 1989-2004.” Journal of Peace Research 49:1 (2012): 81-96.

Theisen, Ole Magnus, Helge Holtermann, and Halvard Buhaug. “Climate wars? Assessing the claim that drought breeds conflict.” International Security 36:3 (2011): 79-106.

Tol, Richard, and Sebastian Wagner. “Climate change and violent conflict in Europe over the last millennium.” Climatic Change 99 (2010): 65-79.

Zhang, David. “Climate Change and War Frequency in Eastern China over the Last Millennium.” Human Ecology 35 (2007): 403-414.

Zhang, David, and Harry Lee. “Climate Change, Food Shortage and War: A Quantitative Case Study in China during 1500-1800.” Catrina 5:1 (2010): 63-71.

Zhang, David, et al. “Climatic change, wars and dynastic cycles in China over the last millennium.” Climatic Change 76 (2006): 459-477.

Zhang, David, Peter Brecke, Harry F. Lee, Yuan-Qing He, and Jane Zhang. “Global climate change, war, and population decline in recent human history.” Proceedings of the National Academy of Sciences 104:49 (2007): 19214-19219.

Zhang, David. “The causality analysis of climate change and large-scale human crisis.” Proceedings of the National Academy of Sciences 108:42 (2011): 17296-17301.

Zhang, Dian, et al. “Climate change, social unrest and dynastic transition in ancient China.” Chinese Science Bulletin 50:2 (2005): 137-144.

Zhang, Zhibin. “Periodic climate cooling enhanced natural disasters and wars in China during AD 10-1900.” Proceedings of the Royal Society B 277 (2010): 3745-3753.

Are Woodland Caribou Doomed by Climate Change?

7/26/2018

 
Dr. Nancy Langston, ​Michigan Technological University
Picture
Woodland caribou on Michipicoten Island in Lake Superior. Credit: Christian Schroeder.

In May 2018, woodland caribou (Rangifer tarandus caribou) were declared functionally extinct in the United States. The last remnant population, in the Selkirk Mountains of Idaho, had dwindled to three lone females. In the Lake Superior basin, a genetically distinct population of woodland caribou nearly met the same fate in February 2018. Populations in Quebec, Newfoundland, and Alberta are dwindling as well. Across the Canadian north, woodland caribou have disappeared from roughly half their 19th-century range. Is climate change dooming woodland caribou? Or are managers using climate change as an excuse to avoid making difficult policy decisions that could save the caribou but antagonize industry and environmental groups?
Picture
Historic and current range of woodland caribou 2012. The current distribution of boreal caribou is shown in brown. The estimated southern extent of historical woodland caribou distribution is indicated by the dashed line. Credit: Environment Canada.

Woodland caribou are part of the globally distributed species Rangifer tarandus, which includes reindeer in Eurasia, barren-ground caribou across the North American Arctic, and woodland caribou in the boreal subarctic. Members of the deer family, Cervidae, which also includes elk and moose, caribou thrive in a variety of habitats. They are a migratory species that can cover vast distances—or adapt to much shorter migrations. Barren-ground caribou and Eurasian reindeer are famous for migrations covering more than a thousand kilometers, while Lake Superior woodland caribou make shorter movements from wintering to calving ranges.
​
One of the few large megafauna species to expand rather than go extinct in the Late Pleistocene, caribou survived repeated glaciations by moving to ice-free refugia. Boreal woodland caribou found refuge in the Appalachian mountains, while Eurasian reindeer moved to what’s now Italy, France, and Spain. Each time the ice retreated in an interglacial period, caribou followed the melting ice north, expanding into new habitats across a diverse, warming landscape.
Picture
Global range of caribou and reindeer. Credit: Donna Naughton, Cephas: The Natural History of Canadian Mammals (Toronto: U. of Toronto Press). CC BY-SA 4.0.

At least 10,000 years ago, people began following caribou on their journeys north, and a striking diversity of human-caribou relationships developed across the north. In Siberia and Mongolia, Indigenous peoples fully domesticated reindeer, creating close spiritual and material relationships with them. In Sápmi (northern Finland, Sweden, and Norway), the Sami people continued to hunt rather than domesticate reindeer well into the 16th century AD (Langston, 2014). But when Europeans colonized Sápmi for mineral, forest, and agricultural resources, wild reindeer populations declined as the Sami and Europeans began hunting them past their ability to reproduce.

​The Sami developed a semi-domesticated relationship with reindeer to protect their remaining populations, shepherding them on their long migrations but never fully taming them as beasts of burden. By contrast, North American  caribou remain wild, and Indigenous peoples have not domesticated or tamed them. Yet the two species, humans and caribou, have developed extraordinarily close material and spiritual relationships across North America. For example, Gwich’in leader Sarah James said, “The Gwich’in are caribou people” who believe that “a bit of human heart is in every caribou, and that a bit of caribou is in every person.” (Gwich’in Steering Committee, 2005) According to anthropologist Piers Vitebsky, caribou have made human life across the Arctic possible as climates changed in the Late Pleistocene, allowing people to thrive in ecosystems that would otherwise have been uninhabitable (Vitebsky 2005).  

In the Lake Superior basin, a genetically distinct population of woodland caribou developed, ranging as far south as the Lower Peninsula of Michigan and as far north as Hudson Bay in Canada (see Map 1 above). Never particularly abundant in any place other than wolf-free islands—for low caribou numbers helped keep wolf populations low—woodland caribou were widespread across the northern forest. They flee from predators or find refugia in deep snow, windswept barrens, dense bogs, or rocky coastlines, where wolves falter. Predator avoidance served woodland caribou well in North America for over a million years, but now they are vulnerable to predation if their migration routes are cut off by development, or if predator populations increase with human disturbance. And many forms of human activity do increase predation. Railroads, logging roads, forest conversions, and wetland drainage have offered easy access for human and canid predators. White-tailed deer have expanded their range into caribou territory, for the edge habitat left by forestry serves them well. Deer in turn invite higher wolf populations, while also spreading a parasitic brain worm that kills caribou but not deer.

In 1850, the last woodland caribou vanished from Wisconsin. By 1912, caribou were gone from mainland Michigan. By 1928, the woodland caribou of Isle Royale in Lake Superior were extirpated (Dybas 2015). Minnesota caribou persisted for longer because vast bog and fen complexes provided refuges from hunting pressure and wolf predation. Wildlife managers tried hard to protect caribou in Minnesota, establishing the Red Lake game reserve in the early 1930s and resettling failed homesteaders and blocking their drainage ditches to restore bog habitat (G. D. Racey and Armstrong 2000).

​Still, by 1937 the last native band of woodland caribou in the bog area had dwindled to three cows in the Big Bog muskeg between Red Lake and Lake of the Woods. Farming and hydropower development along the Rainy River sliced off their traditional migration route between their calving grounds in Ontario and wintering habitat in Minnesota. Biologists translocated caribou from Saskatchewan to the Minnesota population starting in 1938, but without a connection to the Canadian calving ground, the population could not sustain itself (Bergerud and Mercer 1989; Manweiler 1938, 1941). Two individuals straggled down Minnesota’s rocky coastline near the Canadian border during the winter of 1980-1981. One was probably hit by a car and the other's fate is unknown. 
Picture
Recent range of Lake Superior woodland caribou. Currently, only small populations persist on Michipicoten Island southwest of Wawa, Ontario, and in the Slate Islands south of Terrace Bay. Note the discontinuous range between the Lake Superior population and the more northern populations. Credit: Ontario Ministry of Natural Resources and Forestry.

In the United States, woodland caribou are now a ghost species persisting only in place names and memories. Ontario’s population of woodland caribou has fared better, yet it too has retreated from most of Lake Superior. From 1880 to 1990, the “extinction of caribou crept northward at a rate of 34 kilometers per decade” (Badiou et al. 2011). By 1912, the Lake Superior woodland caribou had been hunted out from the western shore and Thunder Bay region. In the Lake Nipigon watershed just north of Lake Superior, caribou thrived until the Canadian National Railway came through in 1910, and then that population dwindled. Along the north central and north eastern shores of Lake Superior, woodland caribou range had remained continuous all the way north to Hudson Bay. But after World War II, mineral, forest, and energy development fragmented their range, and caribou populations became discontinuous, with the Lake Superior population cut off from the more northern populations. This wasn’t a northwards migration, with caribou expanding into new territory. Rather, it was a cascade of local extinctions driven by hunting, predation, and habitat loss.

Well into the 2000s, Lake Superior woodland caribou seemed to be hanging on. Populations persisted in Pukaskwa National Park and on refugia in the Slate Islands and Michipicoten Island. When wolves were absent, woodland caribou populations were able to increase exponentially. In the early 1980s, Gord Eason and other biologists translocated 9 caribou from the overpopulated Slate Islands to then wolf-free Michipicoten Island, and by 2012, the population had grown to nearly 1000 individuals. On the much smaller Slate Islands, free from predators, caribou populations may have risen to as many as 600 individuals.
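As a back-of-the-envelope check on that growth (a sketch only: the exact translocation year is assumed here to be 1982, and real populations do not grow perfectly exponentially), the implied average growth rate is easy to compute in Python:

import math

n0, n1 = 9, 1000     # translocated founders and the approximate 2012 count
years = 2012 - 1982  # assumed interval; the article says only "the early 1980s"
r = math.log(n1 / n0) / years
print(f"implied growth rate: {r:.3f} per year (~{100 * r:.0f}%)")
print(f"implied doubling time: {math.log(2) / r:.1f} years")

An average growth rate on the order of fifteen percent per year, with a doubling time under five years, is exactly the kind of increase that becomes possible when predators are absent.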

Wolves crossed ice bridges in the cold winter of 2013-2014, making their way onto the last two island refugia—Michipicoten Island, 16 km off the coast near Wawa, Ontario, and the Slate Islands group, about 12 km off the coast near Terrace Bay. Within two years, the population of perhaps 1600 caribou crashed to a couple dozen individuals on Michipicoten Island and several lone males on the Slates. In January 2018, biologists calculated that the entire Lake Superior woodland caribou population might have, at best, two more weeks left on earth before it went extinct.

Soon after wolves appeared on the islands in 2014, an informal coalition of the Michipicoten First Nation, local cottagers, and caribou biologists began petitioning Ontario’s Ministry of Natural Resources and Forestry to protect the Lake Superior caribou from extinction. Options included culling the island wolves (unpopular with environmentalists) or moving wolves to Isle Royale in Michigan, where the U.S. National Park Service had spent years trying to decide whether to restore them. Alternatively, if wolf control was impossible, then caribou could once again be translocated to wolf-free islands.

The Ministry, however, refused to act. In public, their biologists insisted that watching predation play out would be an interesting scientific experiment. Let natural processes find an equilibrium, one Ministry biologist suggested. In private memos that the caribou advocates obtained through Freedom of Information Requests, ministry personnel suggested that they expected extinction and were unwilling to expend resources to prevent it. 
Picture
Caribou on Michipicoten Island taken in Quebec Harbor. Credit: Christian Schroeder.

After an international media campaign that caught the attention of the New York Times, the Globe and Mail, and numerous regional media sources, the Ministry finally agreed to act. In three dramatic interventions, biologists captured caribou in large nets and helicoptered them over to wolf-free islands. Several were taken back to the Slates archipelago, where wolves had vanished after eating most of the caribou, and others to Caribou Island 35 km south of Michipicoten Island.

So far, most of the caribou seem to have survived the translocations. Calf tracks were found on a beach on the Slates this summer. Perhaps the population won’t face extinction just yet; perhaps woodland caribou won’t be completely extirpated from the Great Lakes region.

Why was the Ministry so slow to respond? Local resident Christian Schroeder, an advocate for translocation, suspected that many biologists and policymakers within the Ministry share with environmental NGOs a perception that climate change will inevitably doom the caribou. “Caribou in Canada may be doomed by climate change and habitat loss,” proclaimed one headline in Nature World News on Dec. 16, 2013 (Foley 2013). A scientific paper in Rangifer projected “complete loss of woodland caribou in Ontario if winter temperatures increase by more than 5.6ºC by 2070” (Masood et al. 2017). If caribou really are doomed by warming in the Anthropocene, expending significant resources to save them might seem a waste of money and effort.

Leo Lepiano, lands and resources consultation coordinator for the Michipicoten First Nation, notes that if the caribou vanish, many management dilemmas for the Ministry vanish as well. Currently, development in woodland caribou range cannot exceed 35% of the landscape. That limit would be lifted for the so-called “discontinuous range” between Lake Superior and the northern populations if the Lake Superior caribou died. More of the boreal forest could be opened to transmission line development, intensive forestry, and Ring of Fire mining expansion.

If woodland caribou really are doomed by climate change, why expend resources and slow development to save the last few? In fact, caribou, given half a chance, may be far less vulnerable to climate change than other northern species. Physiologically, unlike moose, which cannot forage well as temperatures warm, woodland caribou don’t experience thermal stress—at least not in the range of temperatures predicted for Lake Superior. Moose begin to experience thermal stress at 14ºC, with open-mouthed panting and reduced foraging. Caribou, however, don’t show measurable physiological responses until temperatures reach 35ºC (Racey 2005; Yousef and Luick 1975).

Popular perception holds that wintering woodland caribou require old growth boreal forest with abundant lichens. Such forest types are unlikely to persist along much of Lake Superior in a warming climate. But while caribou select lichens in winter if they’re available, they readily adapt to other habitats as well. Michipicoten Island, for example, has a Laurentian mixed forest dominated by hardwoods, and caribou thrive there in the absence of wolves.

Caribou responses to historical climate change offer clues to how caribou might respond now. At the end of the last glacial maximum, the fossil record shows that caribou vanished from their warming refugia. Some anthropologists interpret this climate history as evidence of the profound vulnerability of caribou to future climate change (Grayson and Delpech 2005). But this interpretation discounts the agency of caribou themselves. Post-glacial caribou didn’t simply go extinct (as they later did during the twentieth-century range reductions around Lake Superior). Rather, caribou chased the melting ice north, exploring new environments that were opening up as the climate warmed and expanding their range across the circumpolar north. North American caribou populations actually expanded at the end of the Pleistocene, even as other Pleistocene megafauna (with the exception of brown bear and tundra muskox) went extinct. Migration was central to caribou post-Pleistocene resiliency, suggesting that they can be resilient now if their habitats are connected (Mann et al. 2015).

Americans typically imagine caribou as creatures of distant wilderness, a remnant of primeval nature that was irrevocably lost to industrialization. Woodland caribou, in this discourse, need vast, untouched wilderness and will be doomed by climate change and the Anthropocene.  These beliefs are powerful, but they are flawed—and they let agencies avoid taking pragmatic actions today to restore woodland caribou. There’s nothing inevitable or mysterious about the demise of woodland caribou. Specific policy decisions led to their declines in the 20th century, and reversing those policy decisions has the potential to lead to their rebound, but not if we continue imagining woodland caribou as creatures of an untouched primeval forest.

Current rhetoric about woodland caribou mirrors the rhetoric of conservationists at the turn of the twentieth century. As Teddy Roosevelt wrote in 1902, “it would seem the race must become extinct in a comparatively brief period” (p. 39). When agencies and NGOs talk about woodland caribou as too vulnerable to be sustained in the Anthropocene, it becomes a self-fulfilling prophecy. Agencies and environmental groups become reluctant to restore them, for restoration suggests continuing care: a need to keep investing time, resources, and energy to manage predators and migration routes. For woodland caribou to thrive, we need to rethink our assumptions about continuing human stewardship of migratory wildlife in a warming world.
​
Caribou will indeed dwindle in a warming world if we restrict their migrations and refuse to manage their predators. But climate change should not be an excuse to give up on the management strategies here and now that could keep them from extinction. Climate change isn’t going to doom woodland caribou. Human policy decisions, however, might.

Nancy Langston is Distinguished Professor of Environmental History at Michigan Technological University. Her most recent book is Sustaining Lake Superior: An Extraordinary Lake in a Changing World (Yale 2017). She is currently working on an environmental history of woodland caribou and common loons in the Anthropocene.

Works Cited:

Badiou, Pascal, et al. 2011. “Keeping Woodland Caribou in the Boreal Forest: Big Challenge, Immense Opportunity.” International Boreal Conservation Panel.

Bergerud, A. T., and W. E. Mercer. 1989. “Caribou Introductions in Eastern North America.” Wildlife Society Bulletin (1973-2006) 17 (2): 111–20.

Dybas, Cheryl Lyn. 2015. “Last of the Gray Ghosts: Uncovering the Secret Lives of Our Woodland Caribou.” Lake Superior Magazine. October 1, 2015. http://www.lakesuperior.com/api/content/abae85ae-16cd-11e6-ad59-22000b078648/.

Foley, James. 2013. “Caribou in Canada May Be Doomed by Climate Change and Habitat Loss.” Nature World News, December 16, 2013. https://www.natureworldnews.com/articles/5322/20131216/caribou-canada-doomed-climate-change-habitat-loss.htm.

Grayson, Donald K., and Françoise Delpech. 2005. “Pleistocene Reindeer and Global Warming.” Conservation Biology 19 (2).

Gwich’in Steering Committee. 2005. “A Moral Choice for the United States: The Human Rights Implications for the Gwich’in of Drilling in the Arctic National Wildlife Refuge.”

Langston, Nancy. 2016. “Mining the Boreal North.” American Scientist, June. https://www.americanscientist.org/article/mining-the-boreal-north.

Mann, Daniel H., Pamela Groves, Richard E. Reanier, Benjamin V. Gaglioti, Michael L. Kunz, and Beth Shapiro. 2015. “Life and Extinction of Megafauna in the Ice-Age Arctic.” Proceedings of the National Academy of Sciences 112 (46): 14301–6. https://doi.org/10.1073/pnas.1516573112.

Manweiler, J. 1938. “Wildlife Management in Minnesota’s ‘Big Bog.’” The Minnesota Conservationist, 14–15.

———. 1941. “Minnesota’s Woodland Caribou.” The Conservation Volunteer 1 (4): 34–40.

Masood, Sara, Thomas M. Van Zuiden, Arthur R. Rodgers, and Sapna Sharma. 2017. “An Uncertain Future for Woodland Caribou (Rangifer Tarandus Caribou): The Impact of Climate Change on Winter Distribution in Ontario.” Rangifer 37 (1): 11–30. https://doi.org/10.7557/2.37.1.4103.

Racey, G. D., and T. Armstrong. 2000. “Woodland Caribou Range Occupancy in Northwestern Ontario: Past and Present.” Rangifer 20 (5): 173–84. https://doi.org/10.7557/2.20.5.1643.

Racey, Gerald D. 2005. “Climate Change and Woodland Caribou in Northwestern Ontario: A Risk Analysis.” Rangifer 25 (4): 123–36. https://doi.org/10.7557/2.25.4.1777.

Roosevelt, Theodore. 1902. The Deer Family. New York: Macmillan and Co., Ltd.

Vitebsky, Piers. 2005. The Reindeer People: Living with Animals and Spirits in Siberia. Boston: Houghton Mifflin Harcourt.

Yukon Beringia Interpretive Centre. http://www.beringia.com/exhibit/ice-age-animals/caribou.

Ecological Militarism: The Unusual History of the Military’s Relationship with Climate Change

5/25/2018

 
Adeene Denton, Brown University
Picture
A view of the National Hurricane Center in its early years.

Many historians have discussed the influence of the Cold War on the development of specific disciplines within the broader field of earth science. However, few have touched on the U.S. military’s study of, and attempts to capitalize on, climate change, an interest that accelerated rapidly during the Cold War. The decades-long studies sponsored by the Departments of Defense and Energy during and after the Cold War produced a wide array of attempts to transform the earth itself into a political and environmental weapon.
 
The concept of anthropogenic climate change (also known as global warming) captured the world’s attention when James Hansen and other scientists testified before Congress in a series of hearings between 1986 and 1988. However, it had for decades been a subject of debate in smaller scientific, political, and corporate communities. Throughout the Cold War, American political and military leaders considered the potential of specifically directed, human-engineered climate change. They believed that cloud seeding and the atomic bomb, among other tools, would allow them to wield the geological force necessary to control Earth’s climate.
 
Anthropogenic climate change was therefore a concept that excited many U.S. military planners during the early Cold War. Yet these planners, and the scientists and politicians who supported their efforts, struggled with the colossal scale of their desires. They yearned to use human technology to shape the Earth to their political will, but found themselves stymied by the very technologies and politics they sought to control.
When the Earth Became Global
For a science that is built on the concept of change over inconceivably long timescales, earth science developed at a breakneck pace during the Cold War. As earth science grew, both in the number of people working under its banner and in the amount of data they had at their disposal, the field subdivided rapidly (in a case of science imitating life). The 1950s saw a fascinating dual development within earth science: scientists were increasingly recruited to work with and for their national militaries, even as they developed datasets and connections with other scientists that were global in nature.
 
Scientists who wanted to study the history of the earth were looking for datasets that spanned the world, not just their country’s borders. They could only compile them through extensive collaboration with other nations, on the one hand, or, on the other, the vast quantities of funding and manpower that only a major military could offer. Oceanographers chose the latter option as their best chance for technological exploration of the oceans. U.S. naval vessels became the hosts of scientific research cruises, as they had the greatest mobility and most advanced technology of any ships in the world. 
 
Many scientists collecting these data were loath to discuss the tensions between their research and any political agendas, but it was certainly on their minds – and on the minds of their benefactors. For the Navy and the other branches of the U.S. military, scientific projects typically served multiple purposes. The data collected provided both a research boost to a specific scientific community, and information that the military might be able to use. Radio arrays in the Caribbean, for example, which were ostensibly used for ocean floor sounding and bathymetric mapping, also scoured the sea for Soviet submarines. Overall, the military hoarded big data about the earth and its climate for use in future tactical and strategic plans. Climate and environmental science became yet another venue in which the Cold War was fought. Yet the relationship between climate science and the political interests that funded it was an uneasy one.
 
When researchers in conversation with the military realized that humanity was becoming a force that could act on a geologic scale, the possibility of extending U.S. control to the environment became extremely appealing to military planners. As American oceanic scientists saw the bathymetry of the seafloor for the first time, the political and military forces in Washington were haggling over just how much ocean the U.S. could control outside its borders. American oil companies like Chevron and Mobil became technological giants over the course of the Cold War by expanding their search for petroleum to South America and Africa, and their profits seemed to suggest that the earth’s interior was also within the scope of human knowledge and jurisdiction. After 1957, the dawn of the space age seemed to herald the beginning of total surveillance from above, and for the military, planetary surveillance was the beginning of integrated planetary control.
 
The more scientists discovered about the earth on which they lived, the more their military partners sought to use that information to bring the earth to heel. All of these ideas seemed to coalesce during the Cold War, yielding decades of oscillating cooperation and struggle between the U.S. military and the scientists it patronized.
Picture
Maurice Ewing in 1948. Ewing was one of the many earth scientists who used navy vessels to obtain powerful oceanic datasets. Columbia University.

The Military as a Geological Force
​During the early Cold War, U.S. military planners often proposed schemes to transform environments on immense scales, only to quickly abandon them either in the proposal stage or after initial testing. The initial popularity of such ideas, as well as their typically quick demise, owed much to military ambitions far exceeding capability. Environmental control was an undeniably powerful concept, as it promised ways to turn the tides of war through untraceable methods, or from continents away. Unfortunately, the military’s tactical plans to utilize newfound climate information often took unusual (and unusable) turns because much of the information was very new, and because experts consulted were not always well versed in the information they handled. Climate science was a field in its infancy, and not everyone who claimed to speak on its behalf understood the data.
 
A classic example of this phenomenon was a proposal developed by Hungarian-American polymath John von Neumann to spread colorants on the ice sheets of Greenland and the Antarctic. By darkening the ice sheets, the military could reduce their reflectivity, or albedo, which would warm the poles, melt the ice, and ultimately flood the coastlines of hostile nations. It was an absurd idea proposed by a brilliant physicist who did not yet grasp how global the effects of such a plan would be. Melting the Greenland ice sheet in particular would have had drastic impacts on the North American continent as well as on the intended targets.
 
The U.S. Departments of Energy and Defense, as well as President Eisenhower, were also interested in using humanity’s newfound power for good, however. Operation Plowshare, for example, was the name given to a decades-long series of attempts to use nuclear explosives for peaceful purposes, particularly construction. It resulted in proposals such as Project Chariot in 1958, which called for the use of five thermonuclear devices to construct a new harbor on the North Slope of Alaska. Scientists were often split in their reactions to these proposals, a conflict provoked by their valuable relationship with the U.S. government, on the one hand, and the risk to terrestrial environments they were only beginning to understand, on the other.
Picture
The original Project Chariot design, which would have utilized 2.4 megatons of explosives.

Mud, Not Missiles: Weaponizing Weather in Vietnam​
When the U.S. military started seriously considering the possibility of human-driven climate warfare, its planners focused on a concept that has occupied human minds for centuries: controlling the weather. Weather modification has preoccupied scientists, politicians, and the military in the U.S. since James Espy, the first “national meteorologist” employed by the military, studied artificially produced rain in the 1840s. In the early Cold War, such projects continued to intrigue military planners.
 
In 1962, the U.S. military launched Project Stormfury, an attempt by researchers at the Naval Ordnance Test Station (NOTS) to test weather control by seeding the clouds of tropical cyclones. They hoped to weaken the hurricanes that regularly wreaked havoc on the southern and eastern coasts of the United States. They theorized that the addition of silver iodide to hurricane clouds would disrupt the inner structure of the hurricanes by freezing the supercooled water inside. Yet their cloud seeding flights revealed that the amount of precipitation in hurricanes did not appear to correlate at all with whether a cloud had been seeded or not.
 
In the meantime, however, members of the U.S. high command used the theory behind Stormfury as a basis for two similar operations in Asia: Projects Popeye and Gromet. Despite Project Stormfury’s failure to deliver measurable results, the need for any kind of interference that could harry the Viet Cong led military planners to rush Popeye into the testing phase. The scientists recruited to assist with Popeye slightly modified Stormfury’s cloud seeding approach. They decided to use lead iodide and silver iodide in large, high-altitude, cold clouds, which (in theory) would then “blow up” and “drop large amounts of rain” over an approximately targeted area.
 
If successful, Popeye would increase rainfall over northern Vietnam and lengthen the monsoon season itself, hampering the Viet Cong by destroying supply lines and forcing them to contend with landslides, washed-out roads, and destroyed river crossings. American officials had also promised the Indian government that they could seed clouds to end a crippling drought in India. Cloud seeding, military planners believed, could both win allies and cripple enemies.
 
Despite the eagerness and ambition with which the U.S. military undertook testing of this method in both India (with government permission) and Laos (without informing the Laotian government), it was ultimately unclear whether these attempts at “rainmaking” were effective at all. The utmost secrecy with which Projects Gromet (in India) and Popeye (in Laos and Vietnam) were undertaken limited attempts to measure and verify their success. Gromet alone cost a minimum of $300,000 (nearly $2 million in present-day US dollars), yet by 1972 U.S. officials had to concede that its effectiveness had been unclear at best. The Indian drought ended, but no one could say whether it was the U.S. that had done the job.
Picture
1966 photo of the crew and personnel of Project Stormfury. National Oceanic and Atmospheric Administration.

Politics, Military, and Oceans
For scientists, the ocean represents a vast biological and chemical reservoir whose sheer size makes any fluctuation in oceanic conditions a crucial aspect of climate change. In the early Cold War, the oceans also became a focus for the development of poorly conceived climate control plans, as well as a site for political posturing. There were two basic prongs to the American (as well as other countries’) political and military approach to the oceans during the Cold War: first, the oceans were seen as a way to extend a country’s sovereign borders; and second, as a mechanism for disposing of unsavory nuclear waste.
 
As American scientists followed the Navy to exceedingly remote places in search of new datasets, the question of nationalism followed them. Where could the Navy “plant the flag” as part of its surveys? Polar scientists who sought direct access to their regions of interest – the Arctic and Antarctic – were hamstrung by the security interests not just of their own nations, but also of others. The U.S. government, which noted the conveniently large strip of polar access given by the USSR’s ~7,000 km of Arctic coastline, pushed to extend its sovereignty as far off Alaska’s northern continental shelf as it could. Where scientists saw the Arctic as a fascinating environment and ecosystem, the U.S. military saw a direct route to its biggest enemy.
 
In Antarctica, meanwhile, seven countries had laid claim to large swaths of the frozen continent by the International Geophysical Year (1957-1958). The British had already secretly built a base on the Antarctic Peninsula during World War II to supersede other claims to the area. Establishing the Antarctic continent as a zone as free as possible from geopolitics (as well as from exploitative capitalist interests) was a difficult task, and one that took decades. It took until the Clinton administration for oil companies to be officially banned from prospecting on or near the continent, essentially reserving Antarctica as a place where only collaborative science could reign.
 
As the U.S. and other major powers jockeyed with each other for territory in the most remote areas of the world, they also used the ocean as a garbage disposal for some of humanity’s most toxic waste. Between 1946 and 1962, the United States dumped some 86,000 containers of radioactive waste into the oceans, while Britain, the USSR, and other developing nuclear powers did much the same. Meanwhile, scientists from the International Scientific Committee on Ocean Research started collecting data on the possible dangers associated with radioactive waste. However, governments funding their research had little interest in the results until U.S. waste washed back up onto American shores where local fishermen found and identified it.
 
For years, the ocean was convenient to U.S. officials. Its volume seemed limitless: perfect for permanently keeping radioactive waste, and any information about it, from the public eye.  Fortunately, oceanic waste dumping did not stay a secret forever. There was, it seemed, no convenient way to dispose of radioactive waste. Dumping it on land provoked public criticism at home, and dumping it in the oceans invited international criticism, particularly from the Soviet Union, whose government claimed to have never done such a thing. In fact, it did; the USSR sank eighteen nuclear reactors in addition to packaged waste, a fact only revealed in declassified archives after the Soviet Union collapsed.
 
To describe the relationship between the U.S. government and the oceans during the Cold War as fraught would be an understatement. The Navy wanted the ocean to be an effective source of information on Soviet activities, a convenient landfill, and a platform to extend American political authority. In the end, the Navy could not have it all. By the end of the Cold War, the Navy and its political supporters had to concede to public and scientific pressure to back away from large-scale projects that overtly exploited the ocean. The power jockeying and technological exploitation during the Cold War did have lasting effects, however. Today, the oceans remain a site of intense monitoring and political grandstanding.
Picture
A comparison of the seven land grabs made by countries in Antarctica. The U.S. and U.S.S.R. did not stake any specific claims. Image from Nature Geoscience.

The Cold War and the Warm Future
​In his speech to the National Academy of Sciences in 1963, President Kennedy noted that human science could now “irrevocably alter our physical and biological environment on a global scale.” This was a fundamental realization – that humans could change the world, and they could do it in a matter of minutes to years if they chose. The Cold War forced scientists and their military benefactors to realize that humans had become more efficient at shaping the Earth than most geological forces in existence. It was tempting, then, for Cold War militaries to investigate just how far that power could go, in both destructive and constructive ways.
 
Can we lengthen or shorten the seasons? The military tried it. Can we disappear our worst waste in the oceans? Every country with nuclear waste tried it.  How much do we need to know about the environment before we can begin to reshape whole regions to suit nationalistic goals and objectives? For the military during the Cold War, the answer was almost always “we know enough.” For some of the scientists they employed, and many more whom they didn’t, the answer was “we may never know enough.”
 
Popular discussions rarely touch on the outlandish attempts to control nature during the early Cold War. In our present age of polarization around the issue of climate change, perhaps they should. Politicians and scientists too often assume that humanity will someday engineer a solution to climate change, but the Cold War’s history reveals that our grandest schemes may be the most susceptible to failure. 

Selected References
​Doel, R.E. and K.C. Harper (2006). “Prometheus Unleashed: Science as a Diplomatic Weapon in the Lyndon B. Johnson Administration.” Osiris 21, 66-85.
 
Fleming, J.R. (2010). Fixing the Sky: The Checkered History of Weather and Climate Control. New York: Columbia University Press.
 
Hamblin, J.D. (2002). “Environmental Diplomacy in the Cold War: The Disposal of Radioactive Waste at Sea during the 1960s.” The International History Review 24 (2), 348-375.
 
Marzec, R.P. (2015). Militarizing the Environment: Climate Change and the Security State. Minneapolis: University of Minnesota Press.
 
Naylor, S., Siegert, M., Dean, K., and S. Turchetti (2008). “Science, geopolitics and the governance of Antarctica.” Nature Geoscience 1.
 
O’Neill, Dan (1989). “Project Chariot: How Alaska escaped nuclear excavation.” Bulletin of the Atomic Scientists. 45 (10).
 
“Text of Kennedy’s Address to Academy of Sciences,” New York Times, Oct 23, 1963, 24.

Introducing the Tipping Points Project

4/30/2018

 
Dr. Dagomar Degroot, Georgetown University
The Tipping Points map at the time of publication (April 30th, 2018).

In late 2016, Randall Bass, vice provost for education at Georgetown University, asked me to help design and teach a pilot project that would experiment with a new way of introducing climate change to undergraduate students. The Core Pathway on Climate Change initiative, as we came to call it, ultimately allowed students to mix and match seven-week courses - "modules" - to find their own pathway through the scholarship of climate change. Each module explored climate change from a different disciplinary vantage point, from English Literature through Environmental History and the Earth Sciences. 

Students could, for example, mix modules on literature and climate change with modules on the theology and philosophy of climate change. Alternatively, they could pair modules on the physics and chemistry of climate change with environmental science modules that surveyed the impacts of global warming on water use and the ecology of cities. We scheduled every module for the same time, and we capped each at around 20 students. The program attracted over 100 students, many of whom were eager to learn more about climate change and anxious to be part of solving its pressing challenges. 

After and in some cases during every module, all the students in the program convened for "integrative days" in large halls or auditoriums. Often, we would guide students through an activity that encouraged them to draw on the distinct disciplinary insights they had learned in their modules. In our final integrative day, former Vice President Al Gore joined us for a series of meetings and talks that addressed the gravity of the climate change crisis and the prospects for overcoming it. 
Picture
Al Gore gives his final talk at the last integrative day of the Core Pathways Program at Georgetown University.

I taught two environmental history modules in the Core Pathway on Climate Change: one that surveyed how societies coped with the climatic cooling of the Little Ice Age from the thirteenth through the nineteenth centuries, and another that explored the causes, consequences, and controversies of anthropogenic global warming. Both modules covered huge topics, of course, but the seven-week format actually helped me emphasize what was most important about them.

Students in my Little Ice Age module learned how scholars work together to reconstruct past climate changes, studied what the history of natural climatic variability tells us about the causes of climate change, and evaluated what made societies vulnerable - or resilient - in the face of climatic cooling. Students in my global warming module learned about the "discovery" of global warming; weighed scholarship about its environmental and human consequences; traced the history of "geoengineering" schemes; and debated the causes of government inaction. 

The innovative design of the Core Pathways Program and the quality of our students also encouraged me to experiment with different kinds of assignments. Early in 2017, I won a Georgetown Environment Initiative grant for a new initiative - the "Tipping Points Project" - that aims to raise popular awareness about the consequences of global climatic trends for local communities. My grant covers events at Georgetown that connect climate scholars who actively reach out to the public, but from the start I imagined that the heart of the Tipping Points Project would be a map littered with icons that directed visitors to short, jargon-free articles on the impacts of climate change in local communities. The focus would be on the United States: still the world's superpower, and the only country poised to withdraw from the Paris Agreement on Climate Change. 

As I prepared to teach my first Core Pathways modules, I realized that some of my assignments could require my students to write first drafts of these articles. With that in mind, I created a first edition of the Tipping Points map and website. I added a page to the website that listed online tools that visitors and students could easily use to reconstruct and project climatic trends - temperature, precipitation, sea levels, and more - in local communities. To see if students could actually use these tools to write compelling articles - and to give them templates for those articles - I drafted two short pieces for the website. The first examined how climate change would likely impact the environments and people of Washington, DC, while the second explored the impacts of past climate change in Tulare County, California. 
Picture
Many students gravitated towards the Climate Central "Surging Seas, Mapping Choices" program, which represents the impacts of rising sea levels on coastal communities under different emissions scenarios (and absent adaptive responses). 

Students drew on these templates to write their own Tipping Points articles. In our Little Ice Age module, they used the tools on the Tipping Points website to reconstruct the impacts of past climate change in local communities, while in our Global Warming module, they used other tools to project the consequences of climate change, sometimes in the same communities. Often, they chose to write about counties and cities that had special significance for them: hometowns, places they visited, places they aspired to live in. Sometimes, they wrote for family members they hoped to persuade. 

I stressed that every article should follow a simple format. It had to have three parts devoted to, first, the impacts of climate change on local environments; second, how we know that those impacts had happened or would likely happen; and third, the likely consequences of those environmental impacts for local communities. I challenged students to avoid jargon while clearly explaining the mechanisms behind the relationships they uncovered. It was not enough for them to mention, for example, that global warming would likely make severe hurricanes more common. They had to explain how warmer waters, changing atmospheric circulation, and rising sea levels would likely all play a role in intensifying the worst hurricanes and magnifying their human impacts.

Perhaps above all, students learned just how difficult - but how important - it can be to communicate complex ideas in plain English. 

In the end, my students submitted roughly seventy Tipping Points articles in my modules. As I painstakingly edited and expanded each article, I decided that the icons on our map should convey something about the articles to which they linked. First, I opted to color-code the icons: blue for articles that dealt with past climate change, and red for articles that projected future climate change. I also figured out a way to allow visitors to view only articles about past or future climate change, if they played with the filters on our map.

Second, I decided to use icons that visualize the major weather trends in each article. A thermometer represents temperature changes; waves represent rising sea levels; cyclones represent hurricanes; raindrops represent liquid precipitation; and suns represent droughts (okay, that one is a little less straightforward). As I populated the Tipping Points map with icons, I was struck by how clearly it visualized the extent to which climate change had already altered environments and impacted communities across the United States. Droughts already routinely stretch across the southwest; hurricanes and rising sea levels already imperil the east coast. And of course, there is so much more to come. 
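For readers curious how a map like this can be wired together, here is a minimal sketch, in Python, of the underlying data model. The field names and example records are invented for illustration; the actual Tipping Points site may be organized quite differently.

from dataclasses import dataclass

# Icon and color vocabularies described above: icons encode the dominant
# weather trend, colors encode past (blue) versus future (red) articles.
ICONS = {"temperature": "thermometer", "sea level": "waves",
         "hurricane": "cyclone", "precipitation": "raindrop",
         "drought": "sun"}
COLORS = {"past": "blue", "future": "red"}

@dataclass
class Article:
    place: str
    timeframe: str  # "past" or "future"
    trend: str      # key into ICONS

    def marker(self) -> dict:
        """Return the map marker for this article."""
        return {"place": self.place, "icon": ICONS[self.trend],
                "color": COLORS[self.timeframe]}

# Two hypothetical records, loosely modeled on the template articles.
articles = [
    Article("Washington, DC", "future", "sea level"),
    Article("Tulare County, CA", "past", "drought"),
]

# A map filter is just a predicate over articles, which is what makes it
# easy to let visitors toggle between past and future articles.
future_only = [a.marker() for a in articles if a.timeframe == "future"]
print(future_only)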

The Tipping Points map currently features icons that link to nearly twenty student-written, and professor-revised, articles. Each includes striking visualizations, many created by students using the accessible tools listed on our website. By the end of the summer, the map should be teeming with as many as seventy icons. Bathsheba Demuth - our assistant director at HistoricalClimatology.com and the Climate History Network - may also ask her students at Brown University to write Tipping Points articles. That would allow us to host over 100 articles by spring 2019. 

Going forward, we will use our online resources - including our social media feeds - to direct visitors to new articles as they come online. We will permanently link to the Tipping Points project under the "resources" tab of HistoricalClimatology.com. Ultimately, I hope that the Tipping Points map will provide a first stop for ordinary people interested in the impact of climate change in their communities. It may also serve as an example of the ways in which instructors can use simple digital platforms to allow undergraduate students to create resources that have both pedagogical value in the classroom, and practical value in the real world. 

​So please, visit the Tipping Points website, and stay tuned for much more! ​