Prof. María Cristina García, Cornell University. People displaced by extreme weather events and slower-developing environmental disasters are often called “climate refugees,” a term popularized by journalists and humanitarian advocates over the past decade. The term “refugee,” however, has a very precise meaning in US and international law, and that definition limits those who can be admitted as refugees and asylees. Calling someone a “refugee” does not mean that they will be legally recognized as such and offered humanitarian protection. The principal instruments of international refugee law are the 1951 United Nations Convention Relating to the Status of Refugees and its 1967 Protocol, which defined a refugee as: "any person who owing to well-founded fear of being persecuted for reasons of race, religion, nationality, membership of a particular social group or political opinion, is outside the country of his nationality and is unable, or owing to such fear, is unwilling to avail himself of the protection of that country; or who, not having a nationality and being outside the country of his former habitual residence as a result of such events, is unable or, owing to such fear, is unwilling to return to it." [i] This definition, on which current U.S. law is based, does not include any reference to the “environment,” “climate,” or “natural disaster” that might allow consideration of those displaced by extreme weather events and/or climate change. In some regions of the world, other legal instruments have supplemented the U.N. Refugee Convention and Protocol, and these instruments offer more expansive definitions of refugee status that might extend protections to the environmentally displaced. The Organization of African Unity’s “Convention Governing the Specific Aspects of Refugee Problems in Africa (1969)” includes not only external aggression, occupation, and foreign domination as the motivating factors for seeking refuge, but also “events seriously disturbing the public order.”[ii] In the Americas, the non-binding Cartagena Declaration on Refugees (1984), crafted in response to the wars in Central America, set regional standards for providing assistance not just to those displaced by civil and political unrest but also to those fleeing “circumstances which have seriously disturbed the public order.”[iii] The Organization of American States has also passed a series of resolutions offering member states additional guidance on how to respond to refugees, asylum seekers, stateless persons, and others in need of temporary or permanent protection. In Europe, the European Union Council Directive (2004) has identified the minimum standards for the qualification and status of refugees or those who might need “subsidiary protection.”[iv] Together, these regional and international conventions, protocols, and guidelines acknowledge that people are displaced for a wide range of reasons and that they deserve respect and compassion and, at the bare minimum, temporary accommodation. Climate change has been absent from these discussions, perhaps because environmental disruptions such as hurricanes, earthquakes, and drought were long assumed to be part of the “natural” order of life, unlike war and civil unrest, which are considered extraordinary, man-made, and thus avoidable.
The expanding awareness that societies are accelerating climate change to life-threatening levels requires that countries reevaluate the populations they prioritize for assistance, and adjust their immigration, refugee, and asylum policies accordingly. Under current U.S. immigration law, those displaced by sudden-onset disasters and environmental degradation do not qualify for refugee status or asylum unless they are able to demonstrate that they have also been persecuted on account of race, religion, nationality, membership in a particular social group, or political opinion. This wasn’t always the case: indeed, U.S. refugee policy once recognized that those displaced by “natural calamity” were vulnerable and deserved protection. The 1953 Refugee Relief Act, for example, defined a refugee as “any person in a country or area which is neither Communist nor Communist-dominated, who because of persecution, fear of persecution, natural calamity or military operations is out of his usual place of abode and unable to return thereto… and who is in urgent need of assistance for the essentials of life or for transportation.”[v] The 1965 Immigration Act (Hart-Celler Act) established a visa category for refugees that included persons “uprooted by catastrophic natural calamity as defined by the President who are unable to return to their usual place of abode.” [vi] Between 1965 and 1980, no refugees were admitted to the United States under the “catastrophic natural calamity” provision, but that did not stop legislators from opposing its inclusion in the refugee definition. Some legislators argued that it was inappropriate to offer permanent resettlement to people who were only temporarily displaced, while others took issue on the grounds that it undermined the economic recovery of hard-hit countries by draining them of their most highly-skilled citizens. The 1980 Refugee Act subsequently eliminated any reference to natural calamity or disaster, in line with the United Nations’ definition of refugee status. In recent decades, scholars, advocates, and policymakers have called for a reevaluation of the refugee definition in order to grant temporary or permanent protection to a wider range of vulnerable populations, including those displaced by environmental conditions. At present, U.S. immigration law offers very few avenues for entry for so-called “climate refugees”: options are limited to Temporary Protected Status (TPS), Deferred Enforced Departure (DED), and Humanitarian Parole. The 1990 Immigration Act provided the statutory provision for TPS: according to the law, those unable to return to their countries of origin because of an ongoing armed conflict, environmental disaster, or “extraordinary and temporary conditions” can, under some conditions, remain and work in the United States until the Attorney General (after 2003, the Secretary of Homeland Security) determines that it is safe to return home. [vii] There is one catch: in order to qualify for TPS, one must already be physically present in the United States—as a tourist, student, business executive, contract worker, or even as an unauthorized worker. TPS is granted on a 6-, 12-, or 18-month basis, renewed by the Department of Homeland Security (DHS) if the qualifying conditions persist. TPS recipients do not qualify for state or federal welfare assistance, but they are allowed to live and work in the United States until federal authorities determine that it’s safe to return.
In the meantime, they can send much-needed remittances to their families and communities back home to assist in their recovery. TPS is one way, albeit imperfect, that the United States exercises its humanitarian obligations to those displaced by environmental disasters and climate change. It is based on the understanding that countries in crisis require time to recover; if nationals living abroad return in large numbers, in a short period of time, they can have a destabilizing effect that disrupts that recovery. Countries affected by disaster must meet certain conditions in order to qualify: first, the Secretary of Homeland Security must determine that there has been a substantial disruption in living conditions as a result of a natural or environmental disaster, making it impossible for a government to accommodate the return of its nationals; and second, the country affected by environmental disaster must officially petition for its nationals to receive TPS (a requirement that is not imposed on countries affected by political violence). However, environmental disaster does not automatically guarantee that a country’s nationals will receive temporary protection. The U.S. federal government has total discretion, and the decision-making process is not immune to domestic politics. Deferred Enforced Departure (DED) is another status available to those unable to return to hard-hit areas: DED offers a stay of removal as well as employment authorization, but the status is most often used when TPS has expired. In such circumstances, the president has the discretionary (but rarely used) authority to allow nationals to remain in the United States for humanitarian or foreign policy reasons, or until Congress can pass a law that offers a permanent accommodation. [viii] Humanitarian “parole” is yet another recourse for the environmentally displaced. The 1952 McCarran-Walter Act granted the attorney general discretionary authority to grant temporary entry to individuals, on a case-by-case basis, if deemed in the national interest. Since 2002, humanitarian parole requests have been handled by the United States Citizenship and Immigration Services (USCIS), and are granted much more sparingly than during the Cold War. USCIS generally grants parole only for one year (renewable on a case-by-case basis). [ix] Parole does not place an individual on a path to permanent residency or citizenship, nor does it make applicants eligible for welfare benefits; only occasionally are “parolees” granted the right to work, allowing them to earn a livelihood and send remittances to communities hard hit by political and environmental disruptions. TPS, DED, and humanitarian parole are only temporary accommodations for select and small groups of people. They are an inadequate response to the humanitarian crisis that will develop in the decades to come. Scientists forecast that in an era of unmitigated and accelerated climate change, sudden-onset disasters will become fiercer, exacerbating poverty, inequality, and weak governance, and forcing many more people to seek safe haven elsewhere—perhaps in the hundreds of millions over the next half-century.
In the current political climate, it’s hard to imagine that wealthier nations like the United States will open their doors to even a tiny fraction of these displaced peoples; however, the more economically developed countries must do more to honor their international commitments to provide refuge, especially to those in developing areas who are suffering from environmental conditions they did not create. In the decades to come, as legislators try to mitigate the effects of climate change and help their populations become resilient, they must also share the burden of human displacement caused by the failure to act quickly enough. María Cristina García, an Andrew Carnegie Fellow, is the Howard A. Newman Professor of American Studies in the Department of History at Cornell University. She is the author of several books on immigration, refugee, and asylum policy. She is currently completing a book on the environmental roots of refugee migrations in the Americas. [i] United Nations, “Convention and Protocol Relating to the Status of Refugees,” 14, http://www.unhcr.org/en-us/3b66c2aa10. The 1951 Convention limited the focus of assistance to European refugees in the aftermath of the Second World War. The 1967 Protocol removed these temporal and geographic restrictions. The United States did not sign the 1951 Convention but it did sign the 1967 Protocol.
[ii] The OAU convention stated that the term refugee should also apply to “every person who, owing to external aggression, occupation, foreign domination or events seriously disturbing the public order in either part or the whole of his country of origin or nationality, is compelled to leave his place of habitual residence in order to seek refuge in another place outside his country of origin or nationality.” Organization of African Unity, “Convention Governing the Specific Aspects of Refugee Problems in Africa,” http://www.unhcr.org/en-us/about-us/background/45dc1a682/oau-convention-governing-specific-aspects-refugee-problems-africa-adopted.html accessed September 15, 2017. [iii] The Cartagena Declaration stated that “in addition to containing elements of the 1951 Convention…[the definition] includes among refugees, persons who have fled their country because their lives, safety or freedom have been threatened by generalized violence, foreign aggression, internal conflicts, massive violations of human rights or other circumstances which have seriously disturbed the public order.” “Cartagena Declaration on Refugees,” http://www.unhcr.org/en-us/about-us/background/45dc19084/cartagena-declaration-refugees-adopted-colloquium-international-protection.html [iv] European Union, “Council Directive 2004/83/EC,” April 29, 2004, https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=celex%3A32004L0083 accessed March 20, 2018. [v] Refugee Relief Act of 1953 (P.L. 83-203), https://www.law.cornell.edu/topn/refugee_relief_act_of_1953. [vi] Immigration and Nationality Act of 1965 (P.L. 89-236), https://www.govinfo.gov/content/pkg/STATUTE-79/pdf/STATUTE-79-Pg911.pdf [vii] Immigration Act of 1990 (P.L. 101-649), https://www.congress.gov/bill/101st-congress/senate-bill/358 [viii] USCIS, “Deferred Enforced Departure,” https://www.uscis.gov/humanitarian/temporary-protected-status/deferred-enforced-departure. [ix] The humanitarian parole authority was first recognized in the 1952 Immigration Act (more popularly known as the McCarran-Walter Act). See http://library.uwb.edu/Static/USimmigration/1952_immigration_and_nationality_act.html. See also “Sec. 212.5 Parole of aliens into the United States,” https://www.uscis.gov/ilink/docView/SLB/HTML/SLB/0-0-0-1/0-0-0-11261/0-0-0-15905/0-0-0-16404.html Prof. Sean Kheraj, York University. This is the fifth post in a collaborative series titled “Environmental Historians Debate: Can Nuclear Power Solve Climate Change?” hosted by the Network in Canadian History & Environment, the Climate History Network, and ActiveHistory.ca. If nuclear power is to be used as a stop-gap or transitional technology for the de-carbonization of industrial economies, what comes next? Energy history could offer new ways of imagining different energy futures. Current scholarship, unfortunately, mostly offers linear narratives of growth toward the development of high-energy economies, leaving little room to imagine low-energy futures. As a result, energy historians have rarely presented plausible ideas for low-energy futures and instead dwell on apocalyptic visions of poverty and the loss of precious, ill-defined “standards of living.” The fossil fuel-based energy systems that wealthy, industrialized nation states developed in the nineteenth and twentieth centuries now threaten the habitability of the Earth for all people. Global warming lies at the heart of the debate over future energy transitions.
While Nancy Langston makes a strong case for thinking about the use of nuclear power as a tool for addressing the immediate emergency of carbon pollution of the atmosphere, her arguments left me wondering what energy futures will look like after de-carbonization. Will industrialized economies continue with unconstrained growth in energy consumption, expand reliance on nuclear power, and press forward with new technological innovations to consume even more energy (Thorium reactors? Fusion reactors? Dilithium crystals?)? Or will profligate energy consumers finally lift their heads up from an empty trough and start to think about ways of living with less energy? Unfortunately, energy history has not been helpful in imagining low-energy possibilities. For the past couple of years, I’ve been getting familiar with the field of energy history and, for the most part, it has been the story of more. [1] Energy history is a field related to environmental history, but it also incorporates economic history, the history of capitalism, social history, cultural history, and gender history (and probably more than that). My particular interest is in the history of hydrocarbons, but I’ve tried to take a wide view of the field and consider scholarship that examines energy history in deeper historical contexts. Several scholars have written books that consider the history of human energy use in deep time. For example, in 1982, Rolf Peter Sieferle started his long view of energy history in The Subterranean Forest: Energy Systems and the Industrial Revolution by considering Paleolithic societies. Alfred Crosby’s Children of the Sun: A History of Humanity’s Unappeasable Appetite for Energy (2006) begins its survey of human energy history with the advent of anthropogenic fire and its use in cooking. Vaclav Smil goes back to so-called “pre-history” at the start of Energy and Civilization: A History (2017) to consider the origins of crop cultivation. In each of these surveys, energy historians track the general trend of growing energy use. While they show some dips in consumption and some regional variation, the story they tell is precisely the one Crosby names in his subtitle: a tale of humanity’s unappeasable appetite for greater and greater quantities of energy. The narrative of energy history in the scholarship is remarkably linear, verging on Malthusian. According to Smil: “Civilization’s advances can be seen as a quest for higher energy use required to produce increased food harvests, to mobilize a greater output and variety of materials, to produce more, and more diverse, goods, to enable higher mobility, and to create access to a virtually unlimited amount of information. These accomplishments have resulted in larger populations organized with greater social complexity into nation-states and supranational collectives, and enjoying a higher quality of life.” [2] Indeed, from a statistical point of view, it’s difficult not to reach the conclusion that humans have proceeded inexorably from one technological innovation to another, finding more ways of wrenching power from the Sun and Earth. The only interruptions along humanity’s path to high-energy civilization were war, famine, economic crisis, and environmental collapse. Canada’s relatively short energy history appears to tell a similar story. As Richard W.
Unger wrote in The Otter~la loutre recently, “Canadians are among the greatest consumers of energy per person in the world.” And the history of energy consumption in Canada since Confederation shows steady growth and sudden acceleration with the advent of mass hydrocarbon consumption between the 1950s and 1970s. Steve Penfold’s analysis of Canadian liquid petroleum use focuses on this period of extraordinary, nearly uninterrupted growth in energy consumption. Only in 1979 did Canadian petroleum consumption momentarily dip in response to an economic recession. “What could have been an energy reckoning…” Penfold writes, “ultimately confirmed the long history of rising demand.” [3] I’ve seen much of what Penfold finds in my own research on the history of oil pipeline development in Canada. Take, for instance, the Interprovincial pipeline system, Canada’s largest oil delivery system. For much of Canada’s “Great Acceleration,” the history of more couldn’t be clearer. This view of energy history as the history of more informs some of the conclusions (and predictions) of energy historians. Crosby is, perhaps, the most optimistic about the potential of technological innovation to resolve what he describes as humanity’s unsustainable use of fossil fuels. In Crosby’s view, “the nuclear reactor waits at our elbow like a superb butler.” [4] For the most part, he is dismissive of energy conservation or radical reductions in energy consumption as alternatives to modern energy systems, which he admits are “new, abnormal, and unsustainable.” [5] Instead, he foresees yet another technological revolution as the pathway forward, carrying on with humanity’s seemingly endless growth in energy use. Energy historians, much like historians of the Anthropocene, have a habit of generalizing humanity in their analysis of environmental change. As I wrote last year in The Otter~la loutre, “To understand the history of Canada’s Anthropocene, we must be able to explain who exactly constitutes the ‘anthropos.’” Energy historians might consider doing the same. The history of human energy use appears to be a story of more when human energy use is considered in an undifferentiated manner. The pace of energy consumption in Canada, for instance, might look different when considering the rich and the poor, settlers and Indigenous people, rural Canadians and urban Canadians. Energy histories around the world also tell different stories beyond the history of more, including histories of low-energy societies and histories of energy decline. Most global energy histories focus on industrialized societies and say little about developing nations and the persistence of low-energy, subsistence economies. If Smil is correct that “higher energy use by itself does not guarantee anything except greater environmental burdens,” then future decisions about energy use should probably consider lower energy options. [6] Transitioning away from burning fossil fuels by using nuclear power may alleviate the immediate existential crisis of global warming, but confronting the environmental implications of high-energy societies may be the bigger challenge. To address that challenge, we may need to look back at histories of less. Sean Kheraj is the director of the Network in Canadian History and Environment. He’s an associate professor in the Department of History at York University. His research and teaching focus on environmental and Canadian history.
He is also the host and producer of Nature’s Past, NiCHE’s audio podcast series, and he blogs at http://seankheraj.com. [1] I’m borrowing from Steve Penfold’s pointed summary of the history of gasoline consumption in Canada: “Indeed, at one level of approximation, you could reduce the entire history of Canadian gasoline to a single keyword: more.” See Steve Penfold, “Petroleum Liquids” in Powering Up Canada: A History of Power, Fuel, and Energy from 1600, ed. R. W. Sandwell (Montreal: McGill-Queen’s University Press, 2016), 277.
[2] Vaclav Smil, Energy and Civilization: A History (Cambridge: MIT Press, 2017), 385. [3] Penfold, “Petroleum Liquids,” 278. [4] Alfred W. Crosby, Children of the Sun: A History of Humanity’s Unappeasable Appetite for Energy (New York: W.W. Norton, 2006), 126. [5] Ibid., 164. [6] Smil, Energy and Civilization, 439. Prof. Toshihiro Higuchi, Georgetown University. This is the fifth post in a collaborative series titled “Environmental Historians Debate: Can Nuclear Power Solve Climate Change?” hosted by the Network in Canadian History & Environment, the Climate History Network, and ActiveHistory.ca. Nuclear power is back, riding on the growing fears of catastrophic climate change that lurks around the corner. The looming climate crisis has rekindled heated debate over the advantages and disadvantages of nuclear power. However, advocates and opponents alike tend to overlook or downplay a unique risk that sets atomic energy apart from all other energy sources: proliferation of nuclear weapons. Despite the lasting tragedy of the 2011 Fukushima disaster, the elusive goal of nuclear safety, and the stalled progress in radioactive waste disposal, nuclear power has once again captivated the world as a low-carbon energy solution. According to the latest IPCC report, released in October 2018, most of the 89 available pathways to limiting warming to 1.5°C above pre-industrial levels see a larger role for nuclear power in the future. The median values of global nuclear electricity generation across these scenarios increase from 10.84 to 22.64 exajoules by 2050. The global nuclear industry, after many setbacks in selling its products, has jumped on the renewed interest of the climate policy community in atomic energy. The World Nuclear Association has recently launched an initiative called the Harmony Programme, which has established an ambitious goal of 25% of global electricity supplied by nuclear in 2050. Even some critics agree that nuclear power should be part of a future clean energy mix. The Union of Concerned Scientists, a U.S.-based science advocacy group and proponent of stronger nuclear regulations, recently published an op-ed urging the United States to “[k]eep safely operating nuclear plants running until they can be replaced by other low-carbon technologies.” But the justified focus on energy production vis-à-vis climate change obscures the debate that until recently had defined the nuclear issue: weapons proliferation. It is often said that a global nuclear regulatory regime, grounded on the 1968 Nuclear Non-Proliferation Treaty (NPT) and the International Atomic Energy Agency’s (IAEA) safeguards system, has proven successful as a check against the diversion of fissile materials from peaceful to military uses. There is indeed a good reason for this optimism. Since 1968, only three countries (India, Pakistan, North Korea) have publicly declared possession of nuclear weapons – a far cry from the “15 or 20 or 25 nations” that President Kennedy famously predicted would go nuclear by the 1970s. Contrary to the impressions created by the biological metaphor, as political scientist Benoit Pelopidas has pointed out, the “proliferation” of nuclear weapons is also neither inevitable nor irreversible. South Africa, a non-NPT country which had secretly developed nuclear weapons by the 1980s, voluntarily dismantled its arsenal following the end of Apartheid.
Belarus, Kazakhstan, and Ukraine, which inherited nuclear warheads following the collapse of the Soviet Union in 1991, also agreed to transfer them to Russia. Moreover, despite all the talk about the threat of nuclear terrorism, experts note a multitude of obstacles, both technical and political, to non-state actors stealing or assembling workable atomic devices.[1] Although many countries and terrorists are known to have harbored nuclear ambitions at one point or another in the past – and some undoubtedly still do so today – we should not exaggerate the possibility of nuclear weapons acquisition by new countries and violent non-state actors and the potential threat that it might pose to international security. The real dangers of nuclear proliferation, however, lie elsewhere. The NPT is supposed to be a bargain between the nuclear haves and have-nots. The non-nuclear countries agreed not to acquire or manufacture nuclear weapons in exchange for the pledge made by all parties, including the five nuclear-weapon states designated by the NPT (United States, Soviet Union/Russia, United Kingdom, France, and China), to “pursue negotiations in good faith on effective measures relating to cessation of the nuclear arms race at an early date and to nuclear disarmament.” The nuclear-armed countries, however, have consistently failed to keep their end of the deal.[2] Meanwhile, the United States has repeatedly used or threatened to use military force to disarm hostile countries suspected to have clandestine nuclear weapons programs. Iraq is the most famous example of this, but as discussed below, U.S. officials also seriously considered preemptive attacks against nuclear facilities in China and North Korea. The United States is not alone in its penchant for unilateral military action. Israel, a U.S. ally widely believed to possess nuclear weapons, has also carried out a number of surprise airstrikes that destroyed an Iraqi nuclear reactor in 1981 and a suspected Syrian installation in 2007.[3] The alleged “success” of the repeated use of force and its threat to stem the tide of nuclear proliferation, however, comes at a high cost. Such action may not only deepen the insecurity of the threatened nation and make it all the more determined to develop its nuclear capabilities as a deterrent, but also entail a serious risk of unintended escalation to a large-scale conflict. Anyone who tries to weigh the value of nuclear power in coping with the climate crisis thus must take stock of the history of militarized counter-proliferation policy that reflects and reinforces what historian Shane J. Maddock has called “nuclear apartheid,” a hierarchy of nations grounded on power inequality between the nuclear haves and have-nots.[4] In October 1964, the People’s Republic of China successfully tested an atomic bomb, becoming the fifth country to demonstrate its nuclear weapons capabilities. The United States eventually acquiesced in China’s nuclear status by the time it signed the NPT in 1968, which formally defined a nuclear-weapon state as a country that had manufactured and detonated a nuclear device prior to January 1, 1967. Washington’s decision to tolerate a nuclear China, however, did not come without resistance. In fact, as historian Francis J. Gavin has noted, there is a striking parallelism between the U.S.
perception of China during the 1960s and that of a “rogue state” today: China had already clashed with the United States during the Korean War, twice shelled the outlying islands of Taiwan, and invaded India over a disputed border; it strongly disputed the Soviet Union’s leadership in the Communist world and aggressively supported revolutionary forces around the world; and it consolidated one-party rule and embarked on a series of disastrous political, economic, and social campaigns, most notably the Great Leap Forward and the Cultural Revolution.[5] Operating from the Cold War mindset, and with little information shedding light on the complexity of China’s foreign and domestic policies, senior U.S. officials feared that China’s nuclear weapons program would pose a serious threat to the stability of East Asia and the international effort to prevent the further spread of nuclear weapons around the world.[6] It is important to note that not all U.S. officials held such a grim view about China’s nuclear ambition and its consequences. Some believed that a nuclear-armed China would act rather cautiously, and Presidents John F. Kennedy and Lyndon B. Johnson both tried to induce China by diplomatic means to abandon its nuclear program. As historians William Burr and Jeffrey T. Richelson have demonstrated, however, the Kennedy and Johnson administrations also developed contingency plans to disarm China by force. In a memo written in April 1963, the Joint Chiefs of Staff discussed a variety of military options, ranging from covert operations to the use of a tactical nuclear weapon, to coerce China into signing a test ban treaty.[7] While the military was skeptical about the effectiveness of unilateral action and also cautious about the risk of retaliation and escalation, Kennedy and some of his senior advisers remained keen on military and covert operations. For instance, the President showed his interest in enlisting the Republic of China in Taiwan as a proxy to launch a commando raid against Chinese nuclear installations.[8] William Foster, director of the Arms Control and Disarmament Agency, later recalled that Kennedy had been eager to consider the possibility of an airstrike in coordination with, or with tacit approval of, the Soviet Union.[9] The idea of an air raid resurfaced in September 1964, on the eve of the Chinese test. Although Johnson and his advisers ultimately decided against the proposal, all agreed that, “in case of military hostilities,” the United States should consider “the possibility of an appropriate military action against Chinese nuclear facilities.”[10] Despite all the talk about the use of force, the U.S. government ultimately refrained from taking such drastic action. The Soviet Union refused to discuss the possibility of joint military intervention, and the political costs and military risks of an unprovoked attack were too high. The failure to stop China’s nuclear weapons program, Gavin has pointed out, precipitated a major shift in U.S. nuclear policy toward creating a global nonproliferation regime with the NPT as its keystone.[11] However, these “proliferation lessons from the 1960s,” as Gavin has called them, did not change the fundamental fact that the United States was willing to contemplate military action, to be carried out unilaterally if necessary, to prevent hostile countries from acquiring nuclear weapons. The NPT became a handy justification for such measures. This was abundantly clear when North Korea triggered another nuclear crisis thirty years later.
In March 1993, North Korea startled the world by announcing its decision to withdraw from the NPT. At issue was the IAEA’s demand for special inspections at nuclear facilities in Yongbyon to account for the amount of plutonium generated in an earlier uninspected refueling operation. The tension briefly subsided when, after bilateral talks with the United States, Pyongyang suspended the process of pulling out of the NPT and agreed to allow inspections at a number of installations. In March 1994, however, North Korea suddenly reversed its attitude, blocking IAEA inspectors from conducting activities necessary to complete their mission. The United States responded by declaring its intention to ask the United Nations Security Council to impose economic sanctions against North Korea. As the confrontation between the United States and North Korea escalated, President Bill Clinton decided to take all necessary measures to coerce Pyongyang into full compliance with the IAEA safeguards. In his memoirs, Clinton wrote that “I was determined to prevent North Korea from developing a nuclear arsenal, even at the risk of war.”[12] To leave no room for misunderstanding about his resolve, Clinton let his senior advisers and military commanders openly discuss contingency plans for military action. On February 6, The New York Times broke the news of updated U.S. defense plans for South Korea in the event of a North Korean attack, describing a newly added option for a counteroffensive to seize Pyongyang and overthrow the regime of Kim Il Sung.[13] Meanwhile, Secretary of Defense William Perry talked tough, telling the press that “we would not rule out a preemptive military strike.”[14] The talk about the use of force against North Korea was not an idle threat. In his memoirs, Perry has described contingency planning for military action. In May 1994, when North Korea began to remove the spent fuel rods containing plutonium from its reactor, the Defense Secretary ordered John Shalikashvili (chairman of the Joint Chiefs of Staff) and Gary Luck (commander of the U.S. military forces in South Korea) to prepare a course of action for “a ‘surgical’ strike by cruise missiles on the reprocessing facility at Yongbyon.”[15] Three former U.S. officials, Joel S. Wit, Daniel B. Poneman, and Robert L. Gallucci, also confirmed that the strike plan was discussed at the highest level of the Clinton administration. On May 19, Perry, Shalikashvili, and Luck briefed Clinton and his aides on the proposal for an air raid against the Yongbyon facilities, asserting that it would “set the North Korean nuclear program back by years.” Perry, however, reportedly stressed the “downside risk,” namely that “this action would certainly spark a violent reaction, perhaps even a general war.”[16] Clinton recalled that a “sobering estimate of the staggering losses both sides would suffer if war broke out” gave him pause.[17] As Perry has noted, the military option was still “‘on the table’, but very far back on the table.”[18] The self-restraint of the Clinton administration and its commitment to a diplomatic solution have earned praise from many scholars and pundits – in sharp contrast to George W. Bush’s aggressive unilateralism. But the fact remains that Clinton and his aides considered the threat of preventive military action as permissible, even essential, to pressure North Korea into refraining from any suspicious nuclear activities.
And their willingness to go to the brink of actual conflict created a tense policy environment that greatly diminished the room for quiet diplomacy and possible compromise while drastically raising the risk of accidents and miscalculations. In this sense, the “peaceful” conclusion of the first North Korean nuclear crisis was a Pyrrhic one, reinforcing the belief widely held by the U.S. policy community that the United States must be prepared to use its military force unilaterally to uphold the global non-proliferation regime. It is thus no coincidence that, even after the disastrous outcomes of the Iraq War fought in the name of nuclear nonproliferation, the U.S. government continues to play the dangerous game of brinkmanship with hostile powers suspected of pursuing the clandestine development of nuclear weapons. What, then, does the history of U.S. counter-proliferation policy mean for the future use of nuclear power to combat climate change? An answer, I believe, lies in an accelerating shift in nuclear geography. The New Policies Scenario of the International Energy Agency’s World Energy Outlook 2018, a global energy trend forecast based on policies and targets announced by governments, shows that the demand for nuclear power in 2017-40 will decrease in advanced economies by 60 Mtoe (millions of tons of oil equivalent), whereas it will increase in developing economies by 344 Mtoe. Of the approximately 30 countries that are currently considering, planning, or starting nuclear power programs, many are post-colonial and post-socialist countries located in areas, including Central Asia, Eastern Europe, the Middle East, and South and Southeast Asia, where the United States is competing with other major and regional powers for greater influence. Added to this geopolitical layer is the nuclear supply game. While many Western conglomerates have recently decided to exit from nuclear exports due to swelling construction costs, Russian and Chinese state-owned companies have aggressively sold nuclear power plants to emerging countries, a move backed by their governments as part of their global strategies. Although Russia and China have generally cooperated with the United States in controlling nuclear exports, the recent U.S. withdrawal from the Iran nuclear deal has pitted Washington against Moscow and Beijing over their continued negotiations with Iran for nuclear cooperation. Given the growing tension between the United States on the one hand and Russia and China on the other, the expansion of civilian nuclear programs in key strategic regions is likely to be fraught with serious risks of an international crisis and even an armed conflict. The fundamental solution to the nuclear dilemma in the changing climate is simple: carry out the pledge made by all parties to the NPT, that is, to “pursue negotiations in good faith on effective measures relating to cessation of the nuclear arms race at an early date and to nuclear disarmament, and on a treaty on general and complete disarmament under strict and effective international control” (Article VI). A breakthrough toward this goal came in July 2017 when a United Nations conference voted to adopt the first legally binding international agreement prohibiting nuclear weapons. All of the nuclear weapon states and most of their allies, however, refused to participate in the treaty negotiations.
Recently, Christopher Ashley Ford, assistant secretary of state for nonproliferation, has called the treaty a “well-intended mistake,” insisting that a “better way” was to work within the NPT framework while taking steps to improve “the actual geopolitical conditions that countries face in the world.” If the IPCC is correct in its claim that we have only a little more than a decade to stop potentially catastrophic climate change, it is unlikely that the “pragmatic, conditions-focused program” described by Ford will significantly reduce risks of nuclear proliferation and militarized counter-proliferation in time. If so, then we must realize that the promotion of civilian nuclear power in a world of nuclear apartheid – a world in which the United States and its allies are not hesitant to use force to disarm and topple a hostile regime with nuclear ambition – may have no less catastrophic consequences for human society than climate change. Toshihiro Higuchi is an assistant professor in the Edmund A. Walsh School of Foreign Service at Georgetown University. He is a historian of U.S. foreign relations in the 19th and 20th centuries. His research interests center on science and politics in managing the trans-border and global environment. [1] John Mueller, Atomic Obsession: Nuclear Alarmism from Hiroshima to Al-Qaeda (Oxford; New York: Oxford University Press, 2010), 161-179.
[2] Shane J. Maddock, Nuclear Apartheid: The Quest for American Atomic Supremacy from World War II to the Present (Chapel Hill: University of North Carolina Press, 2010), 1-2. [3] Dan Reiter, “Preventive Attacks against Nuclear Programs and the ‘Success’ at Osiraq,” Nonproliferation Review 12, no. 2 (2005): 355-371; Leonard S. Spector and Avner Cohen, “Israel’s Airstrike on Syria’s Reactor: Implications for the Nonproliferation Regime,” Arms Control Today 38, no. 6 (2008): 15-21. [4] Maddock, Nuclear Apartheid, 1-2. [5] Francis J. Gavin, Nuclear Statecraft: History and Strategy in America’s Atomic Age (Ithaca, NY: Cornell University Press, 2012), 75-76. [6] William Burr and Jeffrey T. Richelson, “Whether to ‘Strangle the Baby in the Cradle’: The United States and the Chinese Nuclear Program, 1960-64,” International Security 25, no. 3 (2000/01): 55, 61-62. Also see Noam Kochavi, A Conflict Perpetuated: China Policy during the Kennedy Years (Westport, CT: Praeger, 2002). [7] Ibid., 68-69. [8] Ibid., 73. [9] Ibid., 54. [10] Document 49, Memorandum for the Record, September 15, 1964, in Foreign Relations of the United States, 1964-1968, vol. 30 (Washington: U.S.G.P.O., 1998). [11] Gavin, Nuclear Statecraft, 75-103. [12] Bill Clinton, My Life (New York: Vintage, 2005), 591. [13] Michael R. Gordon, “North Korea’s Huge Military Spurs New Strategy in South,” New York Times, February 6, 1994, 1. [14] Clinton, My Life, 591. [15] William J. Perry, My Journey at the Nuclear Brink (Stanford, CA: Stanford University Press, 2015), 106. [16] Joel S. Wit, Daniel B. Poneman, and Robert L. Gallucci, Going Critical: The First North Korean Nuclear Crisis (Washington: Brookings Institution Press, 2003), 180. [17] Clinton, My Life, 603. [18] Perry, My Journey at the Nuclear Brink, 106. Dr. Robynne Mellor. This is the third post in a collaborative series titled “Environmental Historians Debate: Can Nuclear Power Solve Climate Change?” hosted by the Network in Canadian History & Environment, the Climate History Network, and ActiveHistory.ca. Shortly before uranium miner Gus Frobel died of lung cancer in 1978, he said, “This is reality. If we want energy, coal or uranium, lives will be lost. And I think society wants energy and they will find men willing to go into coal or uranium.”[1] Frobel understood that economists and governments had crunched the numbers. They had calculated how many miners died comparatively in coal and uranium production to produce a given amount of energy. They had rationally worked out that giving up Frobel’s life was worth it. I have come across these tables in archives. They lay out in columns the number of deaths to expect per megawatt-year of energy produced. They weigh the ratios of deaths in uranium mines to those in coal mines. They coolly walk through their methodology in making these conclusions. These numbers will show you that fewer people died in uranium mines to produce a certain amount of energy. But the numbers do not include the pages and pages I have read of people remembering spouses, parents, siblings, children who died in their 30s, 40s, 50s, and so on. The numbers do not include details of these miners’ hobbies or snippets of their poetry; they don’t reveal the particulars of miners’ slow and painful wasting away. Miners are much easier to read about as death statistics. The erasure of these people trickles into debates about nuclear energy today.
Any argument that highlights the dangers of coal mining but ignores entirely the plight of uranium miners is based on this reasoning. Rationalizations that say coal is more risky are based on the reduction of lives to ratios. If we are going to make these arguments, we must first acknowledge entirely what we are doing. We must be okay with what Gus Frobel said and meant: that someone is going to have to assume the risk of energy production and we are just choosing whom. We must realize that it is no accident that these Cold War calculations permeate our discourse today, and what that means moving forward. Promoters of nuclear energy have always tapped into fears about the environment in order to get us to stop worrying and learn to love the power plant. The awesome power of the atom announced itself to the world in a double flash of death and destruction when the United States dropped nuclear bombs on Hiroshima and Nagasaki in August 1945. Following the end of World War II, growing tensions between the United States and the Soviet Union and the consequent Cold War helped spur a proliferation of nuclear weapons production. As nuclear technology became more important and sought after, governments around the world fought against nuclear energy’s devastating first impressions, which were difficult to dislodge from the minds of the public. From the earliest days, in order to combat the atom’s fearsome reputation and put a more positive spin on things, policymakers began pushing its potential peaceful applications. Nuclear technology and the environment were intertwined in many complex and mutually reinforcing ways. From as early as the 1940s, as historian Angela Creager has shown, the US Atomic Energy Commission used the potential ecological and biological applications of radioisotopes as proof of the atom’s promising, non-military prospects. By the 1950s, many hailed nuclear power as a way to escape resource constraints, underlining the comparatively small amount of uranium needed to produce the same amount of energy as coal. Using uranium was a way to conserve oil and coal for longer. In the 1960s, as the popular environmental movement grew, nuclear boosters appealed to the public’s concern for the planet by emphasizing the clean-burning qualities of nuclear energy. Environmentalism spread around the world, with environmental protection slowly being enshrined in law in several different countries. Environmental concern and protection also became an important part of the Cold War battle for hearts and minds. Nuclear advocates successfully appealed to environmentalist sentiments by avoiding certain problems, such as the intractable waste that the nuclear cycle produced, and emphasizing others, namely, the way it did not pollute the air. The main arguments of Cold War-era nuclear champions live on to this day. For many pro-nuclear environmentalists, who found these arguments appealing, the reasons to support nuclear energy were and continue to be: less uranium is needed than coal to produce the same amount of energy, nuclear energy is clean burning, radiation is “natural” and not something to be feared, and using nuclear energy will give us time to figure out different solutions to the energy crisis, which was once understood as a shortage of fossil fuels and now leans more towards global warming. In broad strokes, then, these arguments are a Cold War holdover, and so are the anachronistic blind spots that accompany them.
They portray nuclear power production as a single snapshot of a highly complex cycle. Nuclear is framed as “clean burning” for a reason; the period when it is burning is the only point when it can be considered clean. This reasoning made more sense when first promulgated because there was a hubris that accompanied nuclear technology, and part of this hubris was to assume that all of the issues that arose due to nuclear technology could and would be solved. Though that confidence is long gone in general, it still lurks as an assumption that undergirds the argument for nuclear energy. One of the biggest problems that we were once sure we could solve is nuclear waste disposal. This problem has not been solved. It becomes more and more complex all the time, and the complexities tied up in the problem continue to multiply. Nuclear waste storage is still a stopgap measure, and most waste is still held on or near the surface in various locations, usually near where it is produced. The best long-term solution is a deep geological repository, but there are no such storage facilities for high-level radioactive waste yet. Several countries that have tried to build permanent repositories have faced both political and geological obstacles, such as the Yucca Mountain project in the United States, which the government defunded in 2012. Finland’s Onkalo repository is the most promising site. Many people who pay attention to these issues commend the Finnish government for successfully communicating with, and receiving consent from, the local community. But questions remain about why and how the people alive today can make decisions for people who will live on that land for the next 100,000 years. This timescale opens up various other questions about how to communicate risk through the millennia. Either way, we will not know if Onkalo is ultimately successful for a really long time, while the kitty litter accident at the Waste Isolation Pilot Plant in New Mexico, USA, where radioactive waste blew up in 2014, hints at how easily things can go wrong and defy careful models of risk. Promoters continue to use language that clouds this issue. Words such as “storage” and “disposal” obfuscate the inadequacies tied up in these so-called solutions. The truth is, disposal amounts to trying to keep waste from migrating by putting it somewhere and then trying to model the movements of the planet thousands of years into the future to make sure it stays where we put it. It is a catch-22. By ignoring the disposal problem, we kick the same can down the road that was kicked to us. By developing a disposal system, we just kick it really, really far into the future. Either way, there is an antiquated optimism that still persists in the belief that, one way or another, we will work it out or that we have successfully planned for every contingency with our current solutions. Even if they do so inadequately, advocates of nuclear power often do acknowledge the back-end of the nuclear cycle. They usually only do so to dismiss it, but at least it is addressed. By contrast, they entirely ignore the front-end of the cycle. This tendency is particularly strange because when uranium is judged against fossil fuels, the ways that coal and oil are extracted enter the conversation, while uranium, in contrast, is rarely considered in such terms. We think of coal and oil as things that come from the earth, but uranium is also mined, and its processing chain is just as complex as those of the fuels we seek to replace with it.
Discussions of nuclear energy hardly ever mention uranium mining, possibly because uranium mining increasingly occurs in marginalized landscapes that are out of sight and out of mind (northern Saskatchewan in Canada and Kazakhstan are currently the biggest producers). But even for those who do pay attention to uranium mining, the problems associated with it are officially understood as something we have “figured out.” The prevailing narrative is that, yes, many uranium miners died from lung cancer linked to their work in uranium mines, and yes, there was a lot of waste produced and then inadequately disposed of due to the pressures and expediencies of the Cold War nuclear arms race. But when officials acknowledged these problems, they implemented regulations and fixed them. It follows that, because there is no longer a nuclear arms race, and because health and environmental authorities understand and accept the risks associated with mining activities, they have appropriately addressed and mitigated the problems linked to uranium production. Moreover, nuclear power generation, because it is separate from the arms race and the nefarious human radiation experiments that accompanied it, is safer and better for miners and communities that surround mines. Some aspects of this narrative are true. Uranium miners around the world did labor with few protections through at least the late 1960s, after which conditions improved moderately in some places. Several governments introduced and standardized maximum exposure levels for radon progeny (the decay products of radon that cause lung cancer among miners). More mines had ventilation, monitoring increased, and many places banned miners from smoking underground. By the 1970s and 1980s, many countries considered the health problem solved. The issue with this portrayal is that the effectiveness of the introduction of these regulations is not very clear. Allowing a few years for the implementation of regulations, most countries did not have mines at regulated exposure levels until at least the mid-1970s. If we then allow for at least a fifteen-year latency period of lung cancer—which is the accepted minimum even with very high exposures—then lung cancer would not begin to show until, at the very least, around the late 1980s or early 1990s. By this period, however, the uranium-mining industry was collapsing. The Three Mile Island accident in 1979, the Chernobyl accident in 1986, and the end of the Cold War arms race meant that plans for nuclear energy stalled and the demand for uranium plummeted. The uranium that did continue to be produced came from new mining regions and new cohorts of workers, or it affected people and places that the public and media ignored, or technology shifted and so fewer people faced the risks of underground uranium mining. There is little information about how and whether the risks miners faced changed. There is also a dearth of information about how these post-regulation miners compare to their pre-regulation counterparts. One preliminary examination of Canadian uranium miners, however, shows that miners who began work after 1970 had an increased risk of mortality from lung cancer similar to that of miners who began work in earlier decades. This suggests either that radon progeny reduction was ineffective and radon progeny levels in mines were misreported, or that something about the health risks in mines is not yet fully understood.
There is another relatively well-known narrative about uranium mining that some commenters point to as something we have figured out and corrected. Due to the extremely effective activism of the Navajo Nation, beginning in the 1970s and continuing through to the present, many people are aware of the hardships Navajo uranium miners faced and, to a lesser degree, the continued legacy of abandoned mines and tailings piles with which they have to contend. High-profile advocates for the Navajo, such as former secretary of the interior Stewart Udall, as well as several journalistic and scholarly books on Navajos and uranium mining, have added to this awareness. Few people realize when pointing to the Navajo case that there is still a lot of confusion surrounding the long-term effects of uranium mining on Navajo land. It is an ongoing problem with unsatisfactory answers. Moreover, even though Navajo activists were adept at attracting attention to the problems they faced, many other uranium-mining communities cannot, do not want to, or have not been able to garner the same attention. Uranium mining happened and continues to happen around the world, even though the health risks are poorly understood. It is changing human bodies and landscapes to this day and affecting thousands of miners and communities. Those who work in mines are still making the trade-off between the employment the mine offers on the one hand, and the higher risk of lung cancer on the other. The environmental effects of uranium mining also are poorly understood and inadequately managed with a view to the long-term. When mines are in operation, the waste from uranium mills, called tailings, is usually stored in wet ponds or dry piles. Those who operate uranium mills try to keep these tailings from moving, and there are often government authorities that regulate these efforts, but tailings still seep into water, spread into soil, and migrate through food chains. These problems relate to mines and mills in operation, but there are also several problems that companies and governments face with regard to mines and mills that are no longer in operation. The production of uranium has led to landscapes with numerous abandoned and neglected mines, as well as millions of tons of radioactive and toxic tailings. There are no good numbers for worldwide uranium tailings, but the International Atomic Energy Agency has estimated that the United States alone has produced 220 million tons of mill tailings and 220 million tons of uranium mine wastes. Waste from uranium production is managed in similar ways around the world. Using the same euphemistic language employed for nuclear waste coming out of the back-end of the nuclear cycle, tailings from uranium mills are often “disposed.” What disposal usually means is gathering tailings in one area, creating some kind of barrier to prevent erosion—this barrier can be vegetation, water, or rock—and then monitoring the tailings indefinitely to ensure they do not move. The question that follows is whether or not these tailings are harmful, and the truly unsatisfactory answer is that we do not know. Studies of communities surrounding uranium tailings that consider how tailings affect community health are scarce, while those that do exist are conflicting, inconclusive, and often problematic.
While some studies, with a particular focus on cancer and death, argue that there are no increased illnesses linked to living in former uranium-mining areas, others have connected wastes from uranium production to various ailments, including kidney disease, hypertension, diabetes, and compromised immune system function. Now, half of all uranium production around the world uses in situ leaching or in situ recovery to extract uranium. Basically, uranium companies inject an oxidizing agent into an ore body, dissolve the uranium, and then pump the solution out and mill it without first having to mine it. The official line of thinking is that there are negligible environmental impacts stemming from this form of extraction. It certainly reduces risks for miners, but it is unlikely that it leaves the environment untouched. The environmentalist argument for nuclear energy, particularly the clean-burning component, is very appealing in a time when our biggest concern is climate change. Still, nuclear power is a band-aid technofix with many unknowns. The discussion surrounding nuclear energy has never fully grappled with the entire scope of the nuclear cycle, nor has it addressed the unique aspects of producing energy from metals, which have no parallel in fossil fuels. Making an argument about nuclear energy means examining all its risks in comparison with fossil fuels, and then coming to terms with the wealth of unknowns. It also means remembering and keeping in mind the bodies and landscapes making this option possible. To be a nuclear power advocate, especially as an environmentalist, one must also be an advocate for the safety of all nuclear workers. The problems uranium miners and uranium-mining communities faced were never fully resolved, and they are not fully understood. To promote nuclear power means paying attention to the people and places that produce uranium and fighting to make sure they receive the protections they deserve for helping us carve our way out of this current problem. Robynne Mellor received her PhD in environmental history from Georgetown University, and she studies the intersection of the environment and the Cold War. Her research focuses on the environmental history of uranium mining in the United States, Canada, and the Soviet Union. She tweets at @RobynneMellor. [1] Gus Frobel, quoted in Lloyd Tataryn, Dying for a Living (Deneau and Greenberg Publishers, 1979), 100.
Only Dramatic Reductions in Energy Use Will Save The World From Climate Catastrophe: A Prophecy (2/27/2019)
Prof. Andrew Watson, University of Saskatchewan. This is the third post in a collaborative series titled “Environmental Historians Debate: Can Nuclear Power Solve Climate Change?” hosted by the Network in Canadian History & Environment, the Climate History Network, and ActiveHistory.ca. There is no longer any debate. Humanity sits at the precipice of catastrophic climate change caused by anthropogenic greenhouse gas (GHG) emissions. Recent reports from the Intergovernmental Panel on Climate Change (IPCC)[1] and the U.S. Global Change Research Program (USGCRP)[2] provide clear assessments: to limit global warming to 1.5ºC above historic levels, thereby avoiding the most harmful consequences, governments, communities, and individuals around the world must take immediate steps to decarbonize their societies and economies. Change is coming regardless of how we proceed. Doing nothing guarantees large-scale resource conflicts, climate refugee migrations from the global south to the global north, and mass starvation. Dealing with the problem in the future will be far more difficult, not to mention expensive, than making important changes immediately. The only question is what changes are necessary to address the scale of the problem facing humanity. Do we pursue strategies that allow us to maintain our current standard of living, consuming comparable amounts of (zero-carbon) energy? Or do we accept fundamental changes to humanity’s relationship to energy? In his new book, The Wizard and the Prophet: Two Remarkable Scientists and Their Conflicting Visions of the Future of Our Planet, Charles C. Mann uses the life, work, and ideologies of Norman Borlaug (the Wizard) and William Vogt (the Prophet) to offer two typologies of twentieth-century environmental science and thought. Borlaug represents the school of thought that believed technology could solve all of humanity’s environmental problems, which Mann refers to as “techno-optimism.” Vogt, by contrast, represents a fundamentally different attitude that saw only a drastic reduction in consumption as the key to solving environmental problems, which Mann (borrowing from demographer Betsy Hartmann) refers to as “apocalyptic environmentalism.”[3] In the industrialized countries of the world, the techno-optimist approach enjoys the greatest support. Amongst those who think “technology will save us,” decarbonizing the economy means replacing fossil fuel energy with “clean” energy (i.e. energy that does not emit GHGs). Hydropower has nearly reached its global potential and simply cannot replace fossil fuel energy. Solar, wind, and, to some extent, geothermal are rapidly growing technological options for replacing fossil fuel energy. And as this series reveals, some debate exists over whether nuclear can ever play a meaningful role in a twenty-first century energy transition. The quest for new clean energy pathways aims to rid the developed world of the blame for causing climate change without the need to fundamentally change the way of life responsible for climate change. In short, those advocating for clean energy hope to cleanse their moral culpability as much as the planet’s atmosphere. This is the crux of the climate change crisis and the challenge of how to respond to it. It is not a technical problem. It is a moral and ethical problem – the biggest the world has ever faced. 
The USGCRP’s Fourth National Climate Assessment warns that the risks from climate change “are often highest for those that are already vulnerable, including low-income communities, some communities of color, children, and the elderly.”[4] Similarly, the IPCC’s Global Warming of 1.5ºC report insists that “the worst impacts tend to fall on those least responsible for the problem, within states, between states, and between generations.”[5] Furthermore, the USGCRP points out, “Marginalized populations may also be affected disproportionately by actions to address the underlying causes and impacts of climate change, if they are not implemented under policies that consider existing inequalities.” Indeed, the IPCC reports, “the worst-affected states, groups and individuals are not always well-represented” in the process of developing climate change strategies. The climate crisis has always been about the vulnerabilities created by energy inequalities. Decarbonizing the industrialized and industrializing parts of the world has the potential to avoid making things any worse for the most marginalized segments of the global population, but it wouldn’t necessarily make anything better for them either. At the same time, decarbonization strategies imagine an energy future in which people, communities, and countries with a high standard of living are under no obligation to make any significant sacrifices to their large energy footprints. Over the last thirty years, industrialized countries, such as Germany, the United States, and Canada, have consistently consumed considerably more energy per capita than non-industrialized or industrializing countries (Figure 1). In 2016, industrialized countries in North America and Western Europe consumed three to four times as much energy per capita as the global average, while non-industrialized countries consumed considerably less than the average. Most of the research that has modelled 1.5ºC-consistent energy pathways for the twenty-first century assumes that decarbonization means continuing to use the same amount of, or only slightly less, energy (Figure 2).[6] Most of these models project that solar and wind energy will comprise a major share of the energy budget by 2050 (nuclear, it should be noted, will not). Curiously, the models also project a major role for biofuels as well. Most alarmingly, however, most models assume major use of carbon capture and storage technology, both to divert emissions from biofuels and to actively pull carbon out of the atmosphere (known as carbon dioxide removal, or negative emissions). The important point here, however, is not the technological composition of these energy pathways, but the continuity of energy consumption over the course of the twenty-first century. In case it is not already clear, I do not think technology will save us. Solar and wind energy technology has the potential to provide an abundance of energy, but it won’t be enough to replace the amount of fossil fuel energy we currently consume, and it certainly won’t happen quickly enough to avoid warming greater than 1.5ºC. Biofuels entail a land cost that in many cases involves competition with agriculture and places potentially unbearable pressure on fresh water resources. Carbon capture and storage assumes that pumping enormous amounts of carbon underground won’t have unintended and unacceptable consequences. Nuclear energy might provide a share of the global energy budget, but according to many models, it will always be a relatively small share. 
Techno-optimism is a desperate hope that the problem can be solved without fundamental changes to high-energy standards of living. The current 1.5ºC-consistent energy pathways include no meaningful changes in the amount of overall energy consumed in industrialized and industrializing countries. The studies that do incorporate “lifestyle changes” into their models feature efficiencies, such as taking shorter showers, adjusting indoor air temperature, or reducing usage of luxury appliances (e.g. clothes dryers), none of which presents a fundamental challenge to a western standard of living.[7] Decarbonization models that replace fossil fuel energy with clean energy reflect a desire to avoid addressing the role of energy inequities in the climate change crisis. Climate change is a problem of global inequality, not just carbon emissions. Those of us living in the developed and developing countries of the world would like to pretend that the problem can be solved with technology, and that we would not then need to change our lives all that much. In a decarbonized society, the wizards tell us, our economy could continue to operate with clean energy. But it can’t. Any ideas to the contrary are simply excuses for perpetuating a world of incredible energy inequality. We need to heed the prophets and use dramatically less energy. We need to accept extreme changes to our economy, our standard of living, and our culture. Andrew Watson is an assistant professor of environmental history at the University of Saskatchewan. [1] IPCC, 2018: Global warming of 1.5°C. An IPCC Special Report on the impacts of global warming of 1.5°C above pre-industrial levels and related global greenhouse gas emission pathways, in the context of strengthening the global response to the threat of climate change, sustainable development, and efforts to eradicate poverty [V. Masson-Delmotte, P. Zhai, H. O. Pörtner, D. Roberts, J. Skea, P. R. Shukla, A. Pirani, W. Moufouma-Okia, C. Péan, R. Pidcock, S. Connors, J. B. R. Matthews, Y. Chen, X. Zhou, M. I. Gomis, E. Lonnoy, T. Maycock, M. Tignor, T. Waterfield (eds.)]. In Press.
[2] USGCRP, 2018: Impacts, Risks, and Adaptation in the United States: Fourth National Climate Assessment, Volume II [Reidmiller, D.R., C.W. Avery, D.R. Easterling, K.E. Kunkel, K.L.M. Lewis, T.K. Maycock, and B.C. Stewart (eds.)]. U.S. Global Change Research Program, Washington, DC, USA. doi: 10.7930/NCA4.2018. [3] Charles C. Mann, The Wizard and the Prophet: Two Remarkable Scientists and Their Conflicting Visions of the Future of Our Planet (Picador, 2018), 5-6. [4] USGCRP, Fourth National Climate Assessment, Volume II, Chapter 1: Overview. [5] IPCC, Global warming of 1.5°C, Chapter 1. [6] IPCC, Global warming of 1.5°C; Detlef P. van Vuuren, et al., “Alternative pathways to the 1.5°C target reduce the need for negative emission technologies,” Nature Climate Change, Vol. 8 (May 2018): 391-397; Joeri Rogelj, et al., “Scenarios towards limiting global mean temperature increase below 1.5°C,” Nature Climate Change, Vol. 8 (April 2018): 325-332. [7] Mariësse A.E. van Sluisveld, et al., “Exploring the implications of lifestyle change in 2°C mitigation scenarios using the IMAGE integrated assessment model,” Technological Forecasting and Social Change, Vol. 102 (2016): 309-319. Prof. Dagomar Degroot, Georgetown University. Roughly 11,000 years ago, rising sea levels submerged Beringia, the vast land bridge that once connected the Old and New Worlds. Vikings and perhaps Polynesians briefly established a foothold in the Americas, but it was the voyage of Columbus in 1492 that firmly restored the ancient link between the world’s hemispheres. Plants, animals, and pathogens – the microscopic agents of disease – never before seen in the Americas now arrived in the very heart of the western hemisphere. It is commonly said that few organisms spread more quickly, or with more horrific consequences, than the microbes responsible for measles and smallpox. Since the original inhabitants of the Americas had never encountered them before, millions died. The great environmental historian Alfred Crosby first popularized these ideas in 1972. It took over thirty years before a climatologist, William Ruddiman, added a disturbing new wrinkle. What if so many people died so quickly across the Americas that it changed Earth’s climate? Abandoned fields and woodlands, once carefully cultivated, must have been overrun by wild plants that would have drawn huge amounts of carbon dioxide out of the atmosphere. Perhaps that was the cause of a sixteenth-century drop in atmospheric carbon dioxide, which scientists had earlier uncovered by sampling ancient bubbles in polar ice sheets. By weakening the greenhouse effect, the drop might have exacerbated cooling already underway during the “Grindelwald Fluctuation”: an especially frigid stretch of a much older cold period called the “Little Ice Age.” Last month, an extraordinary article by a team of scholars from University College London captured international headlines by uncovering new evidence for these apparent relationships. The authors calculate that nearly 56 million hectares previously used for food production must have been abandoned in just the century after 1492, when they estimate that epidemics killed 90% of the roughly 60 million people indigenous to the Americas. They conclude that roughly half of the simultaneous dip in atmospheric carbon dioxide cannot be accounted for unless wild plants grew rapidly across these vast territories. 
On social media, the article went viral at a time when the Trump Administration’s wanton disregard for the lives of Latin American refugees seems matched only by its contempt for climate science. For many, the links between colonial violence and climate change never appeared clearer – or more firmly rooted in the history of white supremacy. Some may wonder whether it is wise to quibble with science that offers urgently needed perspectives on very real, and very alarming, relationships in our present. Yet bold claims naturally invite questions and criticism, and so it is with this new article. Historians – who were not among the co-authors – may point out that the article relies on dated scholarship to calculate the size of pre-contact populations in the Americas, and the causes for their decline. Newer work has in fact found little evidence for pan-American pandemics before the seventeenth century. More importantly, the article’s headline-grabbing conclusions depend on a chain of speculative relationships, each with enough uncertainties to call the entire chain into question. For example, some cores exhumed from Antarctic ice sheets appear to reveal a gradual decline in atmospheric carbon dioxide during the sixteenth century, while others apparently show an abrupt fall around 1590. Part of the reason may have to do with local atmospheric variations. Yet the difference cannot be dismissed, since it is hard to imagine how gradual depopulation could have led to an abrupt fall in 1590. To take another example, the article leans on computer models and datasets that estimate the historical expansion of cropland and pasture. Models cited in the article suggest that the area under human cultivation steadily increased from 1500 until 1700: precisely the period when its decline supposedly cooled the Earth. An increase would make sense, considering that the world’s human population likely rose by as many as 100 million people over the course of the sixteenth century. Meanwhile, merchants and governments across Eurasia depleted woodlands to power new industries and arm growing militaries. [Figure: Changes in the extent and distribution of historical cropland, 3000 BCE to the present, according to the HYDE 3.1 database of human-induced global land-use change.] In any case, models and datasets may generate tidy numbers and figures, but they are by nature inexact tools for an era when few kept careful or reliable track of cultivated land. Models may differ enormously in their simulations of human land use; one, for example, shows 140 million more hectares of cropland than another for the year 1700. Remember that, according to the new article, the abandonment of just 56 million hectares in the Americas supposedly cooled the planet just a century earlier! If we can make educated guesses about land use changes across Asia or Europe, we know next to nothing about what might have happened in sixteenth-century Africa. Demographic changes across that vast and diverse continent may well have either amplified or diminished the climatic impact of depopulation in the Americas. And even in the Americas, we cannot easily model the relationship between human populations and land use. Surging populations of animals imported by Europeans, for example, may have chewed through enough plants to hold off advancing forests. Moreover, the early death toll in the Americas was often especially high in communities at high elevations, where the tropical trees that absorb the most carbon could not grow. 
In short, we cannot firmly establish that depopulation in the Americas cooled the Earth. For that reason, it is missing the point to think of the new article as either “wrong” or “right”; rather, we should view it as a particularly interesting contribution to an ongoing academic conversation. Journalists in particular should also avoid exaggerating the article’s conclusions. The co-authors never claim, for example, that depopulation “caused” the Little Ice Age, as some headlines announced, nor even the Grindelwald Fluctuation. At most, it worsened cooling already underway during that especially frigid stretch of the Little Ice Age. For all the enduring questions it provokes, the new article draws welcome attention to the enormity of what it calls the “Great Dying” that accompanied European colonization, which was really more of a “Great Killing” given the deliberate role that many colonizers played in the disaster. It also highlights the momentous environmental changes that accompanied the European conquest. The so-called “Age of Exploration” linked not only the Americas but many previously isolated lands to the Old World, in complex ways that nevertheless reshaped entire continents to look more like Europe. We are still reckoning with and contributing to the resulting, massive decline in plant and animal biomass and diversity. Not for nothing do some date the “Anthropocene,” the proposed geological epoch distinguished by human dominion over the natural world, to the sixteenth century. All of these issues also shed much-needed light on the Little Ice Age. Whatever its cause, we now know that climatic cooling had profound consequences for contemporary societies. Cooling and associated changes in atmospheric and oceanic circulation provoked harvest failures that all too often resulted in famines. In community after community, the malnourished repeatedly fell victim to outbreaks of epidemic disease, and mounting misery led many to take up arms against contemporary governments. Some communities and societies were resilient, even adaptive in the face of these calamities, but often partly by taking advantage of the less fortunate. Whether or not the New World genocide led to cooling, the sixteenth and seventeenth centuries offer plenty of warnings for our time. My thanks to Georgetown environmental historians John McNeill and Timothy Newfield for their help with this article, to paleoclimatologist Jürg Luterbacher for answering my questions about ice cores, and to colleagues who responded to my initial reflections on social media. Works Cited:
Archer, S. "Colonialism and Other Afflictions: Rethinking Native American Health History." History Compass 14 (2016): 511-21. Crosby, Alfred W. “Conquistador y pestilencia: the first New World pandemic and the fall of the great Indian empires.” The Hispanic American Historical Review 47:3 (1967): 321-337. Crosby, Alfred W. The Columbian Exchange: Biological and Cultural Consequences of 1492. Westport: Greenwood Press, 1972. Crosby, Alfred W. Ecological Imperialism: The Biological Expansion of Europe, 900-1900, 2nd Edition. Cambridge: Cambridge University Press, 2004. Degroot, Dagomar. “Climate Change and Society from the Fifteenth Through the Eighteenth Centuries.” WIREs Climate Change Advanced Review. DOI: 10.1002/wcc.518. Degroot, Dagomar. The Frigid Golden Age: Climate Change, the Little Ice Age, and the Dutch Republic, 1560-1720. New York: Cambridge University Press, 2018. Gade, Daniel W. “Particularizing the Columbian exchange: Old World biota to Peru.” Journal of Historical Geography 48 (2015): 30. Goldewijk, Kees Klein, Arthur Beusen, Gerard Van Drecht, and Martine De Vos. “The HYDE 3.1 spatially explicit database of human‐induced global land‐use change over the past 12,000 years.” Global Ecology and Biogeography 20:1 (2011): 73-86. Jones, Emily Lena. “The ‘Columbian Exchange’ and landscapes of the Middle Rio Grande Valley, AD 1300–1900.” The Holocene (2015): 1704. Kelton, Paul. "The Great Southeastern Smallpox Epidemic, 1696-1700: The Region's First Major Epidemic?" In R. Ethridge and C. Hudson, eds., The Transformation of Southeastern Indians, 1540-1760. Koch, Alexander, Chris Brierley, Mark M. Maslin, and Simon L. Lewis. “Earth system impacts of the European arrival and Great Dying in the Americas after 1492.” Quaternary Science Reviews 207 (2019): 13-36. McCook, Stuart. “The Neo-Columbian Exchange: The Second Conquest of the Greater Caribbean, 1720-1930.” Latin American Research Review 46:4 (2011): 13. McNeill, J. R. “Woods and Warfare in World History.” Environmental History 9:3 (2004): 388-410. Melville, Elinor G. K. A Plague of Sheep: Environmental Consequences of the Conquest of Mexico. Cambridge: Cambridge University Press, 1997. PAGES2k Consortium. “A global multiproxy database for temperature reconstructions of the Common Era.” Scientific Data 4 (2017). doi:10.1038/sdata.2017.88. Parker, Geoffrey. Global Crisis: War, Climate Change and Catastrophe in the Seventeenth Century. New Haven: Yale University Press, 2013. Riley, James C. "Smallpox and American Indians Revisited." Journal of the History of Medicine and Allied Sciences 65 (2010): 445-77. Ruddiman, William. “The Anthropogenic Greenhouse Era Began Thousands of Years Ago.” Climatic Change 61 (2003): 261–93. Ruddiman, William. Plows, Plagues, and Petroleum: How Humans Took Control of Climate. Princeton, NJ: Princeton University Press, 2005. Sigl, Michael, et al. "Timing and climate forcing of volcanic eruptions for the past 2,500 years." Nature 523:7562 (2015): 543. Williams, Michael. Deforesting the Earth: From Prehistory to Global Crisis. Chicago: University of Chicago Press, 2002. Prof. Kate Brown, MIT. This is the second post in a collaborative series titled “Environmental Historians Debate: Can Nuclear Power Solve Climate Change?” hosted by the Network in Canadian History & Environment, the Climate History Network, and ActiveHistory.ca. Climate change is here to stay. So too, for the next several millennia, is radioactive fallout from nuclear accidents such as Chernobyl and Fukushima. 
Earthlings will also live with radioactive products from the production and testing of nuclear weapons. The question of whether next-generation nuclear power plants will be, as their promoters suggest, “perfectly safe” appears to decline in importance as we consider the catastrophic outcome of continued use of carbon-based fuels. Sea levels rising 10 feet, temperatures warming 3 degrees Celsius, tens of millions of climate refugees on the move. These predicted climate change catastrophes make nuclear accidents such as the 1986 Chernobyl accident look like a tiny blip in planetary time. Or maybe not. It is hard to compare an event in the past to one in the future that has not yet occurred. Researching the medical and environmental history of the Chernobyl disaster for the past four years, I have found that the health consequences were far greater than has been generally acknowledged. Rather than the 35 to 54 fatalities recorded by UN agencies, the count in Ukraine alone (which received the least amount of radioactive fallout of the three affected Soviet republics) ranges between 35,000 and 150,000 fatalities from exposures to Chernobyl radioactivity. Instead of 200 people hospitalized after the accident, my tally from the de-classified archives is at least 40,000 people in the three most affected republics just in the summer months following the disaster. We don’t have to focus just on human health to worry about the future of humans on earth. Following biologists around the Chernobyl Zone the past few years, I learned that in the most contaminated territories of the Chernobyl Zone radioactivity has knocked out insects and microbes that are essential for the job of decomposition and pollination. Biologists Tim Mousseau and Anders Møller found radical decreases in pollinators in highly contaminated areas; the fruit flies, bees, butterflies and dragonflies were decimated by radioactivity in soils where they lay their eggs. They found that fewer pollinators meant less productive fruit trees. With less fruit, fruit-eating birds like thrushes and warblers suffered demographically and declined in number. With few frugivores, fewer fruit trees and shrubs took root and grew. The team investigated 19 villages in a 15-kilometer circle around the blown plant and found that just two apple trees had set seed in the two decades after the 1986 explosion.[1] The loss of insects, especially pollinators, we know, spells doom for humans on earth.[2] There are, apparently, many ways for our species to go extinct. Climate change is just one possibility. Since Chernobyl, fewer corporations have been interested in building and maintaining nuclear power plants. In the past few decades, the cycle of nuclear power—building, maintaining, disposing of waste, and liability—has proven economically unfeasible and is winding down. Faced with intractable problems, regulations on classifying and cleaning up waste are being watered down. Westinghouse, the last U.S. builder of nuclear reactors, went bankrupt in 2017. It was bought out and struggles to complete orders for its AP1000 reactors. Now China and Russia are the main producers of reactors for civilian power. We don’t know much about China’s nuclear legacy. We know Russia’s safety record is dismal. Meanwhile, in most countries with nuclear reactors, an aging population of nuclear power operators, nuclear physicists, and radiation monitors is not being replaced by a younger generation. 
Probably the greatest obstacle to backing nuclear power as an alternative fuel is that we have run out of time. The long-promised fusion reactors, promoted with the billion-dollar might of the likes of Bill Gates and Jeff Bezos, are still decades in the future. Roy Scranton estimates in Learning to Die that we would have to bring online 12,000 new conventional nuclear power reactors in order to replace petro-carbon fuels. It takes a decade or two to build a reactor. Conventional and fusion reactors would come online at a time when the major coastal cities they would power are predicted to be underwater. In short, for a host of economic and infrastructure reasons, nuclear power as an alternative power source is not an option as a speedy and safe response to climate change. It makes more sense to take the billions invested in nuclear reactors and research and invest them in research on technologies that harvest energy from the wind, sun, thermal energy, biomass, tides and waves; solutions that depend on local conditions and local climates. Nuclear energy is seductive because it is a single fix-all to be plugged in anywhere by large entities, such as state ministries and corporations. This one-stop solution is the kind of modernist fix that got us into this mess in the first place. Instead, the far more plausible answer is multi-faceted, geographically specific, and sensitive to micro-ecological conditions. It will involve not a few corporations led by billionaire visionaries, but a democratized energy grid organized by people in communities who have deep knowledge of historic and ecological conditions in their localities. As they work to power their community locally, they will see the value of conserving, saving, and living perhaps a little more quietly. Kate Brown is a Professor of Science, Technology and Society at MIT. She is the award-winning author of A Biography of No Place: From Ethnic Borderland to Soviet Heartland; Plutopia: Nuclear Families in Atomic Cities and the Great Soviet and American Plutonium Disasters; and Dispatches from Dystopia: Histories of Places Not Yet Forgotten. She is currently finishing a book, A Manual for Survival, on the environmental and medical consequences of the Chernobyl disaster, to be published by Norton in 2019.
[1] Anders Pape Møller, Florian Barnier, Timothy A. Mousseau, “Ecosystems effects 25 years after Chernobyl: pollinators, fruit set and recruitment,” Oecologia (2012) 170: 1155–1165. [2] Jarvis, Brooke, “The Insect Apocalypse Is Here,” The New York Times, November 27, 2018, sec. Magazine. https://www.nytimes.com/2018/11/27/magazine/insect-apocalypse.html. Prof. Nancy Langston, Michigan Tech. This is the first post in a collaborative series titled “Environmental Historians Debate: Can Nuclear Power Solve Climate Change?” hosted by the Network in Canadian History & Environment, the Climate History Network, and ActiveHistory.ca. On March 28, 1979, I woke up late and rushed to catch the bus to my suburban high school in Rockville, MD. So it wasn't until I found my friends clustered around the radio in the cafeteria that I learned that, seventy-seven miles upwind of us, Three Mile Island Reactor Unit 2 was in partial meltdown. Two months after the disaster, when the containment of its radioactivity was still in dispute, I was chosen as a finalist for a National Science Foundation (NSF)-sponsored competition to showcase emerging young scientists. The prize was a tour of Australia, where we were expected to promote the stellar safety record and wondrous technology of the U.S. nuclear program. The timing wasn't perfect, to put it mildly. At the finalists' interview, I ended up in a lively argument with the NSF judges when they told me that the public's nuclear anxieties were irrational, and I replied that NSF's certainties of safety were even more irrational, given the measurable risks of a meltdown and the failure of the U.S. to promote energy conservation as an alternative. To no one's surprise, I was not chosen to represent America in that summer's nuclear wonders tour. Instead, I marched against nuclear power. When the movie China Syndrome came out the following spring, all my worst suspicions about nuclear risks found fictional confirmation. Four decades later I now teach the problematic history of nuclear power. Students use the emerging field of discard studies to explore the structural context of a society that creates vast volumes of toxic waste, designating certain landscapes as sacrifice zones. We turn to Traci Voyles' insights in Wastelanding to understand the appalling history of uranium mining, exploring how the Diné (Navajo) were made into disposable peoples by the nuclear mining industry. [1] We watch a few of the "Duck and Cover" movies from the 1950s to show how an enormous gap developed between potential nuclear hazards and possible individual responses. [2] When we examine the three major disasters in the history of nuclear energy—Three Mile Island, Chernobyl, and Fukushima—we use Diane Vaughan's concept of "the normalization of deviance" to explore the ways "disasters are socially organized and systematically produced by social structures” in high-risk industries.[3] After glancing at the risks of nuclear proliferation and terrorism, we finally turn to the challenges of high-level waste transport and storage. This is hardly an eco-modernist paean to the promise of nuclear power. I sound less like Robert Stone in his 2013 pro-nuclear documentary Pandora's Promise and much more like the younger Robert Stone in his 1988 documentary Radio Bikini, which focuses on the horrors of nuclear weapons testing and fallout. [4] By the end of the segments on nuclear, my students fully expect me to call for an end to nuclear power. 
But I do the opposite: I call for continuing, not shuttering, nuclear power plants. Why? Because the risks of climate change are overwhelmingly greater than the risks of all stages of the nuclear cycle combined. I am convinced that to have a chance of avoiding the existential threat of runaway climate change, we must keep the globe's clunky, aging, awkwardly designed 451 nuclear reactors limping along for the foreseeable future. Until renewables have replaced all existing fossil fuels, closing aging nuclear plants would mean game over for keeping warming to less than 2ºC. [5] To paraphrase Winston Churchill's comments on democracy: existing forms of nuclear power are the worst form of non-renewable energy—except for all the other forms ever yet tried. To meet the objectives of the Paris Agreement, global CO₂ emissions need to decline as rapidly as possible, reaching net-zero emissions sometime after 2050. We also need to remove CO₂ from the atmosphere at scale. The problem? We are accelerating in the wrong direction. A recent boom in coal and natural gas, and a recent shuttering of nuclear plants, means that while carbon emissions leveled off briefly in the mid-2010s, they are increasing again. [6] Yes, there's some good news in solar and wind, which are growing exponentially as prices drop. Energy prices from utility-scale solar plants have dropped 86% in the past decade, and new solar now costs $50/MWh, less than half the cost of coal. [7] But renewables are not scaling up quickly enough for the globe to reach zero emissions by 2030. Remarkable as their growth has been, it has not offset the growth in coal, oil, and gas use over the same time—much less replaced existing fossil fuels. Microgrid and battery technologies may be advanced enough within several decades to replace 100% of our energy needs, but right now we need more than renewables in our zero-emissions energy portfolio to control climate change. When renewables have replaced all existing fossil fuels in power production, that's the time to consider closing existing nuclear plants. In the U.S. right now, nuclear plants are our largest source of zero-emissions power, “producing about 60% of zero-emission electricity and approximately 20% of total electricity.” [8] Globally, if nuclear were shut down, we would emit an additional 2.5 billion metric tons of CO2 each year. [9] That's a lot of CO2. Since 2013, competition from cheap natural gas—and lack of an effective price on carbon—has led to the closure of five nuclear plants in the US. Six more plants are scheduled for closure by 2025 (although they could operate for decades longer, and they would be cost-effective if we priced the negative externalities of fossil fuel pollution with a carbon fee). These six plants generated nearly 60 million megawatt-hours in 2017. That's more than all U.S. solar panels combined. If those six plants close, domestic CO2 emissions will increase nearly 5%, erasing all recent climate gains from last decade's decline of coal. Here's another way to think about it: if we close just one single aging nuclear plant, Pennsylvania's notorious Three Mile Island, that will mean losing more zero-carbon power than all of the state’s renewable resources—solar, wind, geothermal, and hydro—put together. Retiring nuclear plants in the U.S. without increasing carbon emissions would require a massive transformation of the transport sector, for example. If we were to retire U.S. 
nuclear plants, engineer Elizabeth Ervin estimates that 98.5% of our passenger cars on the road—134 million of them—would have to be eliminated to keep U.S. carbon dioxide emissions from increasing. [10] As the editorial staff of the Boston Globe calculated for Massachusetts, in 2017 "solar panels and wind turbines generated less than 5% of the utility-scale electricity" generated in the state. If the 680-megawatt Pilgrim reactor is closed as scheduled in 2019, that would "remove in one day more zero-emission electricity production than all the new windmills and solar panels Massachusetts has added over the last 20 years." [11] When nuclear plants have shut in recent years, fossil fuel emissions have increased. After Southern California Edison retired two reactors at the San Onofre nuclear plant in 2013, California electricity sector emissions rose 24% the next year (Plummer 2016; Kern 2016). When Vermont Yankee closed in 2014, CO2 emissions for the state electricity sector rose 5%. After Fukushima, when Japan began shutting down some reactors, its carbon emissions increased nearly 10%. Germany retired 8 of its 17 reactors after Fukushima, and the decline in its emissions quickly came to a halt. Germany's emissions increased from 2012 to 2013, fell in 2014, but increased again in 2015. Even with a sustained commitment to bringing new solar and wind online, Germany's decision to shut nuclear plants undermined its climate efforts. [12] Some states, such as California, have negotiated agreements to ensure that nuclear energy is replaced only with renewables. But that does not eliminate the climate hit from closing nuclear plants. In California, "Pacific Gas and Electric has announced that it plans to replace Diablo Canyon with zero-emitting resources, primarily renewables and energy efficiency. The utility has about eight years to prepare for these replacements.” [13] Electricity sector emissions won't go up, but because those renewables are replacing other zero-emissions energy sources rather than high-emissions energy sources, California will still be further from meeting its essential goal of zero-emissions energy. Substituting one zero-emissions source with another does nothing to slow climate change. 
Comparative Risks
Radiation is indeed frightening. In ordinary operation, coal plants release 100 times more radiation than the equivalent nuclear reactor—but it’s not ordinary operation that folks are concerned about; it's the risk of a meltdown. Risk is worth interrogating more closely, however. Risk is not just how scary something is. It's defined as hazard (the harm from something) times probability (the chance of that something happening). For example, the hazard of mutant zombies chewing our faces off is vast, but the probability (one trusts) is zero, meaning that the zombie risk is zero. The hazard of a full nuclear meltdown—defined as core damage from overheating—is indeed very high, but not as high as most of my students imagine. When I ask my students what would happen if one of our three nuclear power stations here in Michigan went into full meltdown, they make wild guesses: "the complete eradication of all biodiversity in North America? Our state uninhabitable for the next 100,000 years?" These are way off. To better evaluate the hazard from a meltdown, we look at the worst-case scenarios calculated for Fukushima. 
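Before turning to those scenarios, the hazard-times-probability framing can be made concrete with a rough back-of-the-envelope sketch, using the accident frequency cited later in this essay (three partial meltdowns in roughly 17,000 cumulative reactor-years of commercial operation):

\[ \text{Risk} = \text{Hazard} \times \text{Probability}, \qquad P(\text{serious accident per reactor-year}) \approx \frac{3}{17{,}000} \approx 1.8 \times 10^{-4} \approx 0.018\%. \]

An enormous hazard multiplied by a probability of zero (the zombie case) yields zero risk; a very large hazard multiplied by a small but nonzero annual probability (the meltdown case) yields a risk that is modest in any single reactor-year but real across hundreds of reactors operating for decades.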
If TEPCO had entirely evacuated the Daiichi plant during the disaster, full meltdowns at all 4 reactors would have occurred, and if that had necessitated the evacuation of personnel from neighboring nuclear plants, those could have melted down as well. The U.S. military and State Department modeled these worst-case scenarios during the crisis, because they had to figure out which people should be evacuated in case the worst happened. Doubting TEPCO's and the Japanese government's figures, they modeled even worse possibilities, calculating how far radiation could have travelled, given prevailing winds. While media speculation at the time centered on the potential evacuation of Tokyo, the U.S. modelers calculated that even if the worst-case scenario had come true, "there was no plausible scenario in which Tokyo, Yokosuka, or Yokota could be subject to dangerous levels of airborne radiation." As Jeffrey Bader explains in Foreign Affairs, Lawrence Livermore National Laboratory modeled "simultaneous meltdowns at one or more reactors and complete drainage of the spent fuel pools at two reactors. The results for such worst-case scenarios, assuming unfavorable wind patterns from the reactor site and a lack of precipitation, suggested that radioactive plumes in excess of EPA standards would not reach within 75 to 100 miles of Tokyo." [14] Yes, that represents an enormous hazard, even if it doesn't mean that entire continents would be rendered uninhabitable. Chernobyl was an even worse disaster, magnified by poor Soviet nuclear design and worse maintenance. The 18-mile radius exclusion zone around Chernobyl is still off limits for permanent residents, and the Zone of Alienation covers roughly 1,000 square miles. Seventy percent of the fallout landed on Belarus, contaminating 25% of the country. People who work in the exclusion zone must rotate in and out to limit radiation exposure, and extremely toxic hot spots persist. [15] Entire continents may not be rendered uninhabitable by a meltdown, but the hazards are terrible for those exposed, cast out from their homes, suffering possible radiation-related illnesses. But while these hazards are high, the probability of them recurring is much lower than the probability of climate change. The nuclear industry has experienced 3 partial meltdowns in 17,000 cumulative reactor-years of commercial operation, which translates to a 0.018% probability of any given reactor having a serious accident in any given year. That's a significant probability for something with such a high hazard, which means the risk is real. The industry is very good at responding to historical disasters and designing new safety systems to lessen the risk of the same accident occurring twice, but probably less good at anticipating new things that can and will go wrong in such complex systems. These are real risks, and dismissing them as negligible is not persuasive to most people—certainly not to the 81% of Mexicans who oppose nuclear energy production, or the 63% of Canadians who oppose it, or the 48% of Americans who oppose it. [16] For many people who oppose nuclear power, concerns about meltdowns are matched by anxiety about waste storage. Long-term storage for high-level nuclear waste is a huge cost issue. Technical solutions exist for the containment of high-level waste, as Finland's current project to build the world's first high-level, long-term waste storage facility shows—but most countries have been unwilling to pay the necessary costs. Finland's project costs more than $5 billion. 
But Finland has a negative externalities law for industries, so the company pays, not the public. In comparison, the environmental and health costs of coal mining in the U.S. alone—which the companies do not have to pay for—are at least $345 billion/year, dwarfing the costs of long-term nuclear storage. [17] To make intelligent decisions about energy portfolios in the context of climate change, we need to compare nuclear's risks to the risks of coal pollution and climate deaths. Consider Michigan, I tell my students, most of whom are from the state. In 2015, 30% of our electricity came from nuclear and 50% from coal. Do these coal plants present risks as great as our nuclear plants? Students typically assume not. They guess that historically, global deaths from nuclear disasters and radiation exposure have been much higher than global deaths from coal. But they are off by 400-fold. For every person who has ever died in a nuclear accident or from long-term radiation exposure, more than 400 have died from coal. [18] Coal represents a kind of "slow violence," in Rob Nixon's evocative phrase, so its toll is largely underestimated. [19] Because there's not a single crippling accident that captures the world's attention, the hazards of coal combustion are often invisible to most Americans. But they are enormous. Nine million people died prematurely from pollution in 2015, mostly from air pollution. Of that airborne pollution, 85% comes from fossil fuel and biomass combustion, mostly from coal. Coal kills millions each and every year—even ignoring the risks of climate change. [20] One way to run these numbers is to compare mortality rates per trillion kWhr of energy production (this includes deaths, both direct and indirect, from Chernobyl and Fukushima, using the highest estimates of deaths from radiation exposure). James Conca's analysis compares the figures for different countries, and shows that coal from China (75% of China's electricity) has a mortality rate of 170,000 deaths per trillion kWhr. Rooftop solar sees 440 deaths per trillion kWhr (installers fall off roofs) and wind turbines cause 150. In the United States, nuclear has led to 0.1 deaths per trillion kWhr. Worldwide, including Chernobyl and Fukushima and counting both direct deaths and indirect deaths from radiation exposure and increased cancer risk, nuclear's history has witnessed 90 deaths per trillion kWhr of energy production. Indigenous peoples have disproportionately borne the risks from mining and processing of uranium. But shutting down nuclear wouldn't solve the environmental justice problem, because continued reliance on fossil fuels also disproportionately hurts the poor. Globally, 92% of the 9 million people who died from pollution in 2015 lived in lower-income countries or communities. [21] Senator Cory Booker noted: "My city [Newark] has asthma rates for our children that are epidemic, about three to four times what they are in other communities. I know what the urgencies are here in the immediate, right now … I also know we can't get there unless we substantially support and even embolden the nuclear energy sector. We've got to support the existing fleet." [22] When comparing risks from fossil fuels versus other energy sources, we also need to factor in the risks from runaway climate change. These are harder to measure with certainty. One estimate figures 14 million additional deaths from heat-related illness alone if temperatures rise 4ºC—which is what will happen if we continue business as usual. 
[23] Another study estimates more than half a million excess deaths from reduced food production alone by 2050. [24] The worst-case scenario? Take a look at the end-Permian mass extinctions 252 million years ago. Emissions of large amounts of CO2 led to the extinction of 90% of life in the ocean and 75% on land. As Peter Brannen writes, "Today the consequence of quickly injecting huge pulses of carbon dioxide into the air is discussed as if the threat exists only in the speculative output of computer models. But, as scientists have discovered, this has happened many times before, and sometimes the results were catastrophic." We are now releasing CO2 at 10 times the rate that sparked the end-Permian. The hazard of runaway climate change? Existential. The probability? Unfortunately, it's higher than the probability of another nuclear reactor meltdown, given the fact that our greenhouse gas emissions are increasing exponentially. In the future, we hope that conservation programs will lead to a significant reduction in power demands, allowing renewables with batteries for backup and microgrids for resilience to supply the globe's power needs. But even that hopeful vision isn't fully sustainable, because renewables involve significant mining of non-renewable resources. Every energy source—including solar, wind, and geothermal—creates mining waste and greenhouse gas emissions from the full life cycle (which includes mining, processing, transport, energy production, and waste storage). Life cycle analyses show that coal generates 1000 grams of CO2 per kWh. Solar generates 58 grams—much less than coal, but more than wind and nuclear at 5 grams of CO2 per kWh. [25] Even conservation, laudable as it is, has a mining and greenhouse gas footprint, because it typically involves the production of foam or cellulose insulation, which includes some polystyrene with all of its associated plastic ills. Nuclear is not classified as renewable because its energy source is mined, but does it really require more mining per kWh of energy produced than solar or wind? I didn't have time to track down these figures, but it's worth considering the question. If we use the carbon footprint of mining as a rough proxy for disturbance from mining, then nuclear is on par with wind, and less problematic than coal and natural gas. My broader point is that assigning simple categories to energy sources such as “renewable vs. non-renewable" or "sustainable vs. non-sustainable" is problematic. Every energy source involves the mining of non-renewable resources such as copper, nickel, and zinc. Every energy source creates some greenhouse gas emissions during its full life cycle. But wind, solar, geothermal, and nuclear are orders of magnitude below fossil fuels, and controlling runaway climate change requires all of them right now, with renewables scaling up as quickly as possible and nuclear giving us time for that to happen. 
Thorium and Next-Gen Plant Designs
While I'm no eco-modernist, I am intrigued by emerging technologies such as thorium and next-gen plants that use a cradle-to-cradle design philosophy to re-imagine used fuel not as waste, but rather as a generative source of power for new energy. Yes, light water reactors (LWRs) can be made much safer than the older designs now in use. But no amount of tweaking will overcome the fact that light water reactors, which rely on water to prevent meltdowns, are inherently poor designs for commercial energy production. 
They were designed for nuclear submarines, where a water-cooled design had a fail-safe backup in case of power failure. Far better, safer fuels exist for nuclear plants, such as thorium. Thorium is an element abundant in the U.S. and Canada, and while radioactive like uranium, it presents far fewer mining, power generation, and waste storage risks. Of mined thorium, 99% can produce energy, compared to uranium, of which only 1% creates energy and the other 99% becomes radioactive waste. One ton of thorium can produce as much energy as 35 tons of uranium. What all this means is that there are two orders of magnitude less radioactive waste from thorium than from uranium. Meltdown risks are negligible, because thorium-fueled molten salt reactors are self-cooling in case of disaster, not relying on water to stop a meltdown. [26] Spent fuel cannot be weaponized easily or seized by terrorists. Thorium, in other words, eliminates many of the potential hazards from conventional nuclear. So why aren't we using this miracle fuel for nuclear plants? Uranium is a legacy of the Cold War. We once had a functional thorium reactor developed at the Oak Ridge National Laboratory during the late 1960s. It ran for five years "before being axed by the Nixon administration. The reason for its cancellation: it produced too little plutonium for making nuclear weapons. Today, that would be seen as a distinct advantage. Without the Cold War, the thorium reactor might well have been the power plant of choice for utilities everywhere." [27] At the time, one key purpose of our nuclear energy program was to supply our nuclear weapons program. Now, of course, thorium's Cold War bug has become its feature, but we are woefully behind in thorium research. The Netherlands started up a proof-of-concept thorium reactor in 2017, the first one in several decades, and both India and China are busy researching new designs. Significant research and regulatory hurdles remain before thorium reactors can replace uranium reactors, so like scaled-up solar and wind storage, it's not going to be online soon enough to prevent 2ºC of warming. Over the longer haul, thorium might give us a little breathing space to make fundamental political and structural changes, but it doesn't obviate the need for those changes. To illustrate this, I show my students the opening clips of Okkupert, a Norwegian TV series that starts with a hurricane powered by climate change that kills thousands of Norwegians. Shocked, the citizens elect as Prime Minister the leader of the Green Party, and he promptly shuts down the flow of North Sea oil. To provide Norway's domestic energy needs, he opens a nuclear plant powered by thorium. Cutting off the flow of oil destabilizes the power relations that have developed around North Sea oil. Russia invades Norway to turn the oil back on, and the EU and the U.S. simply watch, eager for their share. [28] New energy sources don't erase generations of political balancing acts around fossil fuels, in other words. Politics don't vanish when the fossil fuels get turned off. Conservation has enormous potential to help us reduce fossil fuel emissions, but on its own, it won't get us to where we need to be fast enough. As recently as 2012, more than 1 billion people lacked access to electricity—15% of the world's population. Another 41% of the world's population, or 2.9 billion people, lacked access to safe cooking fuels. 
[29] Electricity provision helps ensure a host of other goals: education for girls, independent incomes, safer indoor air, tangible health benefits, etc. Even with a 50% reduction of energy use in more-developed nations such as the U.S., where nearly 66% of energy is wasted, global energy demands will continue to rise. In the US, according to the Sankey diagrams of energy flows produced by Lawrence Livermore Laboratory, there were 66.7 quads (a quad is one quadrillion BTUs) of "rejected energy" in 2017 compared to 31.1 quads of energy services. This means that we're wasting 68% of the energy produced, particularly in transportation (where 79% of energy is wasted) and electricity generation (where 66.4% of energy is wasted). [30] The second law of thermodynamics means that some energy will always be wasted—but nonetheless, there's enormous scope for effective conservation in the US. And again, we are heading in the wrong direction: in 1970, Americans wasted 49% of energy, far less than we waste today. [31] The problem is structural, not one of personal behavior. It’s not just that we're doing a poor job insulating our houses or changing to LED lightbulbs. As David Roberts points out, "at a deeper level, waste is all about system design. The decline in overall efficiency in the U.S. economy mainly has to do with the increasing role of inefficient energy systems. Specifically, the years since 1970 have seen a substantial increase in electricity consumption and private vehicles for transportation, two energy services that are particularly inefficient." [32] Better-designed urban systems, better electricity systems, better transport systems: all will go a long way toward decreasing carbon emissions. New solar mega-farms, like the one being constructed in Morocco that will eventually power a million households, will someday help us meet the globe's new energy needs, possibly with the help of thorium. Until that day, maintaining existing nuclear plants, problematic as they are, is essential. Prof. Nancy Langston is an environmental historian who explores the connections between toxics, environmental health, and industrial changes in Lake Superior and other boreal watersheds. [1] Traci Brynne Voyles, Wastelanding: Legacies of Uranium Mining in Navajo Country (Minneapolis: Univ of Minnesota Press, 2015). [2] Nuclear Vault, Duck And Cover (1951) Bert The Turtle, https://www.youtube.com/watch?v=IKqXu-5jw60. [3] Diane Vaughan, The Challenger Launch Decision: Risky Technology, Culture, and Deviance at NASA (Chicago: University of Chicago Press, 2016). [4] Rens Van Munster and Casper Sylvest, “Pro-Nuclear Environmentalism: Should We Learn to Stop Worrying and Love Nuclear Energy?,” Technology and Culture 56, no. 4 (2015): 789–811. [5] Eric Holthaus, “It’s Time to Go Nuclear in the Fight against Climate Change,” Grist, January 12, 2018, https://grist.org/article/its-time-to-go-nuclear-in-the-fight-against-climate-change/. [6] Brady Dennis and Chris Mooney, “‘We Are in Trouble.’ Global Carbon Emissions Reached a Record High in 2018,” Washington Post, December 5, 2018, https://www.washingtonpost.com/energy-environment/2018/12/05/we-are-trouble-global-carbon-emissions-reached-new-record-high/. [7] Jeremy Berke, “One Simple Chart Shows Why an Energy Revolution Is Coming — and Who Is Likely to Come out on Top,” Business Insider, May 8, 2018, https://www.businessinsider.com/solar-power-cost-decrease-2018-5. [8] Rebecca Kern, “As U.S. 
Nuclear Plants Close, Carbon Emissions Could Go Up” (Bloomberg Environment & Energy Report, 2016), https://www.bna.com/us-nuclear-plant-n73014445640/. [9] “Environmental Impacts,” accessed December 3, 2018, https://sites.psu.edu/jensci/2014/02/26/environmental-impacts/. [10] Elizabeth Ervin, “Nuclear Energy Statistics,” accessed December 3, 2018, https://www.google.com/search?q=mining+required+per+MW+of+energy+produced+by+nuclear&oq=mining+required+per+MW+of+energy+produced+by+nuclear&aqs=chrome..69i57.8179j0j4&sourceid=chrome&ie=UTF-8. [11] Editorial staff, “Retiring More Nuclear Plants Could Hurt Mass. Climate Goals - The Boston Globe,” Boston Globe, June 2, 2018, https://www.bostonglobe.com/opinion/editorials/2018/06/02/retiring-more-nuclear-plants-could-hurt-mass-climate-goals/z0PRjeQPr0TIVBtsYB7rpI/story.html. [12] Kern, “As U.S. Nuclear Plants Close, Carbon Emissions Could Go Up.” [13] Kern. [14] Jeffrey A. Bader, “Inside the White House During Fukushima,” Foreign Affairs, March 8, 2012, https://www.foreignaffairs.com/articles/americas/2012-03-08/inside-white-house-during-fukushima. [15] Luke Spencer, “12 Facts About Chernobyl’s Exclusion Zone 30 Years After the Disaster,” Mental Floss, April 26, 2016, http://mentalfloss.com/article/78779/12-facts-about-chernobyls-exclusion-zone-30-years-after-disaster. [16] Hannah Ritchie, “It Goes Completely against What Most Believe, but out of All Major Energy Sources, Nuclear Is the Safest,” Our World in Data (blog), July 24, 2017, https://ourworldindata.org/what-is-the-safest-form-of-energy. [17] Henry Fountain, “Finland Works, Quietly, to Bury Its Nuclear Reactor Waste,” The New York Times, June 9, 2017, https://www.nytimes.com/2017/06/09/science/nuclear-reactor-waste-finland.html. [18] Ritchie, “It Goes Completely against What Most Believe, but out of All Major Energy Sources, Nuclear Is the Safest.” [19] Rob Nixon, Slow Violence and the Environmentalism of the Poor (Cambridge, Mass: Harvard University Press, 2011). [20] Philip J. Landrigan et al., “The Lancet Commission on Pollution and Health,” The Lancet 391, no. 10119 (February 3, 2018): 462–512, https://doi.org/10.1016/S0140-6736(17)32345-0. [21] Landrigan et al. [22] James Conca, “How Deadly Is Your Kilowatt? We Rank The Killer Energy Sources,” Forbes, June 10, 2012, https://www.forbes.com/sites/jamesconca/2012/06/10/energys-deathprint-a-price-always-paid/. [23] Greg Ip, “Adding Up the Cost of Climate Change in Lost Lives,” Wall Street Journal, August 1, 2018, sec. Economy, https://www.wsj.com/articles/adding-up-the-cost-of-climate-change-in-lost-lives-1533121201. [24] Marco Springmann et al., “Global and Regional Health Effects of Future Food Production under Climate Change: A Modelling Study,” The Lancet 387, no. 10031 (May 7, 2016): 1937–46, https://doi.org/10.1016/S0140-6736(15)01156-3. [25] MIT Energy, “The Future of Nuclear Energy in a Carbon-Constrained World,” 2018. [26] Nicolas Cooper et al., “Should We Consider Using Liquid Fluoride Thorium Reactors for Power Generation?,” Environmental Science & Technology 45, no. 15 (August 1, 2011): 6237–38, https://doi.org/10.1021/es2021318. [27] Babbage, “The Nuke That Might Have Been,” The Economist, November 11, 2013, https://www.economist.com/babbage/2013/11/11/the-nuke-that-might-have-been. [28] Erik Skjoldbjærg, “Okkupert/Occupied,” 2015, https://www.netflix.com/title/80092654. 
[29] World Bank, “Sustainable Development Goal on Energy (SDG7) and the World Bank Group,” World Bank, May 26, 2016, http://www.worldbank.org/en/topic/energy/brief/sustainable-development-goal-on-energy-sdg7-and-the-world-bank-group. [30] Lawrence Livermore National Laboratory, “LLNL Flow Chart 2017 and 1970,” https://flowcharts.llnl.gov/. [31] David Roberts, “American Energy Use, in One Diagram,” Vox, April 17, 2017, https://www.vox.com/energy-and-environment/2017/4/13/15268604/american-energy-one-diagram. [32] Roberts. Professors Jim Clifford, Dagomar Degroot, and Daniel Macfarlane. This is the introductory post to a collaborative series titled “Environmental Historians Debate: Can Nuclear Power Solve Climate Change?”. It is hosted by the Network in Canadian History and Environment, the Climate History Network, and ActiveHistory.ca. Is nuclear power a saving grace - or the next step in humanity’s proverbial fall from grace? This series focuses on what environmental and energy historians can bring to discussions about nuclear power. It is a tripartite effort between Active History, the Climate History Network (CHN), and the Network in Canadian History and Environment (NiCHE), and will be cross-posted across all three platforms. Reflecting this hydra-headed approach, this series is co-edited by a member of each of those websites: Jim Clifford (Active History), Dagomar Degroot (CHN/HistoricalClimatology.com), and Daniel Macfarlane (NiCHE). Why a series on historians, nuclear power, and the future? After all, predicting the future is pretty much a fool’s errand, and one that historians tend to avoid. But this isn’t so much about prognosticating what is to come as using the knowledge and wisdom of history to inform dialogue about the present and future. It all started on Twitter, as these things often do. Daniel Macfarlane was tweeting back and forth with Sean Kheraj about some energy history books they had recently read. Daniel was lamenting that one ended with an arrogant screed about how nuclear energy was the only hope for the future, and anyone who didn’t think so was deluded. This led them to wonder - on Twitter, mind you - what environmental historians, and those who studied energy history in particular, thought of nuclear energy’s prospects. Some other scholars, many of whom will be represented in this series (Dagomar Degroot, Andrew Watson, Nancy Langston, Robynne Mellor), began chiming in online. The exchanges remained very collegial, but it was clear that there were some sharply diverging positions. This mirrored the stark divides one often finds among environmentalists and environmental studies students. To some, nuclear energy is just another dead end, like fossil fuels; to others, it offers humanity its only real hope of addressing climate change. The three editors of this series themselves fall at different points along a spectrum running from anti-nuclear to pro-nuclear, with an in-between that might best be called anti-anti-nuclear. Daniel Macfarlane is decidedly a nuclear pessimist, Dagomar Degroot sees an enduring role for nuclear fission on a limited scale, and Jim Clifford is not sure how to engage the nuclear debate within the context of continued inaction on carbon emissions. Each of the three will explain their basic positions below (in the first-person voice for the sake of coherence). Daniel Macfarlane: In my opinion, any energy and economic system that continues to foster our consumption and lifestyles is part of the problem. 
As Nancy Langston will show in this series, nuclear power is undoubtedly better than coal. But the belief that we can continue with the same standards of living is dangerous - the addiction to growth and consumption is the major driver of ecological problems (I’m firmly in the Prophet, rather than the Wizard, camp, a dyad which Andrew Watson will explicate in his contribution to this series). And nuclear energy fosters that addiction, on top of the threats from nuclear waste (which Robynne Mellor will discuss), nuclear fallout (the focus of Toshihiro Higuchi’s contribution), and nuclear accidents (which Kate Brown will address). I don’t think fully switching to nuclear power would solve our climate change problems. Nuclear power isn’t likely to stop us from driving cars and flying jets; from covering the earth in concrete, overconsuming meat, and having children. The only real energy solution is a drastic, huge reduction in energy consumption, primarily by the industrial and commercial sectors as well as those of us in the middle class and above within the “developed” world. Nuclear power is pitched as a panacea, a magical silver bullet that will allow us to have our cake and eat it too (in this case, I literally mean cake, as well as all the other consumer products we want). It would mean that we wouldn’t have to change our lifestyles. In that sense, switching to nuclear power is kind of the energy systems equivalent of banning plastic straws. If it is the first step in a long, long line of progressively harder steps, then great; but if it becomes the end in itself, a panacea that leads us to think that we’re doing enough and we can rest on our laurels, then it is an obstacle (for the record, I use metal straws - but getting rid of plastic straws alone isn’t going to make a noticeable dent in our plastics problem). The problem is our current systems - economic, political, and social - and nuclear energy is just going to prop up the problem. Today’s nuclear advocates sound an awful lot to me like the advocates of coal, petroleum, and hydropower from the past that I’ve researched - whose ranks often featured well-intentioned, educated, progressive, and preservationist-inclined folks. The history of energy transitions suggests that proponents and boosters of new energy forms are generally wrong about the hidden costs - why would nuclear be any different? Historians, of all people, should know about humanity’s propensity to irrationally stress the positives and downplay the negatives. If all of our other modern energy forms have been booby-trapped with major drawbacks once scaled up, why would nuclear be free of similar problems? I’m wary of any large-scale energy systems because of the rule of unintended consequences. I mean, when we started burning coal in the 19th century, who the heck thought that it could change the climate?! Who thought that hydropower reservoirs would concentrate mercury and emit methane? And none of these things can be used as weapons of unimaginable mass destruction, nor are they hazardous to the health of living organisms for eons as they decay. All technologies bite back, and the bigger the technological system, the bigger the bite. Dagomar Degroot: I’ll begin by admitting that I’m deeply sympathetic to Daniel’s point of view. Clearly, efforts should be made on every level – individual, municipal, national – to conserve energy and reduce consumption. 
Yet I am more skeptical than Daniel about the desirability, morality, and practicality of slashing our use of energy, and in turn our standards of living. My view is that most of our environmental challenges stem from inefficient and often immoral political and economic systems: systems that promote inequality and ignore environmental costs. Engines of consumption and exploitation that privilege the whims of the privileged over the needs of the majority now promote the wholesale destruction of tropical forests, the exhaustion of the oceans, the pollution of the atmosphere – all the myriad interconnected environmental perils of the Anthropocene. In these processes, the core problem is not precisely how much energy we use as a species, but rather how we generate energy and then how we use it for industry, agriculture, and transportation. Today, governments choose to promote fossil fuels over cleaner alternatives, not only because industries that produce fossil fuels have disproportionate political power, but also because our built environment – from cars to sprawling cities – has historically reflected and demanded the use of fossil fuel technologies. It doesn’t have to be this way. What we need are revolutionary policies and technologies that lead us to use energy more efficiently, to build with minimal carbon emissions, to generate energy cleanly, to promote social equality, and to privilege – above all else – environmental sustainability. Even in the developed world, policies that sharply reduce standards of living are to my mind unnecessary, even if they were politically viable (they aren’t). In the developing world, energy consumption will actually need to go up, lest millions remain consigned to desperate poverty. An important question for me is: how can we increase our consumption of energy while sharply reducing the environmental impact of energy consumption? Renewable energy is booming, and nuclear fission reactors are a far less appealing energy source than, for example, solar power plants or wind farms. If truly transformative technologies – such as controlled fusion – ever get off the ground, nuclear fission will be even less competitive. Fission reactors are costly and time-consuming to build, and some designs at least have turned out to be unsafe. As you will read, they also come with a host of unique problems. Yet at present, renewable energy alternatives cannot generate power on sufficient scales – with sufficient consistency – for every community. In all probability, we will need to construct new nuclear fission plants in order to reduce our carbon emissions quickly enough to avert truly catastrophic climate change. And we should be especially wary of decommissioning older nuclear fission reactors. Given the present limitations of renewable energy, those reactors are too often replaced by coal or natural gas. Yes, an enduring role for nuclear fission power is an unsavory prospect. But climate change on a scale that makes large parts of the Earth uninhabitable is considerably worse. Jim Clifford: I find a lot to agree with in both Daniel and Dagomar's contributions. I will use my space to build on Dagomar’s quip that political, social and economic change on the scale necessary to dramatically reduce global energy consumption during the 2020s is not possible. 
As a historian who has studied social and political change in the face of deep environmental challenges during the nineteenth century and who has taught early twentieth-century European history for the past six years, I think the socio-economic and cultural optimism of the Prophets is perhaps more unrealistic than the techno-optimism of the Wizard camp. Wizards and engineers can point to significant developments in the electrification of transportation, the various ways to dramatically increase solar and wind capture and storage, along with the plans for saltwater biofuels and promising new nuclear technologies. What evidence do we have of rapid progress towards an empowered grassroots democracy that suggests we can upend our culture and convince people to accept a significant reduction in their standards of living in the next decade? The riots in France are in part a reaction to a relatively minor effort to reduce people’s diesel fuel consumption; the Ontario Liberals’ tepid embrace of green energy helped bring a populist into office to dismantle much of what they accomplished; Australia has yet another prime minister after a government collapsed for trying to bring about small changes; and Justin “Canada is back” Trudeau bought a pipeline. So how do we achieve social, political, and cultural change on a global scale in a very short period of time? Individuals electing not to fly, having fewer children, or biking to work are not going to make a significant dent. We need societal change across the industrialized west. How in a democracy do we achieve this goal? I don’t see the power of the wealthy elite diminishing significantly, or our societies somehow transitioning away from capitalism on a global scale while maintaining enough stability to move rapidly and peacefully to a lower standard of living. We might end up with a global war that could devastate the global economy and our standard of living, but obviously, this is not the pathway we want to follow to solve the crisis, as it would just accelerate ecological destruction and human suffering. All of this is to say, when presented with the binary, I think the techno-optimist path, backed by the growing momentum for a Green New Deal to build this infrastructure, create jobs, and maintain middle-class standards of living, is the most viable one in the short to medium term. I hope our culture will start to shift quickly in response to the populist moment we are living through, and that by mid-century we can aim for a hybrid of the two approaches, as fear sets in and the generation that comes of age watching California burn rejects the culture of its parents and grandparents. And we need Prophets to imagine a transition away from today’s consumption-focused culture and to provide alternatives for people to embrace at some point in the future. In the meantime, if there is the local political will to invest billions of dollars in a few more nuclear energy plants, I don’t expect this will go very far toward solving the problem, nor will it dramatically increase the scale of the environmental risk our children face. Series Calendar
All of our contributors fall somewhere on this spectrum. We have purposefully sought out scholars with various viewpoints, and attempted to feature a diverse set of contributors. Below is the schedule for our first five posts, which have already been written. However, we are leaving the series open-ended - that is, we hope the posts will spark conversations and debates, and should any reader feel inclined to contribute their own post in response to the series, we are open to the possibility of adding more posts. January 30: Nancy Langston, “Closing Nuclear Plants Will Increase Climate Risks.” February 13: Kate Brown, “Next generation nuclear?”. February 27: Andrew Watson, “Only Dramatic Reductions in Energy Use Will Save the World From Climate Catastrophe: A Prophecy.” March 13: Robynne Mellor, “The Cold War Constraints of the Nuclear Energy Option.” March 27: Toshihiro Higuchi, “The Nuclear Renaissance in a World of Nuclear Apartheid.” Prof. David J. Nash, University of Brighton, UK, and University of the Witwatersrand, South Africa. To grasp the significance of global warming, and to confirm its connection to human activity, you have to know how climate has changed in the past. Scholars of past climate change know that understanding how climate has varied over historical timescales requires access to robust long-term datasets. This is not a problem for regions such as Europe and North America, which have a centuries-long tradition of recording meteorological data using weather instruments (thermometers, for example). However, for large areas of the world the ‘instrumental period’ begins, at best, in the late 19th or early 20th century. This includes Africa, where, with the exception of Algeria and South Africa, instrumental data for periods earlier than 1850 are sparse. To overcome such data scarcity, other approaches are used to reconstruct past climates, most notably through analyses of accounts of weather events and their impacts in historical documents. Compared to the wealth of documentary evidence available for areas such as Europe and China, there are relatively few collections of written materials that allow us to explore the historical climatology of Africa. Documents in Dutch exist from the area around Cape Town that date back to the earliest European settlers in 1652, and Arabic- and Portuguese-language documents from northern and southern Africa, respectively, are likely to include climate perspectives from even further back in time. However, the bulk of written evidence for Africa stems from the late 18th century onwards, with a proliferation of materials for the 19th century following the expansion of European colonial activity. These documents are increasingly used by historical climatologists to reconstruct sequences of rainfall variability for the African continent. This focus on rainfall isn’t surprising, given that rainfall was – and is – critical for human survival. As a result, people tended to write about its presence or absence in diaries, letters, and reports. In turn, these rainfall reconstructions are now used by historians as a backdrop when exploring climate-society relationships for specific time periods. It is therefore critical that we understand any issues with rainfall reconstructions in case they mislead or misinform. This article will take you under the hood of the practice of reconstructing past climate change. 
Its aims are to: (a) provide an overview of historical climatology research in Africa at continental to regional scales; and (b) point out how distinct approaches to rainfall reconstruction in different studies can potentially produce very different rainfall chronologies, even for the same geographical area (which of course alters the kinds of environmental histories that can be written about Africa). The article concludes with some personal reflections on how we might move towards a common approach to rainfall reconstruction for the African continent.
Different approaches to rainfall reconstruction in Africa
Most historical rainfall reconstructions for Africa use evidence from one or more source types (Figure 1). A small number of studies are based exclusively upon early instrumental meteorological data. Of these, some (the continent-wide analysis by Nicholson et al. in 2018, for example) combine rain gauge data published in 19th-century newspapers and reports with more systematically collected precipitation data from the 19th to 21st centuries, to produce quantitative or semi-quantitative time series. Others, such as Hannaford et al. (2015), for southeast Africa, use data digitized from ship logbooks to generate quantitative regional rainfall chronologies. Most reconstructions, however, draw on European traditions by using narrative accounts of weather and related phenomena contained within documentary sources (such as personal letters, diaries/journals, reports, newspapers, monographs and travelogues) to develop semi-quantitative relative rainfall chronologies. Some of the most widely available materials are those written by early explorers, missionaries, and figures of colonial authority. The use of such evidence permits the reconstruction of rainfall for periods well before the advent of meteorological data collection. The greatest number of regional documentary-based reconstructions is available for southern Africa, which forms the focus of this article. These draw on documentary evidence from a combination of published and unpublished sources, often using available instrumental data for verification and calibration, and span much of the 19th century. Where information density permits, it has been possible to reconstruct rainfall variability down to seasonal scales (see, for example, a study by Nash et al. in 2016). There are, in addition, continent-wide series that integrate narrative information from mainly published sources with available rainfall data (Nicholson et al., 2012, for 90 homogeneous rainfall regions across mainland Africa). An important point to note is that the various reconstructions adopt slightly different methodologies for analyzing documentary evidence. For example, all of the regional studies in southern Africa noted above use a five-point scale to classify annual rainfall (from –2 to +2; extremely dry to extremely wet). Scholars decide how to classify a specific rainy season in a region through qualitative analysis of the collective documentary evidence for that season. In other words, they take into account all quotations describing weather and related conditions. This contrasts with the approach used by Nicholson and colleagues in a 2012 continent-wide rainfall series. In that reconstruction, scholars attributed a numerical score on a seven-point scale (–3 to +3) to each individual quotation according to how wet or dry conditions appear to have been. They then summed and averaged the scores for each item of evidence for a specific region and year. 
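To make the contrast between these two aggregation strategies concrete, the short Python sketch below applies both to a handful of invented quotation scores. The region names, years, scores, and the helper function are hypothetical and purely illustrative; they do not reproduce the data, scoring decisions, or code of any of the studies cited in this article.

from statistics import mean

# Hypothetical quotation scores, invented for illustration only.
# Approach B (in the manner of Nicholson et al., 2012): each individual quotation
# is scored on a seven-point scale, -3 (extremely dry) to +3 (extremely wet),
# and the scores for a given region and year are then summed and averaged.
quotation_scores = {
    ("Kalahari", 1804): [-3, -2, -3],  # three independent remarks on a dry season
    ("Kalahari", 1805): [-3],          # a single, possibly unrepresentative, source
}

def per_quotation_average(region, year):
    # Mean of the individual quotation scores for one region-year.
    return mean(quotation_scores[(region, year)])

# Approach A (in the manner of the regional southern African studies): the analyst
# reads the collective evidence for a rainy season and assigns one consensus class
# on a five-point scale, -2 (extremely dry) to +2 (extremely wet). That qualitative
# judgement is recorded directly rather than computed.
consensus_class = {
    ("Kalahari", 1804): -2,
    ("Kalahari", 1805): -1,  # the lone dry report is weighed against its likely bias
}

for region, year in quotation_scores:
    print(region, year,
          "per-quotation average:", round(per_quotation_average(region, year), 2),
          "consensus class:", consensus_class[(region, year)])

In the sparsely documented second year, the per-quotation average is driven entirely by a single source, whereas the consensus classification lets the analyst weigh that source against its likely biases; this, in essence, is the sensitivity issue explored in the comparisons that follow.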
As we will see, these distinct analytical approaches, which may draw on different documentary evidence, may introduce significant discrepancies between rainfall series.
Comparisons between rainfall series
A compilation of all the available annually-resolved rainfall series for mainland southern Africa is shown in Figure 2. This includes seven series (g-m) based exclusively on documentary evidence, four regional series (c-f) from Nicholson et al. (2012) based on combined documentary evidence and rain gauge data, the 19th-century portion of the ships’ logbook reconstruction series (b) by Hannaford et al. (2015), and, for comparison, the 19th-century section of a width-based tree ring rainfall reconstruction (a) for western Zimbabwe by Therrell et al. (2006). With the exception of the Cape Winter Rains series, all are for areas of southern Africa that receive rainfall predominantly during the summer months. Fig. 2. Annually-resolved rainfall reconstructions for southern Africa, spanning the 19th century. (a) Tree-ring width series by Therrell et al. (2006); (b) Ships’ logbook-based reconstructions by Hannaford et al. (2015); (c-f) Combined documentary and rain-gauge reconstructions by Nicholson et al. (2012); (g-m) Documentary-based reconstructions by (g) Nash et al. (2018), (h) Grab and Zumthurm (2018), (i) Kelso and Vogel (2007), (j) Nash and Endfield (2002, 2008), (k) Nash and Grab (2010), (l) Nash et al. (2016), (m) Vogel (1989). This compilation shows that, in the 19th century, rainfall varied from place to place across southern Africa. However, we can identify a number of droughts that affected large areas of the subcontinent. Droughts, for example, stretched across southern Africa in the mid-1820s, mid-1830s, around 1850, early-mid-1860s, late-1870s, early-mid-1880s and mid-late-1890s. We can also pinpoint a smaller number of coherent wetter years: in, for example, the rainy seasons of 1863-1864 and 1890-1891. Analyses that use many different climate “proxies” - that is, sources that register but do not directly measure past climate change - indicate that the early-mid 1860s drought was the most severe of the 19th century, and that of the mid-late-1890s the most protracted (see, for example, studies by Neukom et al., 2014, and Nash, 2017). The inset map in Figure 2 reveals that a number of rainfall series overlap in their geographical coverage, which allows a direct comparison of results. In some cases, the overlap is between series created using very different methodologies. For the most part, there is good agreement between these overlapping series, but there are some significant differences. The rest of this article will focus on two of these periods of difference: the first decade of the 19th century in southeast Africa, and the 1890s in Malawi. How dry was the first decade of the 19th century in southeast Africa? Four rainfall series are available for southeast Africa for the first decade of the 19th century (Figure 3) – documentary series for South Central Africa and the Kalahari (by Nicholson et al., 2012), a tree-ring series for Zimbabwe (Therrell et al., 2006), and a ships’ log series for KwaZulu-Natal (Hannaford et al., 2015). Collectively, these series suggest that there was at least one major drought that potentially affected much of the region. This was a very important time in the history of southeast Africa. The multi-year drought is remembered vividly in Zulu oral traditions as the ‘mahlatule’ famine (translated as the time we were obliged to eat grass). 
Scholars have seen it as a trigger for political revolution and reorganization, one that ultimately led to the dominance of the Zulu polity. Fig. 3. Comparison of three annually-resolved rainfall reconstructions for southeast Africa for the first half of the 19th century, including the tree ring series for Zimbabwe by Therrell et al. (2006), the combined documentary and rain-gauge reconstructions for South Central Africa and the Kalahari by Nicholson et al. (2012), and the ships’ logbook reconstructions for southeast South Africa by Hannaford et al. (2015). The inset map shows the location of each series. Yet there are some discrepancies between the overlapping records, which have important implications for our understanding of relationships between climate change and society. For example, while the documentary-based South Central Africa series in Figure 3 suggests protracted drought from 1800 to 1811, the overlapping tree ring series for Zimbabwe infers periods of average or above-average rainfall, alternating with drought. A similar contrast is shown between the documentary-based Kalahari series (which encompasses the southern Kalahari but extends to the east coast of South Africa) and the overlapping ships’ logbook-based reconstruction for Royal National Park, KwaZulu-Natal. Since these series are based on different evidence, it is impossible to tell which is more likely to be ‘right’. However, the rainfall series based on documentary evidence are clearly less sensitive to interannual rainfall variability than those based on ships’ log data or tree rings, at least for the early 19th century. This is surprising, as a major strength of documentary evidence is normally the way that it captures extreme events. The reasons for these discrepancies are unclear, but are likely to be methodological. The Africa-wide rainfall series by Nicholson and colleagues, from which the South Central Africa and Kalahari series in Figure 3 are derived, is a model of research transparency – it identifies the evidence base for every year of the reconstruction, with all documentary and other data made available via the NOAA National Climatic Data Center. Inspection of this dataset indicates that the reconstructions for the early 1800s in southern Africa are based on a limited number of published monographs and travelogues, written mainly by explorers. While these are likely to include eyewitness testimonies, there is potential for bias towards drier conditions. The majority of authors were western European by birth and, in some cases, their writings reflected their first travels in the subcontinent. It wouldn’t be at all surprising if they found southern Africa significantly drier than home. How dry was the last decade of the 19th century in Malawi? The collective evidence for rainfall variability around present-day Malawi during the mid-late 19th century is shown in Figure 4. Here, two rainfall reconstructions overlap: the first, a reconstruction for three regions of the country based primarily on unpublished documentary evidence by Nash et al. (2018); and the South Central Africa series and adjacent rainfall zones of Nicholson et al. (2012). Fig. 4. Comparison of two annually-resolved rainfall reconstructions for southeast Africa for the second half of the 19th century, including a documentary-based reconstruction for three regions of Malawi (Nash et al., 2018), and the combined documentary and rain-gauge reconstruction for South Central Africa by Nicholson et al. (2012). 
The inset map shows the location of each series. Extreme events, such as the droughts of the early-1860s, mid-late-1870s, and mid-late-1880s, and a wetter period centred around 1890-91, are visible in both reconstructions. However, there are discrepancies in other decades, most notably during the 1890s where the Nicholson series indicates mainly normal to dry conditions, and the Nash series a run of very wet years. Delving deeper into the documentary evidence underpinning the Nicholson series suggests that the discrepancies may again be methodological, and strongly influenced by source materials. As with the other regional reconstructions for southern Africa, the Nash study bases annual classifications on average conditions across a large body of mainly unpublished primary documentary materials. Nicholson, by contrast, uses smaller numbers of mainly published documentary materials, combined with rain gauge data. An over-emphasis of references to dry conditions in these documents, combined with an absence of gauge-data for specific regions and years, could therefore skew the results. The way forward? There are two main take-home messages from this article. First, on the basis of a comparison of annually-resolved southern African rainfall series, documentary data appear less sensitive to precipitation variability than other types of proxy evidence, even for some extreme events. Discrepancies are most apparent for periods of the early 19th century, where documentary evidence is relatively sparse. Second, different approaches to reconstruction may produce different results, especially where documentary evidence is combined with gauge data. The summative approach used by Nicholson and colleagues, for example, where individual quotations are classified, summed and averaged, may be much more sensitive to bias from individual sources when data are sparse. Having identified these potential issues, one way forward might be to run some experimental studies using different approaches on the same collections of documentary evidence to assess the impact of methodological variability on rainfall reconstructions. This would be no small task, as it would mean re-analyzing some large datasets. However, it would confirm or dismiss the suggestions made here about the relative effectiveness of different methodologies. These experimental studies would help us to identify the "best practice" for reconstructing African rainfall. They would allow us to improve the robustness of the baseline data available for understanding historical rainfall variability in the continent likely to be most severely impacted by future climate change. They would also permit us to refine our understanding of past relationships between climatic fluctuations and the history of African communities. These relationships may offer some of our best perspectives on the future of African societies in a warming planet. Works Cited:
Brázdil, R. et al. 2005. "Historical climatology in Europe – the state of the art." Climatic Change 70: 363-430. Grab, S.W. and Zumthurm, T. 2018. "The land and its climate knows no transition, no middle ground, everywhere too much or too little: a documentary-based climate chronology for central Namibia, 1845–1900." International Journal of Climatology 38 (Suppl. 1): e643-e659. Hannaford, M.J. and Nash, D.J. 2016. "Climate, history, society over the last millennium in southeast Africa." Wiley Interdisciplinary Reviews-Climate Change 7: 370-392. Hannaford, M.J. et al. 2015. "Early-nineteenth-century southern African precipitation reconstructions from ships' logbooks." The Holocene 25: 379-390. Kelso, C. and Vogel, C.H. 2007. "The climate of Namaqualand in the nineteenth century." Climatic Change 83: 257-380. Nash, D.J., 2017. Changes in precipitation over southern Africa during recent centuries. Oxford Research Encyclopedia of Climate Science, doi: 10.1093/acrefore/9780190228620.013.539. Nash, D.J. and Endfield, G.H. 2002. "A 19th century climate chronology for the Kalahari region of central southern Africa derived from missionary correspondence." International Journal of Climatology 22: 821-841. Nash, D.J. and Endfield, G.H. 2008. "'Splendid rains have fallen': links between El Nino and rainfall variability in the Kalahari, 1840-1900." Climatic Change 86: 257-290. Nash, D.J. and Grab, S.W. 2010. "'A sky of brass and burning winds': documentary evidence of rainfall variability in the Kingdom of Lesotho, Southern Africa, 1824-1900." Climatic Change 101: 617-653. Nash, D.J. et al. 2018. "Rainfall variability over Malawi during the late 19th century." International Journal of Climatology 38 (Suppl. 1): e649-e642. Nash, D.J. et al. 2016. "Seasonal rainfall variability in southeast Africa during the nineteenth century reconstructed from documentary sources." Climatic Change 134: 605-619. Neukom, R. et al. 2014. "Multi-proxy summer and winter precipitation reconstruction for southern Africa over the last 200 years." Climate Dynamics 42: 2713-2716. Nicholson, S.E. et al. 2012. "Spatial reconstruction of semi-quantitative precipitation fields over Africa during the nineteenth century from documentary evidence and gauge data." Quaternary Research 78: 13-23. Nicholson, S.E. et al. 2018. "Rainfall over the African continent from the 19th through the 21st century." Global and Planetary Change 165: 114-127. Pfister, C. 2018. "Evidence from the archives of societies: Documentary evidence - overview". In: White, S., Pfister, C., Mauelshagen, F. (eds.) The Palgrave Handbook of Climate History. Palgrave Macmillan, London, pp. 37-47. Therrell, M.D. et al. 2006. "Tree-ring reconstructed rainfall variability in Zimbabwe." Climate Dynamics 26: 677-685. Vogel, C.H. 1989. "A documentary-derived climatic chronology for South Africa, 1820–1900." Climatic Change 14: 291-307. |