HISTORICALCLIMATOLOGY.COM

The good, bad, undefined Little Ice Age

6/3/2020

 
Dr. Heli Huhtamaa, University of Bern
Many of these recent books on the history of climate deal with the Little Ice Age. But what was it, exactly?

The Little Ice Age (LIA), a climatic phase that overlapped with the late-medieval and early modern periods, increasingly interests historians – academic and popular alike. Recently, they have tied the LIA to the outbreak of wars, famines, economic depressions and overall troublesome times. The term is no longer used by only a handful of climate historians. These days, perhaps due to increasing climate awareness overall, the period pops up in history-related print media, teaching, online blogs, and – of course – academic research, among other places. As the LIA is the last distinctive climatic regime before contemporary anthropogenic climate change, it can provide important analogues for better understanding climate-society relationships on various temporal scales – from the direct consequences of short-term extreme events to multi-centennial dynamics of climate resilience.
 
The term "Little Ice Age" was presumably introduced by François E. Matthes in 1939, when he chronicled the advance and retreat of glaciers over the past 4,000 years across the present-day United States [1]. Since the late 20th century, hundreds (if not thousands) of paleoclimate reconstructions and models have shed further light on the LIA, and the term has been widely accepted to indicate a multi-centennial cooler phase occurring in the Northern Hemisphere during the second millennium CE. However, that is where agreement over the universal characteristics of the LIA ends.
 
Even a brief reading of the scientific literature on the topic reveals remarkably differing views on the magnitude of the temperature depression, the spatial extent, and the onset and termination of the LIA. Within the discipline of history, the economic historians Morgan Kelly and Cormac Ó Gráda authored perhaps the best-known publication highlighting the uncertainties related to the magnitude and dating of the event [2]. They have also used change-point analysis to argue that there is no evidence for any statistically significant change in the mean or variance of European temperatures that could be associated with an onset of the LIA [3]. Furthermore, a recent scientific study demonstrated that it is impossible to assign globally coherent dates to the coldest phases of the LIA [4].

Considering the uncertainties related to its periodization, is it appropriate for historians to keep using the term LIA? In this article, I will argue that it is. However, this requires us to have some basic knowledge about the materials and methods used to detect past climate variability, as well as an understanding of the dynamical drivers of the LIA. My aim here is not to systematically analyse all available climate reconstruction data in relation to the LIA. Instead, I will explain what requirements reconstructions must meet for climatic change to be detected with statistical analysis, and then present a couple of relevant examples. Nor will I revisit earlier criticisms of the LIA concept [2], as these have been addressed elsewhere by historians and scientists [5, 6].
 
Above all, I hope to reveal why the science of the LIA gives us such a complex image of the period and why this is – in fact – not at all a serious problem for historians.  
 
 
Detecting the onset of the LIA
 
Scientists only began measuring weather with instruments in the 18th and 19th centuries. For periods further back in time, we can reconstruct past climate variability from climate proxies. Proxies are indirect climate data: materials that were influenced by the climate of the time when they were laid down, written down, formed, or grown. Proxies are found either in the “archives of nature,” such as tree rings, ice layers or sediment varves, or in the “archives of societies.” For example, administrative sources, chronicles and private diaries may contain descriptions of weather and records of climate-sensitive activities, like harvest onset or river ice break-up dates. The latter archive type, the written documents, holds climatic and meteorological information particularly relevant to human history [7].
 
The PAGES (Past Global Changes, an international association that bridges paleoclimatologists across the globe) 2k Consortium reconstructions indicate that mean temperatures in the Arctic, Europe and Asia were colder in the 13th and 14th centuries CE than in the previous centuries [8]. These multidecadal cooler anomalies may mark the possible onset period of the LIA (Figure 1a). Likewise, a growing number of individual climate proxy records show signs of cooling temperatures from the 1300s and 1400s onwards across the Northern Hemisphere (Figure 1b). Thus, if we aim to identify LIA cooling statistically in individual temperature reconstruction data, our time-series should reach beyond the 13th century. This wide temporal scale, however, poses a challenge: where can we find data that goes back so far?
FIGURE 1. a) 30-year-mean standardized (1190–1970) temperatures for the four Northern Hemisphere PAGES 2k regions [8]. b) Northern Hemisphere standardized (1000–1900) centennial temperature proxy anomalies from various types of data (ice cores, lake and ocean sediments, speleothems, tree rings and written documents) [9].

From written sources, climate variability is commonly reconstructed with one of two standard methods: the calibration-verification procedure or the index approach [10, 11]. The calibration-verification procedure first translates time-series of climate-sensitive information (like harvest dates from written sources) into meteorological units (such as degrees Celsius) by calibrating the historical data against an overlapping period of measured meteorological data. This statistical relationship is then verified against an independent period of overlapping meteorological data [10]. The index approach, on the other hand, first analyses historical climatic information qualitatively and then assigns an ordinal-scale value to each datapoint, e.g. from -3 (extremely cold) to +3 (extremely warm) [10]. The former method is better suited to detecting so-called low-frequency variability (slow and gradual changes like the LIA), whereas the latter identifies primarily year-to-year variability (see Figure 2). For Europe, for example, there is no documentary-based temperature series that covers the whole last millennium and was reconstructed with the calibration-verification procedure. This is because the climate data in European antique and medieval sources are too diverse and inconsistent for this method to transform the materials into reconstructions [10].
 
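The calibration-verification logic described above can be sketched in a few lines. The following Python example is illustrative only: the harvest-date and temperature series are synthetic, and a simple least-squares regression stands in for the more elaborate transfer functions used in published reconstructions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical example: 60 years of grape harvest dates (day of year) and
# measured summer temperatures (deg C). Warmer summers -> earlier harvests.
temp = 18 + rng.normal(0, 1, 60)
harvest_doy = 280 - 4 * (temp - 18) + rng.normal(0, 2, 60)

# Split the overlap with instrumental data: calibrate on the first 30 years,
# verify on the independent last 30 years.
cal, ver = slice(0, 30), slice(30, 60)

# Calibration: least-squares fit temperature = a * harvest_doy + b
a, b = np.polyfit(harvest_doy[cal], temp[cal], deg=1)

# Verification: does the fitted relationship hold on the independent period?
pred = a * harvest_doy[ver] + b
r = np.corrcoef(pred, temp[ver])[0, 1]
print(f"verification correlation r = {r:.2f}")

# Only if verification succeeds can the transfer function be applied to
# harvest dates from centuries before instrumental measurements.
```

The key design point is the independent verification period: a relationship that fits only the calibration years would be useless for the pre-instrumental past.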
Therefore, reconstructions that extend prior to 1500 CE are usually compiled with the index method. Unfortunately, climate reconstructions made from documentary data with this method are not well-suited to detecting the LIA change, as they are known to indicate low-frequency variability poorly [10]. At first sight, one exception to the absence of calibration-verification reconstructions extending further back in time appears to be the Netherlands winter and summer temperature reconstruction by A. F. V. van Engelen and others [12]. However, these reconstructions also pose a challenge if we wish to statistically identify change over the 13th or 14th centuries, because even visual inspection reveals that the early section of the reconstruction appears to be based on ordinal indices. For example, the summer temperature series takes only seven distinct values before the 14th century (Figure 2). Even these reconstructions are therefore not well-suited to revealing the onset of the LIA, as there is a danger that any change detectable with statistical analysis would indicate a change in the reconstruction materials and methods used, not a change in temperatures.
FIGURE 2. Netherlands December–February (left) and June–August (right) temperature reconstructions [12]. Note the change in the reconstructed values taking place over c. the 14th century. The earlier part of the reconstruction visually resembles an index-based reconstruction, and the later part a calibration-verification one.

Because of the lack of adequate document-based climate reconstructions covering the whole past millennium, the onset of the LIA needs to be detected from reconstructions compiled using natural proxy data. Yet these reconstructions give markedly varying periodisations for the LIA. This is partly because different climate proxies, such as tree-ring width or isotopic concentrations in ice-core data, have different “response windows.” This means that different data from different locations are sensitive to different meteorological parameters (such as temperature or precipitation) over different timescales (from months to decades). Raw proxy data therefore portray a rather heterogeneous image of temperature variability over the past 1000 years (Figure 1b). Furthermore, non-climatic factors also influence the measured proxies, and it is sometimes difficult to distinguish heterogeneity that registers climate and weather from heterogeneity that simply reflects noise. Because proxy data are sensitive to past climate variability only in certain locations, over certain seasons, and with imperfect accuracy, we have just a partial view of what happened to climate before systematic meteorological measurements began.
 
However, the differing response windows of different proxy data do not entirely explain our heterogeneous image of the LIA, as reconstructions made from the same material type can be in interannual to multidecadal disagreement over past temperature variability. For example, when detecting change in mean warm-season temperatures in northern Eurasia, three tree-ring density-based reconstructions suggest different dates for the onset of the LIA (Figure 3).
FIGURE 3. Changepoint analysis for three northern Eurasian tree-ring density-based warm season temperature reconstructions [13]. a) Changepoints in mean for three warm season temperature reconstructions in 1000-2000 CE detected with a Binary Segmentation method with a minimum segment length of 30 years [14]. b) The same trends are detectable in the posterior means (black lines) when the data are analysed with Bayesian changepoint procedure [15]. c) The reconstructions’ approximate sampling sites (triangles) and spatial domains (shading – the areas where the correlation coefficients (Pearson’s r) between the reconstructions and measured warm season temperatures are > 0.5 over the period 1850–2000) [16].

Still, most temperature reconstructions from the Northern Hemisphere identify a significant change to cooler mean temperatures, as in the examples presented in Figure 3. All indicate that the change to a cooler climate took place well before the 1500s. Thus, the lack of a detectable LIA change in previous analyses [3] may simply reflect the relative brevity of the time-series used.

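The idea behind the changepoint analyses in Figure 3 can be illustrated with a minimal sketch. The example below uses synthetic data, not the published reconstructions, and a single changepoint-in-mean search by least squares rather than the full Binary Segmentation or Bayesian procedures cited above; it does, however, keep the 30-year minimum segment length used in [14].

```python
import numpy as np

def best_changepoint(x: np.ndarray, min_seg: int = 30) -> int:
    """Return the split index that minimizes the within-segment
    sum of squared deviations from each segment's mean."""
    best_i, best_cost = -1, np.inf
    for i in range(min_seg, len(x) - min_seg + 1):
        left, right = x[:i], x[i:]
        cost = ((left - left.mean()) ** 2).sum() + ((right - right.mean()) ** 2).sum()
        if cost < best_cost:
            best_i, best_cost = i, cost
    return best_i

rng = np.random.default_rng(1)
years = np.arange(1000, 2000)

# Synthetic "reconstruction": the mean temperature anomaly drops by
# 0.5 deg C after 1450, with year-to-year noise on top.
anomaly = np.where(years < 1450, 0.0, -0.5) + rng.normal(0, 0.3, years.size)

cp = best_changepoint(anomaly)
print("detected changepoint year:", years[cp])
```

Binary Segmentation applies this same split search recursively to each resulting segment, which is how multiple changepoints (as in Figure 3a) are found.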
The main trigger of the LIA?
 
The heterogeneous spatiotemporal characteristics of the LIA can be primarily explained by the probable trigger of the climatic phase: volcanic aerosol forcing [17]. Large volcanic eruptions can inject sulphur compounds far into the stratosphere, where they are oxidized and become aerosols that absorb and scatter incoming solar radiation. The stratosphere warms, but less radiation reaches the ground, cooling Earth’s surface. Many paleoscientists attribute multi-decadal temperature variability prior to anthropogenic warming – including the onset of the coldest phases of the LIA – to volcanic aerosol forcing [17, 18].
 
One common criticism of the LIA as a concept is that current climate change is not comparable to the LIA. Indeed, the LIA and anthropogenic warming are not comparable climatic phases in terms of the extent or magnitude of the change, partly due to the different drivers of these changes. Whereas the LIA was to a great degree triggered by volcanic forcing, the current climatic change is caused by rising atmospheric greenhouse gas concentrations (Figure 4). The climatic impact of these two forcing mechanisms differs considerably, so that while the onset of the LIA varies between reconstructions, the shift to a warmer climate is identifiable in a rather small temporal window (1900s–1920s, see Figure 3a) [19]. Furthermore, whereas long-term variations in temperature differ considerably prior to industrialization, all climate reconstructions presented here indicate similar long-term trends over the past 150 years (Figure 4).
Picture
FIGURE 4. Global volcanic aerosol forcing [20], mean of global marine CO2 concentrations [21] and long-term (smoothed using 50-year spline) warm season temperature variability from the reconstructions presented in Figure 3 [13] and instrumental temperature measurements between 1860–2018 [22].

Moreover, volcanic aerosol forcing has both direct and indirect impacts on Earth’s climate. Its direct impacts refer to the mechanisms described above. Its indirect impacts are still not fully understood, but they likely contribute to the spatiotemporal heterogeneity of its climatic effects. First, eruption-related cooling is not globally uniform as, for example, oceans cool more slowly than land. Second, feedback mechanisms can prolong and intensify eruption-related cooling in some areas.

One such feedback mechanism is the ice-albedo feedback. Whereas open waters and dark forests absorb a lot of solar radiation and so warm Earth’s surface, white ice and snow reflect far more incoming solar radiation back into space, cooling the surface. During periods of increased volcanic forcing, the extent and duration of Earth's ice and snow cover likely increased because of the cooler conditions, which further lowered temperatures. These feedbacks probably sustained cool surface temperatures locally and regionally long after the direct impact of volcanic aerosol forcing had waned during the LIA [17].

Third, volcanic forcing alters stratospheric circulation as the direct radiative impacts enhance the Earth’s north-south temperature gradient. This can, for example, strengthen westerly winds. Consequently, northernmost Europe can experience winter warming following strong tropical volcanic eruptions, as westerly winds bring temperate and moist marine air masses to the area [23]. Lastly, some of the indirect effects might influence the main modes of climate variability, such as the summer monsoon circulation, El Niño-Southern Oscillation or the Atlantic Multidecadal Oscillation, which can further increase the spatiotemporal differences of the climate effects of volcanic aerosol forcing [24].
 
In summary, because the climate effects of volcanic forcing are spatially and temporally variable, the onset and the coldest phases of the LIA do not have a readily discernible global pattern. Furthermore, because of its indirect dynamical effects, volcanic aerosol forcing can influence summer and winter temperatures differently. This explains why reconstructions with varying response windows and sensitivity to climate variability might identify the LIA period rather differently.
 
Due to the heterogeneity of the LIA, there was no “typical” LIA climate. Historians should therefore study climate during the period using regional reconstructions relevant to their study area and research questions. In some regions, the LIA was characterised by cool temperatures, in others by increased year-to-year weather variability. However, climate is, by definition, the long-term average of weather; commonly, the reference period is at least 30 years. Thus, calculating multidecadal central tendencies from reconstruction data (as well as paying attention to variance and minimum and maximum values) is necessary when defining past climatic regimes. Extreme years cannot be excluded from these calculations. Consequently, drawing attention to possible outliers (i.e. extraordinarily cold years) does not lead us to exaggerate the coldness of average LIA conditions, although some historians have suggested it does. Short-term climate anomalies are part of the prevailing climate.
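The point about multidecadal averages can be made concrete. This Python sketch (with an illustrative synthetic series, not real reconstruction data) computes 30-year running means alongside the extremes of the full series, without excluding outlier years.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic annual temperature anomalies for 300 years, including a few
# extraordinarily cold outlier years (e.g. post-eruption summers).
anomaly = rng.normal(-0.2, 0.4, 300)
anomaly[rng.choice(300, size=5, replace=False)] -= 1.5

window = 30  # a standard climatological reference period

# 30-year running mean: the "climate" behind the year-to-year "weather".
kernel = np.ones(window) / window
running_mean = np.convolve(anomaly, kernel, mode="valid")

# Extreme years stay in the calculation: report spread and extremes
# alongside the central tendency.
print("coldest 30-yr mean anomaly: %.2f" % running_mean.min())
print("coldest / warmest single year: %.2f / %.2f" % (anomaly.min(), anomaly.max()))
```

Note how individual outlier years are far colder than any 30-year mean: keeping them in the average characterises the regime without exaggerating it.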
 
 
The LIA without Periodization
 
This article argues that it is impossible to define universal dates for the onset, termination, characteristics, or coldest phases of the LIA. Instead, these seem to vary depending on the region and season in question. But is this a problem for us historians?
 
All historians are familiar with troublesome definitions of historical “periods” – with problematic “periodizations.” For instance, it is impossible to set a common date for the onset of the early modern period. Yet the term is widely accepted and used. Historians are comfortable defining the period depending on the region in question: for some regions a dynastic change marks the onset, whereas the Protestant Reformation or the discovery of the Americas might be more appropriate for others. Moreover, the main characteristics of the early modern period vary: in some places the period is marked by a decline of serfdom, for example, whereas in others by an increase. We should accept that, similarly, the periodization and characteristics of the LIA varied from region to region.
 
The climate variability of the Earth over the past millennium is still an unsolved puzzle. Paleoclimate research resembles other fields that investigate the past, such as the discipline of history. The science of the LIA is not – nor ever will be – “done.” Just as each new historical study widens our understanding of the human past, each new reconstruction, simulation and paleoclimate research paper widens our understanding of past climatic changes. There are still many open questions concerning the role of volcanic forcing in triggering and sustaining the LIA. Moreover, the possible effects of solar forcing need to be further explored. Therefore, it is essential that historians who are interested in past climates follow the progress of paleoclimatology [25].
 
The LIA sets certain boundaries in late medieval and early modern history. Needless to say, weather events (which are usually the triggers of human calamities) may or may not be typical of prevailing climatic conditions. In the case of famines in LIA Europe, for example, LIA conditions shortened the growing season and cooled its temperatures. This made agricultural production and related food systems more vulnerable to disturbances than they were during warmer climatic phases, especially on the northern margins of agriculture. These disturbances could include weather events like frosts destroying the harvest, violent conflicts damaging grain yields, or entitlement failures due to the hoarding of foodstuffs. Thus, the LIA, like any past period in history, cannot be imagined as the sole cause of past events, like famines. Instead, it gives an environmental context to these events in history.
 
Of course, the term “Little Ice Age” is not perfect. It is, for example, not correct in a geophysical sense, as the period took place during an interglacial. However, the term is short, handy, and already widely used. It is practical in science communication, such as interdisciplinary research, teaching, and public outreach. As long as we remember the heterogeneous characteristics of the period, acknowledging the Little Ice Age and exploring its possible entanglements with the human past may yield further insights into complex climate-society relationships. These can provide perspectives on both sides of the relationship: the agency of climate in human history and the agency of humans in adapting to changing climate and coping with extreme events. Such insights have arguably never been more important than at present, in the context of ongoing anthropogenic climate change and its rising toll on our societies.
 
 
References and notes:
[1] Matthes, F. E. (1939). Report of committee on glaciers, April 1939. Eos, Transactions American Geophysical Union, 20(4), 518-523.
[2] Kelly, M., & Ó Gráda, C. (2013). The waning of the Little Ice Age: climate change in early modern Europe. Journal of Interdisciplinary History, 44(3), 301-325.
[3] Kelly, M., & Gráda, C. Ó. (2014). Change points and temporal dependence in reconstructions of annual temperature: did Europe experience a little Ice Age? The Annals of Applied Statistics, 8(3), 1372-1394.
[4] Neukom, R., Steiger, N., Gómez-Navarro, J. J., Wang, J., & Werner, J. P. (2019). No evidence for globally coherent warm and cold periods over the preindustrial Common Era. Nature, 571(7766), 550-554.
[5] White, S. (2013). The real little ice age. Journal of Interdisciplinary History, 44(3), 327-352.
[6] Büntgen, U., & Hellmann, L. (2013). The Little Ice Age in scientific perspective: cold spells and caveats. Journal of Interdisciplinary History, 44(3), 353-368.
[7] Brönnimann, S., Pfister, C. & White, S. (2018). Archives of Nature and Archives of Societies. In White, S., Pfister, C., & Mauelshagen, F. (eds.), The Palgrave Handbook of Climate History. Palgrave Macmillan, London, 27-36.
[8] The PAGES (Past Global Changes) Consortium (2013). Continental-scale temperature variability during the past two millennia. Nature Geoscience, 6(5), 339-346.
[9] Charpentier Ljungqvist, F., Krusic, P. J., Brattström, G., & Sundqvist, H. S. (2012). Northern Hemisphere temperature patterns in the last 12 centuries. Climate of the Past, 8, 227-249. Note that some of the data locations on the map have been moved slightly to illustrate all the proxy material, including the overlapping ones.
[10] Pfister, C., Camenisch, C., & Dobrovolný, P. (2018). Analysis and Interpretation: Temperature and Precipitation Indices. In White, S., Pfister, C., & Mauelshagen, F. (eds.), The Palgrave Handbook of Climate History. Palgrave Macmillan, London, 115-129.
[11] Dobrovolný, P. (2018). Analysis and Interpretation: Calibration-Verification. In White, S., Pfister, C., & Mauelshagen, F. (eds.), The Palgrave Handbook of Climate History. Palgrave Macmillan, London, 107-113.
[12] Van Engelen, A. F., Buisman, J., & IJnsen, F. (2001). A millennium of weather, winds and water in the low countries. In History and Climate. Springer, Boston, MA, 101-124. The time-series were downloaded from the KNMI Climate Explorer.
[13] Matskovsky, V. V., & Helama, S. (2014). Testing long-term summer temperature reconstruction based on maximum density chronologies obtained by reanalysis of tree-ring data sets from northernmost Sweden and Finland. Climate of the Past, 10(4), 1473-1487; Helama, S., Vartiainen, M., Holopainen, J., Mäkelä, H. M., Kolström, T., & Meriläinen, J. (2014). A palaeotemperature record for the Finnish Lakeland based on microdensitometric variations in tree rings. Geochronometria, 41(3), 265-277; Schneider, L., Smerdon, J. E., Büntgen, U., Wilson, R. J., Myglan, V. S., Kirdyanov, A. V., & Esper, J. (2015). Revising midlatitude summer temperatures back to AD 600 based on a wood density network. Geophysical Research Letters, 42(11), 4556-4562. The Polar Urals reconstruction data has been downloaded from a dataset provided by Wilson, R., Anchukaitis, K., Briffa, K. R., Büntgen, U., Cook, E., D'arrigo, R., ... & Hegerl, G. (2016). Last millennium northern hemisphere summer temperatures from tree rings: Part I: The long term context. Quaternary Science Reviews, 134, 1-18.
[14] Analysis was performed with the R changepoint package (2.2.2) as implemented by Killick, R., & Eckley, I. (2014). changepoint: An R package for changepoint analysis. Journal of Statistical Software, 58(3), 1-19.
[15] Analysis was performed with the R bcp package (4.0.3) as implemented by Erdman, C., & Emerson, J. W. (2007). bcp: an R package for performing a Bayesian analysis of change point problems. Journal of Statistical Software, 23(3), 1-13.
[16] Warm season: June–August for the Northern Fennoscandia and Polar Urals, and April–September for the South Finland reconstructions. Correlation analysis performed with the Climate Explorer using the HadCRUT4/HadSST4 filled-in T2m/SST field data (Cowtan, K., & Way, R. G. (2014). Coverage bias in the HadCRUT4 temperature series and its impact on recent temperature trends. Quarterly Journal of the Royal Meteorological Society, 140(683), 1935-1944).
[17] Miller, G. H., Geirsdóttir, Á., Zhong, Y., Larsen, D. J., Otto‐Bliesner, B. L., Holland, M. M., ... & Anderson, C. (2012). Abrupt onset of the Little Ice Age triggered by volcanism and sustained by sea‐ice/ocean feedbacks. Geophysical Research Letters, 39(2).
[18] Neukom, R., Barboza, L. A., Erb, M. P., Shi, F., Emile-Geay, J., Evans, M. N., ... & Schurer, A. (2019). Consistent multi-decadal variability in global temperature reconstructions and simulations over the Common Era. Nature geoscience, 12(8), 643-649.
[19] In addition, the early (pre-1950s) warming might also be attributable, to some degree, to the absence of volcanic aerosol forcing. Note also that, depending on the reconstruction materials and methods, better proxy sample replication or overlapping instrumental data may plausibly contribute to the similarity of temperature trends over the past 150 years.
[20] Sigl, M., Winstrup, M., McConnell, J. R., Welten, K. C., Plunkett, G., Ludlow, F., ... & Fischer, H. (2015). Timing and climate forcing of volcanic eruptions for the past 2,500 years. Nature, 523(7562), 543.
[21] Ballantyne, A. P., Alden, C. B., Miller, J. B., Tans, P. P., & White, J. W. C. (2012). Increase in observed net carbon dioxide uptake by land and oceans during the past 50 years. Nature, 488(7409), 70. Time-series downloaded from the KNMI Climate Explorer.
[22] CRUTEM (4.6) T2m anomalies averaged over 20-70E and 60-70N, https://www.metoffice.gov.uk/hadobs/crutem4/; Time-series downloaded from the KNMI Climate Explorer.
[23] Fischer, E. M., Luterbacher, J., Zorita, E., Tett, S. F. B., Casty, C., & Wanner, H. (2007). European climate response to tropical volcanic eruptions over the last half millennium. Geophysical research letters, 34(5).
[24] A good summary of the climatic effects of volcanic aerosol forcing can be found, for example, in Brönnimann, S. (2015). Climatic changes since 1700. Springer, Cham (especially pp. 123–135).
[25] Consequently, it is worth noting that although the LIA cooling is currently attributed to volcanic forcing, our understanding of the drivers of the period may advance in the future. Likewise, our picture of the Southern Hemisphere extent of the cool period may improve as new proxy data become available.
 
Special thanks to Dagomar Degroot and Fredrik Charpentier Ljungqvist for the inspiring discussions and comments which helped shape this text.

The Legal Structure of the Paris Agreement – Flexible and Fit or Fragile and Fading?

3/26/2020

 
Prof. Lisa Benjamin, Lewis & Clark College of Law
State parties (light purple), signatories (yellow), and parties covered by EU ratification (dark purple) in the Paris Agreement. L.tak, Wikipedia.

As we enter the 2020s, the backdrop of the climate crisis remains grim. Recent scientific reports tell us that global greenhouse gas emissions are increasing, not declining as they need to. Emissions increased by 1.5% in 2017 and 2% in 2018, and were anticipated to increase by 0.6% in 2019.[i] While emissions are decreasing too slowly in industrialized countries, they are increasing all too quickly in large developing countries. Despite this increase, per capita emissions in the United States and Europe remain 5-20-fold higher than in China and India,[ii] and a significant share of emissions in developing countries is attributable to the manufacture and production of goods consumed in developed countries.

The window to achieve the Paris Agreement’s goal of limiting the mean global temperature increase to well below 2°C above pre-industrial averages, with an aspirational limit of 1.5°C, is closing quickly. Scientists are increasingly warning of the catastrophic threats of climate change.[iii] Lack of progress by the world’s governments places vulnerable countries such as small island developing states, as well as vulnerable communities within developing and developed countries, at increased risk.[iv] As the United States recently submitted its official notice to withdraw from the Paris Agreement, questions have been raised about the structural integrity of the agreement, and whether it can withstand these countervailing pressures.
Historical carbon emissions per capita. Map by Vinny Burgoo, data from World Resources Institute.

The Paris Agreement is an internationally legally binding treaty, but its provisions contain sophisticated and varying levels of ‘bindingness’. In terms of emissions reductions, parties have legally binding procedural obligations to submit nationally determined contributions (or NDCs) every five years, but the level of emissions contained within those NDCs are “nationally determined” – i.e. up to each party to individually decide based on their national circumstances. In this sense, the Paris Agreement contains binding obligations of conduct, but not of result. The treaty was intentionally structured with this type of “bottom up”, flexible architecture as a result of the previous history of climate treaties.

The UNFCCC, agreed to in 1992, has almost universal participation (including, for now, the United States). As a framework, or umbrella, treaty, it has few specific binding provisions. The ultimate objective of the treaty (and any related agreements, which would include the Paris Agreement) is to achieve stabilization of greenhouse gas concentrations at a level that would avoid dangerous anthropogenic interference with the climate system.[v] The definition of ‘dangerous interference’ was never fully articulated, and so the global temperature goals in the Paris Agreement of well below 2°C and 1.5°C flesh out this provision.

The objectives of the UNFCCC were to be achieved on the basis of equity, in accordance with common but differentiated responsibilities, and the respective capabilities of the parties. Developed country parties were to take the lead in combating climate change.[vi] These principles were based upon the historical responsibility of developed countries for greenhouse gas emissions, and the acknowledgement that developing countries had fewer financial, technological, and human resources to tackle climate change, as well as high levels of poverty.

With these principles in mind, in 1997 the Parties to the UNFCCC agreed to the Kyoto Protocol in order to provide for more specific, binding emission reduction targets. Only developed country parties took on binding emissions reduction targets, which were housed in Annex B of the Protocol; because these countries were listed in Annex I of the UNFCCC, they became known as ‘Annex I parties’. The Protocol had automatically applicable consequences for breach of the targets, with a robust compliance mechanism. The United States never ratified the Protocol, and a number of parties such as Japan and Canada either left the Protocol or have not ratified its second commitment period, for a number of reasons.

As emissions from developing countries rose significantly over the years, and existing parties such as Canada were not on target to reach their Protocol commitments, it was clear that a replacement to the Protocol was needed.  The Paris Agreement’s flexible approach to NDCs was considered to be the most politically feasible global solution – it would attract almost universal participation from states (including developing states) and provide for cycles by which parties could increase the ambition of their emissions reductions over time.

Nationally determined contributions are submitted every five years. The initial NDCs were submitted in 2015, and the next round is due in 2020. In 2015, it was understood that these initial NDCs would not cumulatively be sufficient to achieve the temperature goals of the Paris Agreement, which is why a system of progressive, cyclical submissions was anticipated. Current NDCs contain both conditional and unconditional targets. For example, a number of developing countries made the emission reduction targets in their NDCs conditional upon receiving adequate levels of climate finance and of technical and capacity-building support. Even assuming all initial NDCs were fully implemented, Climate Action Tracker estimated that global temperature increases would lie between 2.7 and 3.5°C – far beyond what the Paris Agreement aims for.

There are a number of provisions in the Paris Agreement that anticipate increased ambition in countries’ activities. Article 4 states that each successive NDC should represent a progression over time and reflect the party’s highest possible ambition. Parties can also adjust their NDCs to enhance their level of ambition. Together, these provisions create a strong normative expectation that parties would violate the spirit of the Paris Agreement if they downgraded the ambition of their NDCs.[vii]

But as there are no binding obligations of result in the Paris Agreement (only of conduct), the compliance mechanism in Article 15 has a very narrow mandate – it is designed to be facilitative, transparent, non-punitive, and non-adversarial. The mechanism does not focus on individual country results unless significant and persistent inconsistencies of information are found, and then only with the consent of the party concerned.[viii] The mechanism does have a systemic review function, and provides for consequences such as engaging in a dialogue with parties, providing them with assistance or recommending an action plan.

There are other procedural provisions in the Paris Agreement to keep parties on track, such as the enhanced transparency framework, which is designed to ensure that parties’ processes and targets are transparent.[ix] The transparency provisions are closely tied to the NDCs: for example, each party must identify the indicators it has selected to track progress towards the implementation and achievement of its NDC.[x]

A Global Stocktake also takes place every five years, interspersed between the NDC cycles. The first Global Stocktake will take place in 2023, and will assess the 2020 NDC cycle. The 2028 Global Stocktake will assess the 2025 NDC cycle, and so on. The 24th COP in Katowice provided more detailed rules on what the Global Stocktake will look like. In the future, it will be a combination of a series of synthesis reports, and a number of high-level technical dialogues examining the output of these reports.

The second round of NDCs is due in 2020, but countries are already falling behind their existing NDC targets. The activities of some countries, such as Russia, Saudi Arabia and the United States, are ‘critically insufficient’, meaning they would lead to a 4°C+ world.[xi] The activities of the UK, EU, Canada, China and Australia are ‘insufficient’, leading to a 3°C+ world. A few parties, such as Costa Rica, India, Ethiopia and the Philippines, are on track, with 2°C-compatible emissions trajectories. Because global emissions are still rising, reductions will have to be steep to stay within the Paris Agreement’s global temperature goals: it is anticipated that emission reductions of approximately 8% per year are required from every country until the end of the century.

According to the withdrawal provisions of the Paris Agreement, the withdrawal of the United States will take effect on 4th November 2020, one day after the US Presidential election. Should the government change, the United States could re-enter the Paris Agreement relatively easily and quickly. If it does not, the lack of US leadership on climate change could become a drag on other parties’ climate actions and ambitions in the next decade. The 2020s will be a testing time for the Paris Agreement, which, as Noah Sachs argues, may face breakdown (parties falling short of their commitments) or, worse, breakup, as parties withdraw and the Agreement collapses.[1]

The Paris Agreement was designed as a flexible yet durable global agreement for the coming decades, and so has no specified end date. It allows for and incentivizes increased ambition by its parties, yet does not legally require it, as there was no political will at the time for such an agreement. The provisions of the Paris Agreement remain relevant, but they have always relied on domestic political will to be effective.

As climate protests around the world gain pace, and extreme weather events escalate in frequency and severity in the developed and developing world alike, the significant question remains whether that political will will be forthcoming in sufficient measure, and soon enough, to avoid catastrophic climate change.

Lisa Benjamin is an Assistant Professor at Lewis & Clark College of Law and a member of the UNFCCC Compliance Committee (Facilitative Branch). Her views do not necessarily reflect those of the UNFCCC Compliance Committee.

[1] Noah Sachs, “The Paris Agreement in the 2020s: Breakdown or Breakup?” Ecology Law Quarterly 46(1) (2019), forthcoming.

[i] RB Jackson et al., “Persistent fossil fuel growth threatens the Paris Agreement and planetary health,” Environmental Research Letters 14(12) (2019): 1-7.

[ii] Ibid.

[iii] William J. Ripple et al., “World Scientists’ Warning of a Climate Emergency,” BioScience (2019): 1-5.

[iv] Lisa Benjamin, Sara Seck and Meinhard Doelle, “Climate Change, Poverty and Human Rights: An Emergency Without Precedent” The Conversation (4th September 2019) https://theconversation.com/climate-change-poverty-and-human-rights-an-emergency-without-precedent-120396.

[v] Article 2, UNFCCC, UNTS 1771.

[vi] Ibid, Article 3(1).

[vii] Lavanya Rajamani and Jutta Brunnée, “The Legality of Downgrading Nationally Determined Contributions under the Paris Agreement: Lessons from the US Disengagement” Journal of Environmental Law Vol 29 (2017): 537-551.

[viii] FCCC/PA/CMA/2018/3/Add.2, Decision 20/CMA.1, paras 2 and 22(b).

[ix] Paris Agreement, Article 13.

[x] FCCC/CP/2018/L.23 para 65.

[xi] See Climate Action Tracker at https://climateactiontracker.org/countries/.

Are Surveyors' Maps and Journals an Untapped Source for Climate Scientists?

12/3/2019

 
Harriet Mercer, University of Oxford
Picture
A section of a map of New South Wales based off the work of Oxley and King and showing how vegetation marks were written on to the maps. Joseph Cross, "Map of New South Wales, Embellished with Views of the Harbour of Port Jackson" (London: J. Cross, 1826) from the collections of the State Library of New South Wales.

In early nineteenth-century Australia, surveyors were given a near impossible task by colonial authorities. They were asked to use their expeditions to ascertain "the general nature of the climate, as to heat, cold, moisture, winds, rains, periodical seasons." [1] Meeting this directive was difficult because most surveyors spent no more than a few days or weeks in any one location, in country that was otherwise unfamiliar to them. In these situations, precision instruments such as thermometers and barometers were of limited utility: they could only offer a reading at the time of the visit, not for any other time of year, or for other years when atmospheric conditions might be quite different.
 
Plants provided one solution to the dilemma.
 
In the opening decades of the nineteenth century, Aboriginal Australians helped surveyors to decipher some of the relationships between plants and atmosphere in Australia. This article shows how surveyors were taught that the distribution of certain tree species could help them understand the way rainfall patterns varied over space. It also shows how surveyors learnt to use the watermarks on trees and debris lodged in branches as indications of the way rainfall patterns varied over time. With the help of Aboriginal Australians, surveyors were practicing a sort of historical climatology: they were attempting to understand past atmospheric changes in a given area using indirect sources. Can the evidence contained in surveyors’ journals and maps help the historical climatologists and climate scientists of today? This article argues that it can.
 
When Thomas Mitchell, the surveyor-general of New South Wales, was tasked with charting the two major inland river systems of Australia’s south east, his Wiradjuri guides helped him understand the association of particular plants with particular atmospheric and hydrological conditions. Mitchell was taught that the yarra tree (known today as the river red gum, Eucalyptus camaldulensis) indicated the presence of a river or lake. Mitchell scanned the horizon for these tall trees when he wanted to locate a place of permanent waters:
The yarra is certainly a pleasing object in various respects; its shining bark and lofty height inform the traveller of a distant probability of water […] and being visible over all other trees it usually marks the course of rivers so well that, in travelling along the Darling and Lachlan [rivers], I could with ease trace the general course of the river without approaching its banks. [2]
Mitchell’s Wiradjuri guides showed him that the presence of goborro trees, by contrast, indicated a place of transient waters. Mitchell learnt that the goborro tree (most likely the plant known today as black box or Eucalyptus largiflorens) flourished on plains subject to temporary inundations rather than on the banks of more permanent waters. “These peculiarities,” Mitchell wrote, “we ascertained only after examining many a hopeless hollow where the goborro grew by itself; nor [sic] until I had found my sable guides eagerly scanning the yarra from afar when in search of water, and condemning any distant view of goborro trees as hopeless during that dry season.” [3] But the presence of particular trees did more than give surveyors an indication of how rain fell and flowed over space in inland Australia.
 
Flood marks on trees such as the yarra also gave surveyors an indication of how much rainfall an area had received in the past. Some of Mitchell’s travels around Australia’s inland river systems were during the drier than average year of 1835, which meant that he did not see how the country looked when it was well-watered. [4] But Mitchell often used the water stains on river red gums and other trees to make judgements about the rainfall variability of an area. Near the inland Lachlan River, for example, he noticed “a tract extending southward from the river for about three miles, on which grew yarra trees bearing the marks of occasional floods to the height of a foot above the common surface.” [5]
Picture
An illustration from Mitchell’s account of his 1836 expedition showing a ‘flood-branch of the Murray, with the scenery common on its bank’ including the yarra or river red gum tree. Thomas Mitchell, "Three Expeditions into the Interior of Eastern Australia; with descriptions of the recently explored region of Australia Felix, and of the present colony of New South Wales, Volume 2."

Other surveyors working in Australia such as John Oxley and Charles Sturt also used the flood marks left on trees to understand the rainfall variability that a region could experience. Travelling through inland New South Wales northwest of Sydney in 1818, Oxley did not think that the Castlereagh River had been flooded for a long period because there were “no marks of wreck or rubbish on the trees or banks” of the river. [6] Cedar trees along the Hastings River further east, by contrast, bore “marks of flood exceeding twenty feet, but confined to the bed of the river.” [7] Travelling southwest of Sydney about ten years later, Charles Sturt reported “marks of recent flood on the trees, to the height of seven feet.” [8]
 
Surveyors’ atmospheric observations were not, moreover, confined to their journals. They were also written on their maps of Australia. Sometimes, this data was indirectly transcribed and would only be decipherable when read together with the surveyors’ written accounts. Take the example of an 1826 map of New South Wales, which was based on the work of two surveyors. In addition to the names of settlements, rivers and mountains, the map had descriptions of prevailing plant life written on to its surface. [9] The presence of plants such as “box” (or goborro as Mitchell called it) on the map indicated that an area was subject to flood as this was the plant that liked to wet its feet in temporary inundations.
 
More often, the atmospheric information plants provided was directly recorded on to surveyors’ maps. In 1822, for instance, Oxley produced a map which included the Lachlan River, a waterway he had visited five years earlier in 1817. By the sides of the river, Oxley wrote over the map “Low Marshy Country devoid of Hills and occasionally overflowed, perhaps to the extent of 30 miles on each side of the River.” Near other rivers on the map, Oxley also added earth-atmosphere descriptions such as “marks of the flood 30 feet above the present surface of the River” and “Marks of the rise of the flood about 16 feet.” [10] Surveyors’ maps were, then, not just topographical representations of the country – they were also atmospheric representations.
 
The atmospheric information contained in these maps and journals has been overlooked by historians and historical climatologists. This is in part because the maps and journals were created in a period when the use of precision instruments such as thermometers was becoming increasingly widespread and when methods for observing these instruments were being standardised. But as this article (and my doctoral research) shows, instruments were not always newcomers’ preferred method for accessing information about the atmosphere. Whereas instruments offered surveyors isolated snapshots of the atmosphere, plants revealed annual and interannual trends. Like historical climatologists and paleoclimatologists today, nineteenth-century surveyors yearned to know the atmospheric history of a place.
 
This Australian case study suggests that there are at least three reasons why survey maps and journals deserve the attention of contemporary climate scientists. First, these sources could help historical climatologists overcome the bias toward reconstructing past temperatures over other atmospheric variables. “Most of the climate reconstructions over say the last 1000 years,” Brázdil et al. argued in 2005, "focus on temperature." [11] More recently, the editors of the 2018 Palgrave Handbook of Climate History have argued that precipitation is a "field of research calling for more effort by climate historians." [12] One of the reasons that past precipitation patterns have received less attention than past temperature patterns is because the latter tend to be better represented in the sources. Survey journals and maps can help address this bias in the sources.
 
Second, these sources not only offer information on an under-represented atmospheric variable. They also offer information on under-represented regions. Often, when nineteenth century rain-gauge measurements were taken, they were recorded in urban centers and port cities. This is a problem because precipitation rates and patterns are “highly localized” phenomena. [13] The amount of rainfall one area received could be different for an area just a few kilometers away. Attaining more geographically dispersed precipitation information is therefore crucial to producing more accurate and geographically diverse reconstructions. Surveyors’ records offer information about precipitation patterns for areas where no rain-gauge records were kept in nineteenth-century Australia.
Picture
River red gums growing by the Murray River. Elizabeth Donoghue/Flickr, CC BY-NC-SA.

Finally, survey maps and journals could provide scientists with information about how particular plants are responding to the earth’s changing atmosphere. The river red gums that are frequently mentioned in Mitchell’s journals have been the subject of recent research into the water needs of flood-plain trees. These trees, researchers have shown, are crucial to the health of flood-plain eco-systems: “Everything relies on the red gum to maintain health.” [14] Yet while it is well known that river red gums have adapted strategies for surviving long periods of drought, exactly how long these trees can go without water is less certain. Dr. Tanya Doody’s research, for example, indicates that in conditions of below average rainfall, the trees should not go more than seven years without being flooded. [15] The records of surveyors could offer an additional source of data for such important research projects.
 
In the case of Australia, survey maps and journals are accessible sources. The journals referred to in this article are all digitized and available online without the need for payment or institutional affiliation. Public institutions such as the National Library of Australia and the State Library of New South Wales have also digitized numerous survey maps in high resolution, which allows researchers to zoom in on the atmospheric details that surveyors marked on their charts. Recognizing the valuable atmospheric data contained in these historical sources could prompt other institutions to digitize more survey journals and maps. Such a project promises to help climate scientists in their quest to reconstruct past climates in order to better understand future atmospheric changes and the effects of those changes on plant life and river systems. It also promises to illuminate the way past efforts to understand the atmospheric patterns in Australia were sometimes joint newcomer-Indigenous endeavors. 

Harriet Mercer is a PhD candidate in the Centre for Global History at the University of Oxford. She is writing a history of climate knowledge production in Australia in the nineteenth century, and exploring how the Anthropocene is changing the way historians write and research history.

[1] See for example John Oxley, Journals of two expeditions into the interior of New South Wales undertaken by order of the British Government in the years 1817 – 1818, http://setis.library.usyd.edu.au/ozlit/pdf/p00066.pdf; Phillip Parker King, Narrative of a Survey of the Intertropical and Western Coasts of Australia Performed Between the Years 1818 and 1822, Volume 1, http://gutenberg.net.au/ebooks/e00027.html; Charles Sturt, Two Expeditions into the Interior of Southern Australia, during the years 1828, 1829, 1830, and 1831, http://www.gutenberg.org/files/4330/4330-h/4330-h.htm.

[2] Thomas Mitchell, Three Expeditions into the Interior of Eastern Australia; with descriptions of the recently explored region of Australia Felix, and of the present colony of New South Wales, Volume 2, http://gutenberg.net.au/ebooks/e00036.html.

[3] Mitchell, Three Expeditions, Volume 2, http://gutenberg.net.au/ebooks/e00036.html.

[4] Linden Ashcroft, Joëlle Gergis and David John Karoly, “A historical climate dataset for southeastern Australia, 1788 – 1859,” Geoscience Data vol. 1, no. 2 (2014) p. 172.

[5] Mitchell, Three Expeditions, Volume 2, http://gutenberg.net.au/ebooks/e00036.html.

[6] Oxley, Journals of two expeditions, http://setis.library.usyd.edu.au/ozlit/pdf/p00066.pdf.

[7] Oxley, Journals of two expeditions, http://setis.library.usyd.edu.au/ozlit/pdf/p00066.pdf.

[8] Sturt, Two Expeditions, http://www.gutenberg.org/files/4330/4330-h/4330-h.htm.

[9] Joseph Cross, Map of New South Wales, Embellished with Views of the Harbour of Port Jackson (London: J. Cross, 1826).

[10] John Oxley, A Chart of the Interior of New South Wales (London: A. Arrowsmith, 1822).

[11] Rudolf Brázdil et al., “Historical Climatology in Europe – The State of the Art,” Climatic Change vol. 70 (2005), pp. 386 – 387.

[12] Christian Pfister et al. “General Introduction: Weather, Climate, and Human History,” in The Palgrave Handbook of Climate History eds. Christian Pfister, Sam White and Franz Mauelshagen (London: Palgrave Macmillan, 2018), p. 12.

[13] Pfister et al. “General Introduction: Weather, Climate, and Human History,” p. 12.

[14] Mary O’Callaghan, “The water needs of floodplain trees – the inside view,” ECOS, 9 April 2018, https://ecos.csiro.au/flood-plain-river-red-gums.

[15] O’Callaghan, “The water needs of floodplain trees”.

Remembering Disaster: How Qing Dynasty Records Reveal Connections Between Memory and Environment

8/4/2019

 
Meghan Michel, Georgetown University
Picture
“Ten Thousand Miles Along the Yellow River,” Unidentified Chinese artist, datable to 1690-1722.

Memory is one of the most powerful parts of the human psyche. It can help us make sense of the world around us, but it can also cloud our vision. Over long timeframes, memory can become embedded in a culture in ways that are difficult to comprehend. One of the complex effects of this sort of cultural memory may be the development of a culture of resilience in the face of extreme climatic events. Long-term environmental memories might help a community respond to otherwise destructive climate changes in a way that allows it to survive, recover, and sometimes even thrive. Given the increasing impact of extreme weather events due to anthropogenic climate change, it is especially important right now to understand how memory can contribute to creating a more resilient culture.

In order to make sense of the connections between memory, culture, and climate, this article considers how memory might have played a role in the interpretation of extreme weather events in Qing dynasty China (1644–1912). A look at Qing dynasty records suggests that long-term environmental memory could indeed help cultivate a culture of resiliency, while at the same time skewing perceptions of a disaster’s severity. More than anything, this look into the potential roles of memory provides a new perspective on our current interpretation of environmental changes and our cultural capacity for adaptation.

Researchers generally agree that we actively use our memory when processing our surroundings, including weather.[1] Imagine you live in a town that has faced extreme flooding in recent years. When you experience a heavy rain, you will probably remember the floods. However, the way we use memory to interpret the environment may not be so simple as providing the background against which we think about weather or climate change. In her work on biological control programs, scholar Karen Middleton describes a feedback loop by which present events reshape how the past is remembered, at the same time as the interpretation of past events shapes the understanding of present conditions.[2] So just as memories of a disastrous flood would shape how villagers interpret a heavy rain in the present, an absence of present flooding could in turn reshape their memories of that past disaster. In our hypothetical water-logged village, perhaps the current rains are locally just as strong as the rains during a past flood. Yet because those rains are not accompanied by a flood, villagers might mistakenly remember the first flood as having had more extreme rains.

There is also some evidence that memory can contribute to long-term integration of climate awareness into human societies. Archaeologist Toby Pillat has offered the idea that daily interactions with weather can create cultural norms that dictate how future people might interact with their climate.[3] The cultural norms that Pillat refers to may even develop into a culture of resiliency, as described by environmental historian Adam Sundberg in his work on the infamous Christmas Flood of 1717. Sundberg describes a process of “disaster-induced learning,” in which repeated incidents of disaster lead to long-term “cultures of coping,” a gradualist theory that he reports as currently trending in the field of disaster history.[4]

Occasionally, this sort of cultural learning about climate may take the form of a clearer understanding of weather events over time. For example, environmental historian Dagomar Degroot found that “the human consequences of the Little Ice Age…prompted Dutch citizens to think comparatively and accurately about weather across long timeframes.”[5] Yet long-term environmental learning may also lead to a skewed perception of climatic events due to increased societal preparedness. In his study of lower Austrian floods in 1572-3, scholar Christian Rohr makes the argument that “due to the preparedness of the population, most of the floods were not perceived as disasters.”[6] While in some cases, memory might lead to more accurate interpretations of climate, in others its concrete effects in establishing adaptive cultures may have the opposite impact – which in turn could influence the long-term understanding of climate in a society.
Picture
The North China Plain includes the lower Yellow River and its tributaries. Alanmak Alan Mak, Wikipedia.

So how might memory have played a role in Qing dynasty cultural understandings of, and responses to, climate change? To answer this question, we will look at the North China Plain (NCP), defined here as the modern-day provinces and municipalities of Beijing, Tianjin, Hebei, Shandong, Henan, Anhui, and Jiangsu. Such a wide definition of the area is useful in that it allows for more data, and therefore clearer identification of patterns; it is reasonable in that precipitation extremes are fairly uniform across the NCP.
 
The Qing dynasty was not only a time of distinct dynastic change in China; it also overlapped with an era of widespread, sometimes global, climate change that is often termed the Little Ice Age (LIA). The LIA is generally understood to be a period of cooling that in many places reached its coldest phase between the fifteenth and eighteenth centuries.[7] While there was a broad global trend of cooling, the LIA had different effects across different areas over variable time scales. In China, the LIA can be split into three stages: a cool, dry early period; a warm, dry middle period; and a wet, cold late period.[8] These changes are thought to be related to changes in both the East Asian Summer Monsoon (EASM) and the El Niño-Southern Oscillation (ENSO) cycle.[9] The NCP in particular experiences highly variable precipitation patterns due to the strong effect of the EASM in the region.[10] Across the Qing dynasty, the region suffered many environmental disasters, including extreme floods and droughts.

During the Qing dynasty, gazetteers called fangzhi 方志 were systematically written every day. Official chronicles were mandated by the government; many more local gazetteers were kept by various scholars or officials, and organized by government historians over the course of the Qing dynasty. These primary source texts contain detailed information about the weather. Because they were written systematically, whether or not a chronicle mentions a disaster, like the floods and droughts we will examine here, is more likely to be a result of the recorder’s interpretation of the weather than, for example, the recorder’s desire or freedom to write. Therefore, a comparison of how often a disaster was documented with the actual occurrence of extreme events can be used as a way to guess at the role of memory and culture in the interpretation of weather. The recently published Reconstructed East Asian Climate Historical Encoded Series (REACHES) database uses the fangzhi to create data points that represent weather events, allowing us to build a timeline for how often a drought or flood disaster was recorded throughout the Qing dynasty.

In order to create a timeline for when droughts and floods actually occurred during the Qing dynasty in the NCP, we can look at the reconstruction of extreme weather events from the work of a team of Chinese geographers led by Zheng Jingyun. Zheng and his co-authors have used a wide range of primary text sources, then verified the information in them with weather measurements compiled using scientific instruments. They also used statistical analysis to account for the fact that the number of textual sources increases over time, due to the higher likelihood that more recent records could be preserved. While it would be ideal to have more scientific data to create our timeline, such as information from tree rings or pollen records, the reality is that most climate history for this region and time relies on historic archives, probably due to the abundance of primary source texts from China.

The methods used by Zheng and his team make their results some of the best currently available, and the most useful for this comparison. They look specifically at extreme droughts and floods, which they define as periods lasting more than three years in which precipitation was more than one and a half times above or below the average level. Because they consider both severity and time scale, when they note the occurrence of a disaster there should, in theory, be fangzhi records of that disaster from across the NCP.

Building these parallel timelines of documentation and occurrence for both droughts and floods reveals curious correlations and discrepancies. First, the highest numbers of fangzhi records of flood and drought do not always correlate with the frequency and severity of disasters. For example, two of the most extreme events identified by Zheng and his co-authors in the NCP were a decade-long drought from 1634 to 1644 and another calamitous drought from 1719 to 1723.[11] Yet there are several points in our timeline of documentation where the number of drought records is higher than in those years of especially extreme drought. While the frequency of fangzhi flood documentation lines up somewhat better with the occurrence of extreme floods, there are still moments when we see a higher number of flood records in years when extreme flooding did not occur.

Disaster records therefore often did not grow more common during, or even immediately after, many of the worst disasters. Usually the quantity of fangzhi records surged only a decade or more after a disaster. In fact, the years immediately following extreme weather events generally had fewer records of disaster. These patterns may suggest that when a disaster was ongoing, it was harder to keep records like local fangzhi as people struggled to survive and rebuild. They could also indicate that in the short term, the memory of a disaster gave a more realistic sense of what constituted that sort of event, and led to more accurate reporting. Over time, however, this accuracy of memory may have faded, and cultural memory could instead have encouraged a heightened sensitivity towards recording disaster.
Left: Reconstruction of the frequency of fangzhi drought records, 1645-1795, overlaid in orange with the extreme drought events identified by Zheng et al. in 1634-1644 and 1719-1723. Right: Reconstruction of the frequency of fangzhi flood records, 1646-1806, overlaid in orange with the extreme flood events identified by Zheng et al. in 1650-60, 1730, and 1750-60.

Looking more broadly, it seems that there is overall more documentation of disaster earlier in Qing history. For both droughts and floods, moving forward in time there is a moderate trend from more to fewer fangzhi records of either type of extreme event. Moreover, the amount of variation in disaster documentation frequency from year to year appears to be higher earlier in history. There also seems to be a slight trend over time towards more stability in the amount of records of both floods and droughts.

These broad patterns of decreasing records and increasingly stable documentation frequency may suggest either greater or lesser long-term accuracy in the memory of disaster severity, depending on whether one emphasizes periods of extreme events or of non-disaster. They might also point to the development of a culture of resiliency: if the people of the NCP became better equipped to deal with these types of events over time, they may have been less inclined to interpret the weather as a disaster, even when it was as extreme as in previous decades.

Clearly, there is no simple relationship between memory and environmental disasters. It should also be noted that these potential patterns and connections rest on a simple visual comparison of timelines; that lack of precision itself shows how much potential this area holds for future research. Chinese climate history scholarship has thus far often used statistical significance testing to propose correlative relationships between environmental changes and societal reactions. [12] Such quantitative research is an ideal starting point for exploring more nuanced aspects of environmental history: it allows engagement with scholarship like the ideas about memory explored here and, in this case, offers new perspectives on the possible links between culture and environmental extremes.
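As a purely hypothetical illustration of the kind of correlative testing this scholarship employs, the sketch below computes a Pearson correlation between an invented annual series of disaster-record counts and an indicator of independently reconstructed extreme years. The data and variable names are made up for demonstration; they come from no study cited here.

```python
# Illustrative only: toy comparison of record counts against extreme-event
# years, of the sort used in correlative climate-history studies.
from math import sqrt

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Invented toy data: fangzhi-style record counts per year, and a flag of 1
# for years a hypothetical palaeoclimate reconstruction marks as extreme.
record_counts = [2, 3, 1, 0, 0, 1, 5, 7, 6, 2]
extreme_years = [0, 0, 1, 1, 1, 0, 0, 0, 0, 0]

r = pearson_r(record_counts, extreme_years)
print(round(r, 2))  # negative here: fewer records during the flagged years
```

A negative coefficient in such a toy series would mirror the pattern described above, in which documentation thins out during and immediately after the worst events; a real study would of course also test significance and control for survival bias in the sources.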

This study suggests that memory can be a powerful part of how we respond to environmental change. Even as memory helps develop an adaptive culture, it may cloud accurate judgement of a disaster's seriousness. As we enter an era of more frequent extreme weather events, it is important to ask whether our interpretations of disaster are accurate. Moreover, while long-term memory may have contributed to climate change resilience in our cultures, our current adaptations may not be enough to deal with the disasters of the future. We might also examine how the gradual rise in disaster severity now underway may shape both our cultural adaptations and the distortions of memory. As the consequences of anthropogenic climate change become more pronounced, it is crucial that we look to the past for new ideas about the relationship between culture and environment, including the complex effects of memory.

[1] See Toby Pillatt, “Experiencing Climate: Finding Weather in Eighteenth Century Cumbria,” Journal of Archaeological Method and Theory 19:4 (December 2012): 564–81; A. Hall and G. Endfield, “Snow Scenes: Exploring the Role of Memory and Place in Commemorating Extreme Winters,” Weather, Climate, and Society 8 (2016): 5–19; or Dagomar Degroot, The Frigid Golden Age: Climate Change, the Little Ice Age, and the Dutch Republic, 1560–1720 (New York: Cambridge University Press, 2018), 262.

[2] Karen Middleton, "Renarrating a Biological Invasion: Historical Memory, Local Communities and Ecologists," Environment and History 18:1 (2012): 61-95.

[3] Pillatt, “Experiencing Climate.”

[4] Adam Sundberg, “Claiming the Past: History, Memory, and Innovation Following the Christmas Flood of 1717,” Environmental History 20:2 (April 2015): 238–61. 

[5] Degroot, The Frigid Golden Age, 262.

[6] Christian Rohr, "Floods of the Upper Danube River and Its Tributaries and Their Impact on Urban Economies (c. 1350-1600): The Examples of the Towns of Krems/Stein and Wels (Austria)," Environment and History 19:2 (2013): 148. 

[7] Dagomar Degroot, “Climate Change and Society from the Fifteenth Through the Eighteenth Centuries,” WIREs Climate Change Advanced Review, 2018, doi:10.1002/wcc.518, 1.

[8] Anning Cui, Chunmei Ma, Lin Zhao, Lingyu Tang, and Yulian Jia, “Pollen Records of the Little Ice Age Humidity Flip in the Middle Yangtze River Catchment,” Quaternary Science Reviews 193 (2018): 43–53, doi:10.1016/j.quascirev.2018.06.015.

[9] Cui et al., “Pollen Records.”

[10] J. Zheng, W. C. Wang, Q. Ge, Z. Man, and P. Zhang, “Precipitation Variability and Extreme Events in Eastern China during the Past 1500 Years,” Terrestrial, Atmospheric and Oceanic Sciences 17 (2006): 580.

[11] Zheng et al., “Precipitation Variability and Extreme Events,” 588.

[12] This understanding of the broader field is mostly informed by Dr. Dagomar Degroot. For a good example of this sort of statistical correlative work, see David D. Zhang, Harry F. Lee, Cong Wang, Baosheng Li, Qing Pei, Jane Zhang, and Yulun An, “The causality analysis of climate change and large-scale human crisis,” Proceedings of the National Academy of Sciences (2011): 201104268.

Works Cited:

Bi, Shuoben, Shengjie Bi, Changchun Chen, Athanase Nkunzimana, Yanping Li, and Weiting Wu. “Spatial Characteristics Analysis of Drought Disasters in North China during the Ming and Qing Dynasties.” Natural Hazards & Earth System Sciences Discussions (2016): 1–13.

Brook, Timothy. The Troubled Empire: China in the Yuan and Ming Dynasties. Cambridge, MA: Harvard University Press, 2010.

Cui, Anning, Chunmei Ma, Lin Zhao, Lingyu Tang, and Yulian Jia. “Pollen Records of the Little Ice Age Humidity Flip in the Middle Yangtze River Catchment.” Quaternary Science Reviews 193 (2018): 43–53.

Degroot, Dagomar. “Climate Change and Society from the Fifteenth Through the Eighteenth Centuries.” WIREs Climate Change Advanced Review (2018). doi:10.1002/wcc.518.

Degroot, Dagomar. The Frigid Golden Age: Climate Change, the Little Ice Age, and the Dutch Republic, 1560–1720. New York: Cambridge University Press, 2018.

Fang, XiuQi, LingBo Xiao, and ZhuDeng Wei. “Social Impacts of the Climatic Shift Around the Turn of the 19th Century on the North China Plain.” Science China Earth Sciences 56:6 (2013): 1044–58.

Forgas, Joseph P., Liz Goldenberg, and Christian Unkelbach. “Can Bad Weather Improve Your Memory? An Unobtrusive Field Study of Natural Mood Effects on Real-Life Memory.” Journal of Experimental Social Psychology 45 (2009).

Guy, R. Kent. Qing Governors and Their Provinces: The Evolution of Territorial Administration in China, 1644–1796. Seattle: University of Washington Press, 2010.

Hall, A., and G. Endfield. “Snow Scenes: Exploring the Role of Memory and Place in Commemorating Extreme Winters.” Weather, Climate, and Society 8 (2016): 5–19.

Hao, Zhixin, Yingzhuo Yu, Quansheng Ge, and Jingyun Zheng. “Reconstruction of High-Resolution Climate Data over China from Rainfall and Snowfall Records in the Qing Dynasty.” WIREs Climate Change 9:3 (2018): e517.

Koselleck, Reinhart. Futures Past: On the Semantics of Historical Time. Cambridge, MA: MIT Press, 1985.

Kwiatkowski, Teresa, and Alan Holland. “Dark Is the World to Thee: A Historical Perspective on Environmental Forewarnings.” Environment and History 16:4 (2010): 455–82.

Li, S., F. He, and X. Zhang. “A Spatially Explicit Reconstruction of Cropland Cover in China from 1661 to 1996.” Regional Environmental Change 16:2 (2016): 417–28.

Mathews, Andrew Salvador. “Suppressing Fire and Memory: Environmental Degradation and Political Restoration in the Sierra Juárez of Oaxaca, 1887–2001.” Environmental History 8:1 (2003): 77–108.

Middleton, Karen. “Renarrating a Biological Invasion: Historical Memory, Local Communities and Ecologists.” Environment and History 18:1 (2012): 61–95.

Pillatt, Toby. “Experiencing Climate: Finding Weather in Eighteenth Century Cumbria.” Journal of Archaeological Method and Theory 19:4 (December 2012): 564–81.

Rohr, Christian. “Floods of the Upper Danube River and Its Tributaries and Their Impact on Urban Economies (c. 1350–1600): The Examples of the Towns of Krems/Stein and Wels (Austria).” Environment and History 19:2 (2013): 133–48.

Rohr, Christian. “Man and Natural Disaster in the Late Middle Ages: The Earthquake in Carinthia and Northern Italy on 25 January 1348 and Its Perception.” Environment and History 9:2 (2003): 127–49.

Sundberg, Adam. “Claiming the Past: History, Memory, and Innovation Following the Christmas Flood of 1717.” Environmental History 20:2 (April 2015): 238–61.

Wang, P. K., et al. “Construction of the REACHES Climate Database Based on Historical Documents of China.” Scientific Data 5 (2018): 180288.

Zheng, J., W. C. Wang, Q. Ge, Z. Man, and P. Zhang. “Precipitation Variability and Extreme Events in Eastern China during the Past 1500 Years.” Terrestrial, Atmospheric and Oceanic Sciences 17 (2006): 579–92.

Moving Targets: Bird Migration, Climate Change, and History

7/1/2019

 
Prof. Anya Zilberstein, Concordia University.
"Swarm of birds" by Per Gosche, licensed under CC BY 2.0.

On May 11, birders in 130 countries took part in World Migratory Bird Day (WMBD). This citizen science and conservation project enrolls professional and amateur ornithologists twice a year—in May and October—to record sightings of hundreds of species during their spring and fall passages through continental and hemispheric regions, or so-called flyways, across Africa, Europe, Asia, the Americas, and Australasia.
 
WMBD was first organized in 2006 only in part as a way to recruit people to collect more data points. Its purpose is also to convey the urgency of the climate crisis for migratory animals by publicizing how profoundly avian populations, the geography of their flyways, and the timing of their movements have been affected by a host of modern ecological pressures resulting from human activity, including climate change. (Every year WMBD focuses on a different threat, which, for the second annual event in 2007, was “Migratory Birds in a Changing Climate.”) [1]
 
What’s sometimes lost in such initiatives, however, is their considerable history. As I’m discovering in one of my current research projects called “Flight Paths for Birds and other Migrants,” which focuses on theories of migration developed in the 17th and 18th centuries, many of the questions that inform activities like WMBD—even questions about anthropogenic forces—emerged much, much earlier than is usually acknowledged. Participants in WMBD continue a very long line of bird observers who have generated crowd-sourced data to learn more about the behavior of migratory species. Collectively, this constitutes a global, however uneven, record of avian, insect, and other animal movements amassed over centuries.
 
The ancient reliance on birds for a variety of cultural, spiritual, and utilitarian purposes all over the world informed indigenous knowledge about where and when some itinerant species would normally arrive in a particular place. The earliest avowedly scientific attempts to grasp the geographical and temporal scale of these cyclical movements—including the possible effect of weather and climate on them—date to at least the late 17th century, when individuals increasingly began to keep records of arrival and departure dates. Most historians of ornithology as an empirical science focus on the Victorian period, after the development of evolutionary biology, but pre-Darwinian ideas about avian migration recognized, albeit implicitly and without statistical precision, that some 60% of bird species are migratory (and that many or most other birds may have been migratory millennia ago).
 
Early naturalists, from Cotton Mather to Gilbert White and John James Audubon, were fascinated by what they called ‘birds of passage,’ especially the capacity to dwell in a range of aquatic, aerial, and terrestrial environments and to regularly relocate between different hemispheres in groups. And their attention to these peculiar traits also spurred them to speculate about migratory birds’ susceptibility to change, including changes in their seasonal habitats caused by human societies, such as colonization, the expansion of farmland, and the growth of towns.
 
Their early documents could provide both temporal depth and unique insights into the natural history of climate and migration, particularly for conservationists interested in adaptations to environmental changes in the 20th and 21st centuries. This is especially so because many aspects of migratory patterns—their origins, geography, seasonality, and changes over time—have remained mysterious to scientists, even now.
The Arctic Tern, a migratory species from John James Audubon, Birds of America.

When the biologist Frederick Lincoln coined the term flyway in 1935, he meant it to describe migratory birds’ “ancestral routes”—what were then thought to be the typical extent of the vast ranges and diverse environments they inhabited during different times of the year. Specifying the geography of these migratory corridors would, in turn, assist in ambitious banding projects and the creation of wildlife refuges. [2] Since then, increasingly sophisticated surveillance technologies have been developed to understand how much the boundaries of flyways and periodicity of biannual movements may have actually shifted over time, both in the deep evolutionary past as well as in the present.
 
Yet if it’s clear that rising temperatures, coastal erosion, light and air pollution, urbanization, deforestation, and predation affect migratory bird populations, like much else about our knowledge of their biology, it remains unclear exactly how or how permanently ongoing climate change might shift particular species’ range boundaries, breeding or feeding grounds, and arrival or departure dates.
 
To take just one of numerous examples, in 2010 ecologists in the Netherlands, relying on 20 years of trend data gathered by “many thousands of volunteer birdwatchers across Europe,” found strong evidence that the continuation of earlier spring thaws and a longer season of heat waves on the continent would likely make climate change an underlying cause of population decline among long-distance migrants like pied flycatchers. Yet five years later, another study with similar parameters and common co-authors concluded that such predictions were less reliable than they had hoped. “Contrary to expectations,” the patterns observed between 2004 and 2014 were inconsistent with those from 1984–2004, leading to equivocal results. [3] It is not only that some findings about the relationship between climate change and avian migration appear contradictory as research proceeds; it is also that, especially as the current climate crisis grows, the dynamic differs significantly by species, region, and time frame.
 
The history of ornithology cannot resolve these discrepancies outright: the science of animal migration—like all scientific knowledge—will necessarily remain susceptible to revision. But studying historical sources could encourage conservationists to recognize that anthropogenic climate change is only the most recent, if perhaps the most drastic, example of how the lives of non-human animals have long been observed and shaped, for better or worse, by people.

Anya Zilberstein is associate professor of history at Concordia University. She is the author of A Temperate Empire: Making Climate Change in Early America (Oxford University Press, 2016), and she is currently working on a new project, "Fodder for Empire: Feeding People Like Other Animals," which examines the history of experiments in producing and distributing non-perishable, high-calorie, low-cost food for animals and people in the British Empire.

Special thanks to Jesse Coady for research assistance.
[1] <http://www.worldmigratorybirdday.org/about/wmbd-themes-since-2006>

[2] Frederick C. Lincoln, The Waterfowl Flyways of North America (Washington, DC: US Department of Agriculture, 1935), 3.

[3] Christiaan Both, Sandra Bouwhuis, C. M. Lessells, and Marcel E. Visser, “Climate change and population declines in a long-distance migratory bird,” Nature 441, no. 7089 (2006): 81; Christiaan Both, Chris AM Van Turnhout, Rob G. Bijlsma, Henk Siepel, Arco J. Van Strien, and Ruud PB Foppen, “Avian population consequences of climate change are most severe for long-distance migrants in seasonal habitats,” Proceedings of the Royal Society B: Biological Sciences 277, no. 1685 (2010): 1259-1266; Marcel E. Visser, Phillip Gienapp, Arild Husby, Michael Morrisey, Iván de la Hera, Francisco Pulido, Christiaan Both, “Effects of Spring Temperatures on the Strength of Selection on Timing of Reproduction in a Long-Distance Migratory Bird,” PLOS Biology 13, no. 4 (2015), e1002120; https://doi.org/10.1371/journal.pbio.1002120.

Does the United States Need a Climate Refugee Policy?

4/25/2019

 
Prof. María Cristina García, Cornell University.
Volunteers from Proactiva Open Arms aid refugees arriving from Turkey to Skala Sykamias, Greece.

People displaced by extreme weather events and slower-developing environmental disasters are often called “climate refugees,” a term popularized by journalists and humanitarian advocates over the past decade. The term “refugee,” however, has a very precise meaning in US and international law and that definition limits those who can be admitted as refugees and asylees. Calling someone a “refugee” does not mean that they will be legally recognized as such and offered humanitarian protection.
​
The principal instruments of international refugee law are the 1951 United Nations Convention Relating to the Status of Refugees and its 1967 Protocol, which defined a refugee as:

"any person who owing to well-founded fear of being persecuted for reasons of race, religion, nationality, membership of a particular social group or political opinion, is outside the country of his nationality and is unable, or owing to such fear, is unwilling to avail himself of the protection of that country; or who, not having a nationality and being outside the country of his former habitual residence as a result of such events, is unable or, owing to such fear, is unwilling to return to it." [i]

This definition, on which current U.S. law is based, does not include any reference to the “environment,” “climate,” or “natural disaster” that might allow consideration of those displaced by extreme weather events and/or climate change.

In some regions of the world, other legal instruments have supplemented the U.N. Refugee Convention and Protocol, and these instruments offer more expansive definitions of refugee status that might offer protections to the environmentally displaced. The Organization of African Unity’s “Convention Governing the Specific Aspects of Refugee Problems in Africa (1969)” includes not only external aggression, occupation, and foreign domination as the motivating factors for seeking refuge, but also “events seriously disturbing the public order.”[ii]  In the Americas, the non-binding Cartagena Declaration on Refugees (1984), crafted in response to the wars in Central America, set regional standards for providing assistance not just for those displaced by civil and political unrest but also those fleeing “circumstances which have seriously disturbed the public order.”[iii] The Organization of American States has also passed a series of resolutions offering member states additional guidance on how to respond to refugees, asylum seekers, stateless persons, and others in need of temporary or permanent protection. In Europe, the European Union Council Directive (2004) has identified the minimum standards for the qualification and status of refugees or those who might need “subsidiary protection.”[iv]

Together, these regional and international conventions, protocols, and guidelines acknowledge that people are displaced for a wide range of reasons and that they deserve respect and compassion and, at the bare minimum, temporary accommodation. Climate change has been absent in these discussions perhaps because environmental disruptions such as hurricanes, earthquakes, and drought were long assumed to be part of the “natural” order of life, unlike war and civil unrest, which are considered extraordinary, man-made, and thus avoidable. The expanding awareness that societies are accelerating climate change to life-threatening levels requires that countries reevaluate the populations they prioritize for assistance, and adjust their immigration, refugee, and asylum policies accordingly.
Kutupalong Refugee Camp, Bangladesh. Photo credit: John Owens.

Under current U.S. immigration law, those displaced by sudden-onset disasters and environmental degradation do not qualify for refugee status or asylum unless they are able to demonstrate that they have also been persecuted on account of race, religion, nationality, membership in a particular social group, or political opinion. This wasn’t always the case: indeed, U.S. refugee policy once recognized that those displaced by “natural calamity” were vulnerable and deserved protection. The 1953 Refugee Relief Act, for example, defined a refugee as “any person in a country or area which is neither Communist nor Communist-dominated, who because of persecution, fear of persecution, natural calamity or military operations is out of his usual place of abode and unable to return thereto… and who is in urgent need of assistance for the essentials of life or for transportation.”[v] The 1965 Immigration Act (Hart-Celler Act) established a visa category for refugees that included persons “uprooted by catastrophic natural calamity as defined by the President who are unable to return to their usual place of abode.” [vi]

Between 1965 and 1980, no refugees were admitted to the United States under the “catastrophic natural calamity” provision, but that did not stop legislators from opposing its inclusion in the refugee definition. Some legislators argued that it was inappropriate to offer permanent resettlement to people who were only temporarily displaced, while others objected on the grounds that it undermined the economic recovery of hard-hit countries by draining them of their most highly skilled citizens. The 1980 Refugee Act subsequently eliminated any reference to natural calamity or disaster, in line with the United Nations definition of refugee status.

In recent decades, scholars, advocates, and policymakers have called for a reevaluation of the refugee definition in order to grant temporary or permanent protection to a wider range of vulnerable populations, including those displaced by environmental conditions. At present, U.S. immigration law offers very few avenues of entry for so-called “climate refugees”: options are limited to Temporary Protected Status (TPS), Deferred Enforced Departure (DED), and Humanitarian Parole.

The 1990 Immigration Act provided the statutory provision for TPS: according to the law, those unable to return to their countries of origin because of an ongoing armed conflict, environmental disaster, or “extraordinary and temporary conditions” can, under some conditions, remain and work in the United States until the Attorney General (after 2003, the Secretary of Homeland Security) determines that it is safe to return home. [vii] There is one catch: in order to qualify for TPS, one must already be physically present in the United States—as a tourist, student, business executive, contract worker, or even as an unauthorized worker. TPS is granted on a 6-, 12-, or 18-month basis, renewed by the Department of Homeland Security (DHS) if the qualifying conditions persist. TPS recipients do not qualify for state or federal welfare assistance, but they are allowed to live and work in the United States until federal authorities determine that it is safe to return. In the meantime, they can send much-needed remittances to their families and communities back home to assist in their recovery.

TPS is one way, albeit imperfect, that the United States exercises its humanitarian obligations to those displaced by environmental disasters and climate change. It is based on the understanding that countries in crisis require time to recover; if nationals living abroad return in large numbers in a short period of time, they can have a destabilizing effect that disrupts that recovery. Countries affected by disaster must meet certain conditions in order to qualify: first, the Secretary of Homeland Security must determine that there has been a substantial disruption in living conditions as a result of a natural or environmental disaster, making it impossible for a government to accommodate the return of its nationals; and second, the country affected by environmental disaster must officially petition for its nationals to receive TPS status (a requirement that is not imposed on countries affected by political violence). However, environmental disaster does not automatically guarantee that a country's nationals will receive temporary protection. The U.S. federal government has total discretion, and the decision-making process is not immune to domestic politics.

Deferred Enforced Departure (DED) is another status available to those unable to return to hard-hit areas: DED offers a stay of removal as well as employment authorization, but the status is most often used when TPS has expired. In such circumstances, the president has the discretionary (but rarely used) authority to allow nationals to remain in the United States in the interest of humanitarian or foreign policy, or until Congress can pass a law that offers a permanent accommodation. [viii]

Humanitarian “parole” is yet another recourse for the environmentally displaced. The 1952 McCarran Walter Act granted the attorney general discretionary authority to grant temporary entry to individuals, on a case-by-case basis, if deemed in the national interest. Since 2002, humanitarian parole requests have been handled by the United States Citizenship and Immigration Services (USCIS), and are granted much more sparingly than during the Cold War. USCIS generally grants parole only for one year (renewable on a case-by-case basis). [ix] Parole does not place an individual on a path to permanent residency or citizenship, nor does it make applicants eligible for welfare benefits; only occasionally are “parolees” granted the right to work, allowing them to earn a livelihood and send remittances to communities hard hit by political and environmental disruptions.

TPS, DED, and humanitarian parole are only temporary accommodations for select and small groups of people. They are an inadequate response to the humanitarian crisis that will develop in the decades to come. Scientists forecast that in an era of unmitigated and accelerated climate change, sudden-onset disasters will become fiercer, exacerbating poverty, inequality, and weak governance, and forcing many more people to seek safe haven elsewhere—perhaps in the hundreds of millions over the next half-century.  

In the current political climate, it is hard to imagine that wealthier nations like the United States will open their doors to even a tiny fraction of these displaced peoples; however, the more economically developed countries must do more to honor their international commitments to provide refuge, especially to those in developing areas who are suffering from environmental conditions they did not create. In the decades to come, as legislators try to mitigate the effects of climate change and help their populations become resilient, they must also share the burden of the human displacement caused by the failure to act quickly enough.

María Cristina García, an Andrew Carnegie Fellow, is the Howard A. Newman Professor of American Studies in the Department of History at Cornell University.  She is the author of several books on immigration, refugee, and asylum policy.  She is currently completing a book on the environmental roots of refugee migrations in the Americas.

[i] United Nations, “Convention and Protocol Relating to the Status of Refugees,” 14, http://www.unhcr.org/en-us/3b66c2aa10. The 1951 Convention limited the focus of assistance to European refugees in the aftermath of the Second World War.   The 1967 Protocol removed these temporal and geographic restrictions. The United States did not sign the 1951 Convention but it did sign the 1967 Protocol.
[ii] The OAU convention stated that the term refugee should also apply to “every person who, owing to external aggression, occupation, foreign domination or events seriously disturbing the public order in either part or the whole of his country of origin or nationality, is compelled to leave his place of habitual residence in order to seek refuge in another place outside his country of origin or nationality.” Organization of African Unity, “Convention Governing the Specific Aspects of Refugee Problems in Africa,” http://www.unhcr.org/en-us/about-us/background/45dc1a682/oau-convention-governing-specific-aspects-refugee-problems-africa-adopted.html, accessed September 15, 2017.
[iii] The Cartagena Declaration stated that “in addition to containing elements of the 1951 Convention…[the definition] includes among refugees, persons who have fled their country because their lives, safety or freedom have been threatened by generalized violence, foreign aggression, internal conflicts, massive violations of human rights or other circumstances which have seriously disturbed the public order.” “Cartagena Declaration on Refugees,” http://www.unhcr.org/en-us/about-us/background/45dc19084/cartagena-declaration-refugees-adopted-colloquium-international-protection.html
[iv] European Union, “Council Directive 2004/83/EC,” April 29, 2004, https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=celex%3A32004L0083 accessed March 20, 2018.
[v] Refugee Relief Act of 1953 (P.L. 83-203), https://www.law.cornell.edu/topn/refugee_relief_act_of_1953.
[vi] Immigration and Nationality Act of 1965 (P.L. 89-236), https://www.govinfo.gov/content/pkg/STATUTE-79/pdf/STATUTE-79-Pg911.pdf
[vii] Immigration Act of 1990 (P.L.101-649), https://www.congress.gov/bill/101st-congress/senate-bill/358
[viii] USCIS, “Deferred Enforced Departure,” https://www.uscis.gov/humanitarian/temporary-protected-status/deferred-enforced-departure.
[ix] The humanitarian parole authority was first recognized in the 1952 Immigration Act (more popularly known as the McCarran Walter Act). See http://library.uwb.edu/Static/USimmigration/1952_immigration_and_nationality_act.html. See also “§ Sec. 212.5 Parole of aliens into the United States,” https://www.uscis.gov/ilink/docView/SLB/HTML/SLB/0-0-0-1/0-0-0-11261/0-0-0-15905/0-0-0-16404.html

More: Energy History and Energy Futures

4/10/2019

 
Prof. Sean Kheraj, York University. 

This is the fifth post in a collaborative series titled “Environmental Historians Debate: Can Nuclear Power Solve Climate Change?” hosted by the Network in Canadian History & Environment, the Climate History Network, and ActiveHistory.ca.
If nuclear power is to be used as a stop-gap or transitional technology for the de-carbonization of industrial economies, what comes next? Energy history could offer new ways of imagining different energy futures. Current scholarship, unfortunately, mostly offers linear narratives of growth toward the development of high-energy economies, leaving little room to imagine low-energy futures. As a result, energy historians have rarely presented plausible ideas for low-energy futures and instead dwell on apocalyptic visions of poverty and the loss of precious, ill-defined “standards of living.”

The fossil fuel-based energy systems that wealthy, industrialized nation states developed in the nineteenth and twentieth centuries now threaten the habitability of the Earth for all people. Global warming lies at the heart of the debate over future energy transitions. While Nancy Langston makes a strong case for thinking about the use of nuclear power as a tool for addressing the immediate emergency of carbon pollution of the atmosphere, her arguments left me wondering what energy futures will look like after de-carbonization. Will industrialized economies continue with unconstrained growth in energy consumption, expand reliance on nuclear power, and press forward with new technological innovations to consume even more energy (Thorium reactors? Fusion reactors? Dilithium crystals?)? Or will profligate energy consumers finally lift their heads up from an empty trough and start to think about ways of living with less energy? Unfortunately, energy history has not been helpful in imagining low-energy possibilities.

For the past couple of years, I’ve been getting familiar with the field of energy history and, for the most part, it has been the story of more. [1] Energy history is closely related to environmental history, but it also incorporates economic history, the history of capitalism, social history, cultural history, and gender history (and probably more than that). My particular interest is in the history of hydrocarbons, but I’ve tried to take a wide view of the field and consider scholarship that examines energy history in deeper historical contexts.

Several scholars have written books that consider the history of human energy use in deep time. Rolf Peter Sieferle’s The Subterranean Forest: Energy Systems and the Industrial Revolution (first published in German in 1982) opens its long view of energy history with Paleolithic societies. Alfred Crosby’s Children of the Sun: A History of Humanity’s Unappeasable Appetite for Energy (2006) begins its survey of human energy history with the advent of anthropogenic fire and its use in cooking. Vaclav Smil goes back to so-called “pre-history” at the start of Energy and Civilization: A History (2017) to consider the origins of crop cultivation.

In each of these surveys, energy historians track the general trend of growing energy use. While they show some dips in consumption and variation across regions, the story they tell is precisely what Crosby’s subtitle promises: a tale of humanity’s unappeasable appetite for greater and greater quantities of energy.

The narrative of energy history in the scholarship is remarkably linear, verging on Malthusian. According to Smil: 

“Civilization’s advances can be seen as a quest for higher energy use required to produce increased food harvests, to mobilize a greater output and variety of materials, to produce more, and more diverse, goods, to enable higher mobility, and to create access to a virtually unlimited amount of information. These accomplishments have resulted in larger populations organized with greater social complexity into nation-states and supranational collectives, and enjoying a higher quality of life.” [2]

Indeed, from a statistical point of view, it’s difficult not to reach the conclusion that humans have proceeded inexorably from one technological innovation to another, finding more ways of wrenching power from the Sun and Earth. The only interruptions along humanity’s path to high-energy civilization were war, famine, economic crisis, and environmental collapse. 

Canada’s relatively short energy history appears to tell a similar story. As Richard W. Unger wrote in The Otter~la loutre recently, “Canadians are among the greatest consumers of energy per person in the world.” And the history of energy consumption in Canada since Confederation shows steady growth and sudden acceleration with the advent of mass hydrocarbon consumption between the 1950s and 1970s. Steve Penfold’s analysis of Canadian liquid petroleum use focuses on this period of extraordinary, nearly uninterrupted growth in energy consumption. Only in 1979 did Canadian petroleum consumption momentarily dip in response to an economic recession. “What could have been an energy reckoning…” Penfold writes, “ultimately confirmed the long history of rising demand.” [3]
​
I’ve seen much of what Penfold finds in my own research on the history of oil pipeline development in Canada. Take, for instance, the Interprovincial pipeline system, Canada’s largest oil delivery system. For much of Canada’s “Great Acceleration,” the history of more couldn’t be clearer.
This view of energy history as the history of more informs some of the conclusions (and predictions) of energy historians. Crosby is, perhaps, the most optimistic about the potential of technological innovation to resolve what he describes as humanity’s unsustainable use of fossil fuels. In Crosby’s view, “the nuclear reactor waits at our elbow like a superb butler.” [4] For the most part, he is dismissive of energy conservation or radical reductions in energy consumption as alternatives to modern energy systems, which he admits are “new, abnormal, and unsustainable.” [5] Instead, he foresees yet another technological revolution as the pathway forward, carrying on with humanity’s seemingly endless growth in energy use.
​
Energy historians, much like historians of the Anthropocene, have a habit of generalizing humanity in their analysis of environmental change. As I wrote last year in The Otter~la loutre, “To understand the history of Canada’s Anthropocene, we must be able to explain who exactly constitutes the ‘anthropos.’” Energy historians might consider doing the same. The history of human energy use appears to be a story of more only when that use is considered in an undifferentiated manner. The pace of energy consumption in Canada, for instance, might look different when considering the rich and the poor, settlers and Indigenous people, rural Canadians and urban Canadians. Around the world, energy histories tell different stories beyond the history of more, including histories of low-energy societies and histories of energy decline. Yet most global energy histories focus on industrialized societies and say little about developing nations and the persistence of low-energy, subsistence economies.

If Smil is correct that “higher energy use by itself does not guarantee anything except greater environmental burdens,” then future decisions about energy use should probably consider lower-energy options. [6] Transitioning away from burning fossil fuels by using nuclear power may alleviate the immediate existential crisis of global warming, but confronting the environmental implications of high-energy societies may be the bigger challenge. To address that challenge, we may need to look back at histories of less.

Sean Kheraj is the director of the Network in Canadian History and Environment. He’s an associate professor in the Department of History at York University. His research and teaching focus on environmental and Canadian history. He is also the host and producer of Nature’s Past, NiCHE’s audio podcast series, and he blogs at http://seankheraj.com.

[1] I’m borrowing from Steve Penfold’s pointed summary of the history of gasoline consumption in Canada: “Indeed, at one level of approximation, you could reduce the entire history of Canadian gasoline to a single keyword: more.” See Steve Penfold, “Petroleum Liquids” in Powering Up Canada: A History of Power, Fuel, and Energy from 1600 ed. R. W. Sandwell (Montreal: McGill-Queen’s University Press, 2016), 277.

[2] Vaclav Smil, Energy and Civilization: A History (Cambridge: MIT Press, 2017), 385.

[3] Penfold, “Petroleum Liquids,” 278.

[4] Alfred W. Crosby, Children of the Sun: A History of Humanity’s Unappeasable Appetite for Energy (New York: W.W. Norton, 2006), 126.

[5] Ibid., 164.
​
[6] Smil, Energy and Civilization, 439.

The Nuclear Renaissance in a World of Nuclear Apartheid

3/27/2019

 
Prof. ​Toshihiro Higuchi, Georgetown University. 

This is the fourth post in a collaborative series titled “Environmental Historians Debate: Can Nuclear Power Solve Climate Change?” hosted by the Network in Canadian History & Environment, the Climate History Network, and ActiveHistory.ca.
Nuclear power is back, riding on the growing fears of catastrophic climate change that lurks around the corner. The looming climate crisis has rekindled heated debate over the advantages and disadvantages of nuclear power. However, advocates and opponents alike tend to overlook or downplay a unique risk that sets atomic energy apart from all other energy sources: proliferation of nuclear weapons.
 
Despite the lasting tragedy of the 2011 Fukushima disaster, the elusive goal of nuclear safety, and the stalled progress in radioactive waste disposal, nuclear power has once again captivated the world as a low-carbon energy solution. According to the latest IPCC report, released in October 2018, most of the 89 available pathways to limiting warming to 1.5°C above pre-industrial levels see a larger role for nuclear power in the future. The median value of global nuclear electricity generation across these scenarios increases from 10.84 to 22.64 exajoules by 2050.
 
The global nuclear industry, after many setbacks in selling its products, has jumped on the renewed interest of the climate policy community in atomic energy. The World Nuclear Association has recently launched an initiative called the Harmony Programme, which has established an ambitious goal of 25% of global electricity supplied by nuclear in 2050. Even some critics agree that nuclear power should be part of a future clean energy mix. The Union of Concerned Scientists, a U.S.-based science advocacy group and proponent of stronger nuclear regulations, recently published an op-ed urging the United States to “[k]eep safely operating nuclear plants running until they can be replaced by other low-carbon technologies.”
 
But the justified focus on energy production vis-à-vis climate change obscures the debate that until recently had defined the nuclear issue: weapons proliferation. It is often said that a global nuclear regulatory regime, grounded in the 1968 Nuclear Non-Proliferation Treaty (NPT) and the International Atomic Energy Agency (IAEA)’s safeguards system, has proven successful as a check against the diversion of fissile materials from peaceful to military uses. There is indeed a good reason for this optimism. Since 1968, only three countries (India, Pakistan, North Korea) have publicly declared possession of nuclear weapons – a far cry from the “15 or 20 or 25 nations” that President Kennedy famously predicted would go nuclear by the 1970s.
 
As political scientist Benoit Pelopidas has pointed out, and contrary to the impressions created by the biological metaphor, the “proliferation” of nuclear weapons is neither inevitable nor irreversible. South Africa, a non-NPT country which had secretly developed nuclear weapons by the 1980s, voluntarily dismantled its arsenal following the end of Apartheid. Belarus, Kazakhstan, and Ukraine, which inherited nuclear warheads following the collapse of the Soviet Union in 1991, also agreed to transfer them to Russia. Moreover, despite all the talk about the threat of nuclear terrorism, experts note a multitude of obstacles, both technical and political, for non-state actors to steal or assemble workable atomic devices.[1] Although many countries and terrorists are known to have harbored nuclear ambitions at one point or another in the past – and some undoubtedly still do today – we should not exaggerate the possibility that new countries and violent non-state actors will acquire nuclear weapons, or the threat that this might pose to international security.
 
The real dangers of nuclear proliferation, however, lie elsewhere. The NPT is supposed to be a bargain between the nuclear haves and have-nots. The non-nuclear countries agreed not to acquire or manufacture nuclear weapons in exchange for the pledge made by all parties, including the five nuclear-weapon states designated by the NPT (United States, Soviet Union/Russia, United Kingdom, France, and China), to “pursue negotiations in good faith on effective measures relating to cessation of the nuclear arms race at an early date and to nuclear disarmament.” The nuclear-armed countries, however, have consistently failed to keep their end of the deal.[2]
 
Meanwhile, the United States has repeatedly used or threatened to use military force to disarm hostile countries suspected of clandestine nuclear weapons programs. Iraq is the most famous example of this, but as discussed below, U.S. officials also seriously considered preemptive attacks against nuclear facilities in China and North Korea. The United States is not alone in its penchant for unilateral military action. Israel, a U.S. ally widely believed to possess nuclear weapons, has also carried out surprise airstrikes, destroying an Iraqi nuclear reactor in 1981 and a suspected Syrian installation in 2007.[3]
 
The alleged “success” of repeatedly using and threatening force to stem the tide of nuclear proliferation, however, comes at a high cost. Such action may not only deepen the insecurity of the threatened nation and make it all the more determined to develop its nuclear capabilities as a deterrent, but also entail a serious risk of unintended escalation into a large-scale conflict. Anyone who tries to weigh the value of nuclear power in coping with the climate crisis thus must take stock of the history of militarized counter-proliferation policy that reflects and reinforces what historian Shane J. Maddock has called “nuclear apartheid,” a hierarchy of nations grounded in power inequality between the nuclear haves and have-nots.[4]
Anti-aircraft guns guard Natanz Nuclear Facility, Iran. Photo credit: Hamed Saber.

In October 1964, the People’s Republic of China successfully tested an atomic bomb, becoming the fifth country to demonstrate its nuclear weapons capability. The United States eventually acquiesced in China’s nuclear status by the time it signed the NPT in 1968, which formally defined a nuclear-weapon state as a country that had manufactured and detonated a nuclear device prior to January 1, 1967.
 
Washington’s decision to tolerate a nuclear China, however, did not come without resistance. In fact, as historian Francis J. Gavin has noted, there is a striking parallel between the U.S. perception of China during the 1960s and that of a “rogue state” today: China had already clashed with the United States during the Korean War, twice shelled the outlying islands of Taiwan, and invaded India over a disputed border; it strongly disputed the Soviet Union’s leadership in the Communist world and aggressively supported revolutionary forces around the world; and it consolidated one-party rule and embarked on a series of disastrous political, economic, and social campaigns, most notably the Great Leap Forward and the Cultural Revolution.[5] Operating from the Cold War mindset, and with little information shedding light on the complexity of China’s foreign and domestic policies, senior U.S. officials feared that China’s nuclear weapons program would pose a serious threat to the stability of East Asia and to the international effort to prevent the further spread of nuclear weapons around the world.[6]
 
It is important to note that not all U.S. officials held such a grim view of China’s nuclear ambition and its consequences. Some believed that a nuclear-armed China would act rather cautiously, and Presidents John F. Kennedy and Lyndon B. Johnson both tried to induce China by diplomatic means to abandon its nuclear program. As historians William Burr and Jeffrey T. Richelson have demonstrated, however, the Kennedy and Johnson administrations also developed contingency plans to disarm China by force. In a memo written in April 1963, the Joint Chiefs of Staff discussed a variety of military options, ranging from covert operations to the use of a tactical nuclear weapon, to coerce China into signing a test ban treaty.[7]
 
While the military was skeptical about the effectiveness of unilateral action and also cautious about the risk of retaliation and escalation, Kennedy and some of his senior advisers remained keen on military and covert operations. For instance, the President showed his interest in enlisting the Republic of China in Taiwan as a proxy to launch a commando raid against Chinese nuclear installations.[8] William Foster, director of the Arms Control and Disarmament Agency, later recalled that Kennedy had been eager to consider the possibility of an airstrike in coordination with, or with tacit approval of, the Soviet Union.[9] The idea of an air raid resurfaced in September 1964, on the eve of the Chinese test. Although Johnson and his advisers ultimately decided against the proposal, all agreed that, “in case of military hostilities,” the United States should consider “the possibility of an appropriate military action against Chinese nuclear facilities.”[10]
 
Despite all the talk about the use of force, the U.S. government ultimately refrained from taking such drastic action. The Soviet Union refused to discuss the possibility of joint military intervention, and the political costs and military risks of an unprovoked attack were too high. The failure to stop China’s nuclear weapons program, Gavin has pointed out, precipitated a major shift in U.S. nuclear policy toward creating a global nonproliferation regime with the NPT as its keystone.[11] However, these “proliferation lessons from the 1960s,” as Gavin has called them, did not change the fundamental fact that the United States was willing to contemplate military action, to be carried out unilaterally if necessary, to prevent hostile countries from acquiring nuclear weapons. The NPT became a handy justification for such measures. This was abundantly clear when North Korea triggered another nuclear crisis thirty years later.
A North Korean ballistic missile at North Korea Victory Day, 2013. Photo credit: Stefan Krasowski.

In March 1993, North Korea startled the world by announcing its decision to withdraw from the NPT. At issue was the IAEA’s demand for special inspections at nuclear facilities in Yongbyon to account for the amount of plutonium generated in an earlier uninspected refueling operation. The tension briefly subsided when, after bilateral talks with the United States, Pyongyang suspended the process of pulling out of the NPT and agreed to allow inspections at a number of installations. In March 1994, however, North Korea suddenly reversed its attitude, blocking IAEA inspectors from conducting activities necessary to complete their mission. The United States responded by declaring its intention to ask the United Nations Security Council to impose economic sanctions against North Korea.  
 
As the confrontation between the United States and North Korea escalated, President Bill Clinton decided to take all necessary measures to coerce Pyongyang into full compliance with the IAEA safeguards. In his memoirs, Clinton wrote that “I was determined to prevent North Korea from developing a nuclear arsenal, even at the risk of war.”[12] To leave no room for misunderstanding about his resolve, Clinton let his senior advisers and military commanders openly discuss contingency plans for military action. On February 6, The New York Times broke news on updated U.S. defense plans for South Korea in the event of a North Korean attack, describing a newly added option for a counteroffensive to seize Pyongyang and overthrow the regime of Kim Il Sung.[13] Meanwhile, Secretary of Defense William Perry talked tough, telling the press that “we would not rule out a preemptive military strike.”[14]
 
The talk about the use of force against North Korea was not an idle threat. In his memoirs, Perry has described contingency planning for military action. In May 1994, when North Korea began to remove the spent fuel rods containing plutonium from its reactor, the Defense Secretary ordered John Shalikashvili (chairman of the Joint Chiefs of Staff) and Gary Luck (commander of the U.S. military forces in South Korea) to prepare a course of action for “a ‘surgical’ strike by cruise missiles on the reprocessing facility at Yongbyon.”[15] Three former U.S. officials, Joel S. Wit, Daniel B. Poneman, and Robert L. Gallucci, also confirmed that the strike plan was discussed at the highest level of the Clinton administration. On May 19, Perry, Shalikashvili, and Luck briefed Clinton and his aides on the proposal for an air raid against the Yongbyon facilities, asserting that it would “set the North Korean nuclear program back by years.” Perry, however, reportedly stressed the “downside risk,” namely that “this action would certainly spark a violent reaction, perhaps even a general war.”[16] Clinton recalled that a “sobering estimate of the staggering losses both sides would suffer if war broke out” gave him pause.[17] As Perry has noted, the military option was still “‘on the table’, but very far back on the table.”[18]
 
The self-restraint of the Clinton administration and its commitment to a diplomatic solution have earned praise from many scholars and pundits – in sharp contrast to George W. Bush’s aggressive unilateralism. But the fact remains that Clinton and his aides considered the threat of preventive military action as permissible, even essential, to pressure North Korea into refraining from any suspicious nuclear activities. And their willingness to go to the brink of actual conflict created a tense policy environment that left little room for quiet diplomacy and compromise while drastically raising the risk of accidents and miscalculations.
 
In this sense, the “peaceful” conclusion of the first North Korean nuclear crisis was a Pyrrhic victory, reinforcing the belief widely held by the U.S. policy community that the United States must be prepared to use its military force unilaterally to uphold the global non-proliferation regime. It is thus no coincidence that, even after the disastrous outcome of the Iraq War, fought in the name of nuclear nonproliferation, the U.S. government continues to play a dangerous game of brinkmanship with hostile powers suspected of pursuing the clandestine development of nuclear weapons.

What, then, does the history of U.S. counter-proliferation policy mean for the future use of nuclear power to combat climate change? An answer, I believe, lies in an accelerating shift in nuclear geography. The New Policies Scenario of the International Energy Agency’s World Energy Outlook 2018, a global energy trend forecast based on policies and targets announced by governments, shows that the demand for nuclear power in 2017-40 will decrease in advanced economies by 60 Mtoe (millions of tons of oil equivalent) while increasing in developing economies by 344 Mtoe. Of the approximately 30 countries currently considering, planning, or starting nuclear power programs, many are post-colonial and post-socialist countries located in regions – including Central Asia, Eastern Europe, the Middle East, and South and Southeast Asia – where the United States is competing with other major and regional powers for greater influence.
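For readers comparing the IEA’s Mtoe figures here with the exajoule values quoted earlier from the IPCC report, a rough conversion can be sketched using the standard factor 1 Mtoe ≈ 0.041868 EJ (the snippet only restates the IEA numbers in different units; the function name is illustrative):

```python
# Rough conversion between millions of tons of oil equivalent (Mtoe)
# and exajoules (EJ), using the standard factor 1 Mtoe = 0.041868 EJ.
MTOE_TO_EJ = 0.041868

def mtoe_to_ej(mtoe):
    """Convert Mtoe to exajoules."""
    return mtoe * MTOE_TO_EJ

# IEA New Policies Scenario changes in nuclear power demand, 2017-40:
advanced = mtoe_to_ej(-60)    # decline in advanced economies
developing = mtoe_to_ej(344)  # growth in developing economies

print(f"advanced: {advanced:.1f} EJ, developing: {developing:.1f} EJ")
# prints "advanced: -2.5 EJ, developing: 14.4 EJ"
```

On this rough arithmetic, the projected growth in developing economies (about 14 EJ) is of the same order as the IPCC scenarios’ median increase in nuclear electricity generation quoted above.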

​Added to this geopolitical layer is the nuclear supply game. While many Western conglomerates have recently decided to exit from nuclear exports due to swelling construction costs, Russian and Chinese state-owned companies have aggressively sold nuclear power plants to emerging countries, a move backed by their governments as part of their global strategies. Although Russia and China have generally cooperated with the United States in controlling nuclear exports, the recent U.S. withdrawal from the Iran nuclear deal has pitted Washington against Moscow and Beijing over the latter’s continued negotiations with Iran for nuclear cooperation. Given the growing tension between the United States on the one hand and Russia and China on the other, the expansion of civilian nuclear programs in key strategic regions is likely to be fraught with serious risks of an international crisis and even an armed conflict. 
 
The fundamental solution to the nuclear dilemma in the changing climate is simple: carry out the pledge made by all parties to the NPT, that is, to “pursue negotiations in good faith on effective measures relating to cessation of the nuclear arms race at an early date and to nuclear disarmament, and on a treaty on general and complete disarmament under strict and effective international control” (Article VI). A breakthrough toward this goal came in July 2017 when the United Nations General Assembly voted to adopt the first legally binding international agreement that prohibited nuclear weapons. All of the nuclear weapon states and most of their allies, however, refused to participate in the treaty negotiations.
 
Recently, Christopher Ashley Ford, assistant secretary of state for nonproliferation, has called the treaty a “well-intended mistake,” insisting that a “better way” was to work within the NPT framework while taking steps to improve “the actual geopolitical conditions that countries face in the world.” If the IPCC is correct in its claim that we have only a little more than a decade to stop potentially catastrophic climate change, it is unlikely that the “pragmatic, conditions-focused program” described by Ford will significantly reduce the risks of nuclear proliferation and militarized counter-proliferation in time. If so, then we must realize that the promotion of civilian nuclear power in a world of nuclear apartheid – a world in which the United States and its allies are not hesitant to use force to disarm and topple a hostile regime with nuclear ambitions – may have no less catastrophic consequences for human society than climate change.

Toshihiro Higuchi is an assistant professor in the Edmund A. Walsh School of Foreign Service at Georgetown University. He is a historian of U.S. foreign relations in the 19th and 20th centuries. His research interests center on science and politics in managing the trans-border and global environment.

[1] John Mueller, Atomic Obsession: Nuclear Alarmism from Hiroshima to Al-Qaeda (Oxford; New York: Oxford University Press, 2010), 161-179.

[2] Shane J. Maddock, Nuclear Apartheid: The Quest for American Atomic Supremacy from World War II to the Present (Chapel Hill: University of North Carolina Press, 2010), 1-2.  

[3] Dan Reiter, “Preventive Attacks against Nuclear Programs and the ‘Success’ at Osiraq,” Nonproliferation Review 12, no. 2 (2005): 355-371; Leonard S. Spector and Avner Cohen, “Israel’s Airstrike on Syria’s Reactor: Implications for the Nonproliferation Regime,” Arms Control Today 38, no. 6 (2008): 15-21.

[4] Shane J. Maddock, Nuclear Apartheid: The Quest for American Atomic Supremacy from World War II to the Present (Chapel Hill: University of North Carolina Press, 2010), 1-2.  ​
​
[5] Francis J. Gavin, Nuclear Statecraft: History and Strategy in America’s Atomic Age (Ithaca, NY: Cornell University Press, 2012), 75-76.

[6] William Burr and Jeffrey T. Richelson, “Whether to ‘Strangle the Baby in the Cradle’: The United States and the Chinese Nuclear Program, 1960-64,” International Security 25, no. 3 (2000/01): 55, 61-62. Also see Noam Kochavi, A Conflict Perpetuated: China Policy during the Kennedy Years (Westport, CT: Praeger, 2002).

[7] Ibid., 68-69.

[8] Ibid., 73.

[9] Ibid., 54.

[10] Document 49, Memorandum for the Record, September 15, 1964, in Foreign Relations of the United States, 1964-1968, vol. 30 (Washington: U.S.G.P.O., 1998).

[11] Gavin, 75-103. 

[12] Bill Clinton, My Life (New York: Vintage, 2005), 591.

[13] Michael R. Gordon, “North Korea’s Huge Military Spurs New Strategy in South,” New York Times, February 6, 1994, 1.

[14] Clinton, My Life, 591.

[15] William J. Perry, My Journey at the Nuclear Brink (Stanford, CA: Stanford University Press, 2015), 106. 

[16] Joel S. Wit, Daniel B. Poneman, and Robert L. Gallucci, Going Critical: The First North Korean Nuclear Crisis (Washington: Brookings Institution Press, 2003), 180.

[17] Clinton, My Life, 603.

[18] Perry, My Journey at the Nuclear Brink, 106. 

The Cold War Constraints on the Nuclear Energy Option

3/13/2019

 
Dr. Robynne Mellor.

This is the third post in a collaborative series titled “Environmental Historians Debate: Can Nuclear Power Solve Climate Change?” hosted by the Network in Canadian History & Environment, the Climate History Network, and ActiveHistory.ca.
Mt. Taylor Mine in New Mexico. Author's photograph.

Shortly before uranium miner Gus Frobel died of lung cancer in 1978 he said, “This is reality. If we want energy, coal or uranium, lives will be lost. And I think society wants energy and they will find men willing to go into coal or uranium.”[1]
​

Frobel understood that economists and governments had crunched the numbers. They had calculated how many miners died comparatively in coal and uranium production to produce a given amount of energy. They had rationally worked out that giving up Frobel’s life was worth it.

I have come across these tables in archives. They lay out in columns the number of deaths to expect per megawatt-year of energy produced. They weigh the ratios of deaths in uranium mines against those in coal mines. They coolly walk through the methodology behind these conclusions.

These numbers will show you that fewer people died in uranium mines to produce a certain amount of energy. But the numbers do not include the pages and pages I have read of people remembering spouses, parents, siblings, children who died in their 30s, 40s, 50s, and so on. The numbers do not include details of these miners’ hobbies or snippets of their poetry; they don’t reveal the particulars of miners’ slow and painful wasting away. Miners are much easier to read about as death statistics.

The erasure of these people trickles into debates about nuclear energy today. Any argument that highlights the dangers of coal mining but ignores entirely the plight of uranium miners is based on this reasoning. Rationalizations that say coal is more risky are based on the reduction of lives to ratios.

If we are going to make these arguments, we must first acknowledge entirely what we are doing. We must be okay with what Gus Frobel said and meant: that someone is going to have to assume the risk of energy production and we are just choosing whom. We must realize that it is no accident that these Cold War calculations permeate our discourse today, and what that means moving forward.
Closed Jackpile-Paguate Open Pit Uranium Mine, now a Superfund Site, in New Mexico. Author's photograph.

Promoters of nuclear energy have always tapped into fears about the environment in order to get us to stop worrying and learn to love the power plant. The awesome power of the atom announced itself to the world in a double flash of death and destruction when the United States dropped nuclear bombs on Hiroshima and Nagasaki in August 1945. Following the end of World War II, growing tensions between the United States and the Soviet Union and the consequent Cold War helped spur a proliferation of nuclear weapons production. As nuclear technology became more important and sought after, governments around the world fought against nuclear energy’s devastating first impressions, which were difficult to dislodge from the minds of the public. From the earliest days, in order to combat the atom’s fearsome reputation and put a more positive spin on things, policymakers began pushing its potential peaceful applications.
​
Nuclear technology and the environment were intertwined in many complex and mutually reinforcing ways. From as early as the 1940s, as historian Angela Creager has shown, the US Atomic Energy Commission used the potential ecological and biological applications of radioisotopes as proof of the atom’s promising, nonmilitary prospects. By the 1950s, many hailed nuclear power as a way to escape resource constraints, underlining the comparatively small amount of uranium needed to produce the same amount of energy as coal. Using uranium was a way to conserve oil and coal for longer. In the 1960s, as the popular environmental movement grew, nuclear boosters appealed to the public’s concern for the planet by emphasizing the clean-burning qualities of nuclear energy.

Environmentalism spread around the world, with environmental protection slowly being enshrined in law in several different countries. Environmental concern and protection also became an important part of the Cold War battle for hearts and minds. Nuclear advocates successfully appealed to environmentalist sentiments by avoiding certain problems, such as the intractable waste that the nuclear cycle produced, and emphasizing others, namely, the way it did not pollute the air.

The main arguments of Cold War-era nuclear champions live on to this day. For many pro-nuclear environmentalists, who found these arguments appealing, the reasons to support nuclear energy were and continue to be: far less uranium than coal is needed to produce the same amount of energy; nuclear energy is clean burning; radiation is "natural" and not something to be feared; and nuclear energy will buy us time to find other solutions to the energy crisis, once understood as a fossil fuel shortage and now increasingly as global warming.

In broad strokes, then, these arguments are a Cold War holdover, and so are the anachronistic blind spots that accompany them. They portray nuclear power production as a single snapshot of a highly complex cycle. Nuclear is framed as "clean burning" for a reason; the period when the fuel is burning is the only point at which it can be considered clean. This reasoning made more sense when first promulgated, because a certain hubris accompanied nuclear technology, and part of that hubris was to assume that every problem the technology created could and would be solved. Though that confidence is long gone in general, it still lurks as an assumption undergirding the argument for nuclear energy.

One of the biggest problems that we were once sure we could solve is nuclear waste disposal. It has not been solved, and its complexities continue to multiply. Nuclear waste storage remains a stopgap measure: most waste is still held on or near the surface in various locations, usually close to where it was produced. The best long-term solution is a deep geological repository, but no such storage facility for high-level radioactive waste yet exists. Several countries that have tried to build permanent repositories have faced both political and geological obstacles, such as the Yucca Mountain project in the United States, which the government defunded in 2012.

Finland's Onkalo repository is the most promising site. Many people who pay attention to these issues commend the Finnish government for successfully communicating with, and receiving consent from, the local community. But questions remain about why and how the people alive today can make decisions for people who will live on that land for the next 100,000 years, a timescale that raises further questions about how to communicate risk across millennia. Either way, we will not know whether Onkalo is ultimately successful for a very long time, while the "kitty litter" accident at the Waste Isolation Pilot Plant in New Mexico, where a drum of radioactive waste burst in 2014, hints at how easily things can go wrong and defy careful models of risk.

Promoters continue to use language that clouds this issue. Words such as "storage" and "disposal" obfuscate the inadequacies tied up in these so-called solutions. In truth, disposal amounts to putting waste somewhere and then trying to model the movements of the planet thousands of years into the future to make sure it stays where we put it. By ignoring the disposal problem, we kick the same can down the road that was kicked to us; by developing a disposal system, we just kick it very far into the future. Either way, an antiquated optimism persists in the belief that, one way or another, we will work it out, or that our current solutions have planned for every contingency.

Even if they do so inadequately, advocates of nuclear power often do acknowledge the back end of the nuclear cycle, usually only to dismiss it, but at least it is addressed. By contrast, they ignore the front end of the cycle entirely. This tendency is particularly strange: when uranium is judged against fossil fuels, the ways coal and oil are extracted enter the conversation, while uranium is rarely considered in such terms. We think of coal and oil as things that come from the earth, but uranium is also mined, and its processing chain is just as complex as that of the fuels we seek to replace with it.
Tailings Management Area in Elliot Lake, Canada. Author's photograph.

Discussions of nuclear energy hardly ever mention uranium mining, possibly because uranium mining increasingly occurs in marginalized landscapes that are out of sight and out of mind (northern Saskatchewan in Canada and Kazakhstan are currently the biggest producers). But even for those who do pay attention to uranium mining, the problems associated with it are officially understood as something we have “figured out.”

The prevailing narrative is that, yes, many uranium miners died from lung cancer linked to their work in uranium mines, and yes, there was a lot of waste produced and then inadequately disposed of due to the pressures and expediencies of the Cold War nuclear arms race. But when officials acknowledged these problems, they implemented regulations and fixed them.

It follows that, because there is no longer a nuclear arms race, and because health and environmental authorities understand and accept the risks associated with mining activities, they have appropriately addressed and mitigated the problems linked to uranium production. Moreover, nuclear power generation, because it is separate from the arms race and the nefarious human radiation experiments that accompanied it, is safer and better for miners and communities that surround mines.

Some aspects of this narrative are true. Uranium miners around the world did labor with few protections through at least the late 1960s, after which conditions improved moderately in some places. Several governments introduced and standardized maximum exposure levels for radon progeny, the radioactive decay products of radon that cause lung cancer among miners. More mines had ventilation, monitoring increased, and many places banned miners from smoking underground. By the 1970s and 1980s, many countries considered the health problem solved.

The issue with this portrayal is that the effectiveness of these regulations is far from clear. Allowing a few years for implementation, most countries did not have mines operating at regulated exposure levels until at least the mid-1970s. If we then allow for a lung cancer latency period of at least fifteen years (the accepted minimum even with very high exposures), then any effect would not begin to show until, at the very earliest, the late 1980s or early 1990s.
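The timeline reasoning above amounts to simple arithmetic, sketched below for clarity. The years are the article's own rough estimates, used purely for illustration, not measured data:

```python
# Back-of-the-envelope sketch of the latency arithmetic described above.
# Both constants are the article's approximate figures, not precise data.

REGULATION_EFFECTIVE = 1975  # most mines at regulated exposure levels by the mid-1970s
MIN_LATENCY_YEARS = 15       # accepted minimum lung cancer latency, even at high exposures

# Earliest year any effect of regulation could appear in lung cancer statistics:
earliest_onset = REGULATION_EFFECTIVE + MIN_LATENCY_YEARS
print(earliest_onset)  # 1990, i.e. around the late 1980s / early 1990s
```

Shifting either assumption (an earlier implementation date, a longer latency period) moves the earliest observable onset correspondingly, which is why the collapse of the industry in exactly this window matters for the argument that follows.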

By this period, however, the uranium-mining industry was collapsing. The Three Mile Island accident in 1979, the Chernobyl accident in 1986, and the end of the Cold War arms race meant that plans for nuclear energy stalled and the demand for uranium plummeted. The uranium that did continue to be produced came from new mining regions and new cohorts of workers; or it affected people and places that the public and media ignored; or technology shifted so that fewer people faced the risks of underground uranium mining. There is little information about whether and how the risks miners faced changed.

There is also a dearth of information about how these post-regulation miners compare to their pre-regulation counterparts. One preliminary examination of Canadian uranium miners, however, shows that miners who began work after 1970 had a similar increased risk of mortality from lung cancer to those who began work in earlier decades. This suggests either that radon progeny reduction was ineffective and radon progeny levels in mines were erroneously reported, or that something about the health risks in mines is not quite understood.

There is another relatively well-known narrative about uranium mining that some commentators point to as something we have figured out and corrected. Thanks to the extremely effective activism of the Navajo Nation, beginning in the 1970s and continuing through to the present, many people are aware of the hardships Navajo uranium miners faced and, to a lesser degree, of the continuing legacy of abandoned mines and tailings piles with which they must contend. High-profile advocates for the Navajo, such as former Secretary of the Interior Stewart Udall, and several journalistic and scholarly books on Navajos and uranium mining have added to this awareness. Yet few people who point to the Navajo case realize that there is still a great deal of confusion surrounding the long-term effects of uranium mining on Navajo land. It is an ongoing problem with unsatisfactory answers.

Moreover, even though Navajo activists were adept at attracting attention to the problems they faced, many other uranium-mining communities cannot, do not want to, or have not been able to garner the same attention.  Uranium mining happened and continues to happen around the world, even though the health risks are poorly understood.  It is changing human bodies and landscapes to this day and affecting thousands of miners and communities. Those who work in mines are still making the trade-off between the employment the mine offers on the one hand, and the higher risk of lung cancer on the other.

The environmental effects of uranium mining are also poorly understood and inadequately managed with a view to the long term. When mines are in operation, the waste from uranium mills, called tailings, is usually stored in wet ponds or dry piles. Mill operators try to keep these tailings from moving, and government authorities often regulate these efforts, but tailings still seep into water, spread into soil, and migrate through food chains.

These problems relate to mines and mills in operation, but companies and governments also face several problems with mines and mills that are no longer operating. Uranium production has left landscapes dotted with neglected, abandoned mines, along with millions of tons of radioactive and toxic tailings. There are no good worldwide figures for uranium tailings, but the International Atomic Energy Agency has estimated that the United States alone has produced 220 million tons of mill tailings and 220 million tons of uranium mine wastes.

Waste from uranium production is managed in similar ways around the world.  Using the same euphemistic language employed for nuclear waste coming out of the back-end of the nuclear cycle, tailings from uranium mills are often “disposed.” What disposal usually means is gathering tailings in one area, creating some kind of barrier to prevent erosion—this barrier can be vegetation, water, or rock—and then monitoring the tailings indefinitely to ensure they do not move.

The question that follows is whether or not these tailings are harmful, and the truly unsatisfactory answer is that we do not know. Studies of communities surrounding uranium tailings that consider how tailings affect community health are scarce, while those that do exist are conflicting, inconclusive, and often problematic.  While some studies, with a particular focus on cancer and death, argue that there are no increased illnesses linked to living in former uranium-mining areas, others have connected wastes from uranium production to various ailments, including kidney disease, hypertension, diabetes, and compromised immune system function.

Now, half of all uranium production around the world uses in situ leaching, or in situ recovery, to extract uranium: companies inject an oxidizing agent into an ore body, dissolve the uranium, and then pump the solution out and mill it without first having to mine it. The official line is that this form of extraction has negligible environmental impacts. It certainly reduces risks for miners, but it is unlikely that it leaves the environment untouched.

The environmentalist argument for nuclear energy, particularly its clean-burning component, is very appealing at a time when our biggest concern is climate change. Still, nuclear power is a band-aid technofix with many unknowns. The discussion surrounding nuclear energy has never fully grappled with the entire scope of the nuclear cycle, nor has it addressed the unique aspects of producing energy from metals, which have no parallels among fossil fuels. Making an argument about nuclear energy means examining all of its risks in comparison with fossil fuels, and then coming to terms with the wealth of unknowns.
​
It also means remembering and keeping in mind the bodies and landscapes that make this option possible. To be a nuclear power advocate, especially as an environmentalist, one must also be an advocate for the safety of all nuclear workers. The problems uranium miners and uranium-mining communities faced were never fully resolved, and they are not fully understood. To promote nuclear power means paying attention to the people and places that produce uranium and fighting to make sure they receive the protections they deserve for helping us carve our way out of this current problem.

Robynne Mellor received her PhD in environmental history from Georgetown University, and she studies the intersection of the environment and the Cold War. Her research focuses on the environmental history of uranium mining in the United States, Canada, and the Soviet Union. She tweets at @RobynneMellor.

[1] Gus Frobel, quoted in Lloyd Tataryn, Dying for a Living (Deneau and Greenberg Publishers, 1979), 100.

Only Dramatic Reductions in Energy Use Will Save The World From Climate Catastrophe: A Prophecy

2/27/2019

 
Prof. Andrew Watson, University of Saskatchewan.
This is the third post in a collaborative series titled “Environmental Historians Debate: Can Nuclear Power Solve Climate Change?” hosted by the Network in Canadian History & Environment, the Climate History Network, and ActiveHistory.ca.

There is no longer any debate. Humanity sits at the precipice of catastrophic climate change caused by anthropogenic greenhouse gas (GHG) emissions. Recent reports from the Intergovernmental Panel on Climate Change (IPCC)[1] and the U.S. Global Change Research Program (USGCRP)[2] provide clear assessments: to limit global warming to 1.5ºC above historic levels, thereby avoiding the most harmful consequences, governments, communities, and individuals around the world must take immediate steps to decarbonize their societies and economies.
 
Change is coming regardless of how we proceed. Doing nothing guarantees large-scale resource conflicts, climate refugee migrations from the global south to the global north, and mass starvation. Dealing with the problem in the future will be far more difficult, not to mention more expensive, than making important changes immediately. The only question is what changes are necessary to address the scale of the problem facing humanity. Do we pursue strategies that allow us to maintain our current standard of living, consuming comparable amounts of (zero-carbon) energy? Or do we accept fundamental changes to humanity's relationship to energy?
 
In his new book, The Wizard and the Prophet: Two Remarkable Scientists and Their Conflicting Visions of the Future of Our Planet, Charles C. Mann uses the life, work, and ideologies of Norman Borlaug (the Wizard) and William Vogt (the Prophet) to offer two typologies of twentieth century environmental science and thought. Borlaug represents the school of thought that believed technology could solve all of humanity’s environmental problems, which Mann refers to as “techno-optimism.” Vogt, by contrast, represents a fundamentally different attitude that saw only a drastic reduction in consumption as the key to solving environmental problems, which Mann (borrowing from demographer Betsy Hartmann) refers to as “apocalyptic environmentalism.”[3]
 
In the industrialized countries of the world, the techno-optimist approach enjoys the greatest support. Amongst those who think "technology will save us," decarbonizing the economy means replacing fossil fuel energy with "clean" energy (i.e. energy that does not emit GHGs). Hydropower has nearly reached its global potential and simply cannot replace fossil fuel energy. Solar, wind, and to some extent geothermal are rapidly growing technological options for replacing fossil fuel energy. And as this series reveals, some debate exists over whether nuclear can ever play a meaningful role in a twenty-first century energy transition.
 
The quest for new clean energy pathways aims to rid the developed world of the blame for causing climate change without the need to fundamentally change the way of life responsible for climate change. In short, those advocating for clean energy hope to cleanse their moral culpability as much as the planet’s atmosphere. This is the crux of the climate change crisis and the challenge of how to respond to it. It is not a technical problem. It is a moral and ethical problem – the biggest the world has ever faced.
 
The USGCRP’s Fourth National Climate Assessment warns that the risks from climate change “are often highest for those that are already vulnerable, including low-income communities, some communities of color, children, and the elderly.”[4] Similarly, the IPCC’s Global Warming of 1.5ºC report insists that “the worst impacts tend to fall on those least responsible for the problem, within states, between states, and between generations.”[5] Furthermore, the USGCRP points out, “Marginalized populations may also be affected disproportionately by actions to address the underlying causes and impacts of climate change, if they are not implemented under policies that consider existing inequalities.” Indeed, the IPCC reports, “the worst-affected states, groups and individuals are not always well-represented” in the process of developing climate change strategies. The climate crisis has always been about the vulnerabilities created by energy inequalities. Decarbonizing the industrialized and industrializing parts of the world has the potential to avoid making things any worse for the most marginalized segments of the global population, but it wouldn’t necessarily make anything better for them either. At the same time, decarbonization strategies imagine an energy future in which people, communities, and countries with a high standard of living are under no obligation to make any significant sacrifices to their large energy footprints.
 
Over the last thirty years, industrialized countries, such as Germany, the United States, and Canada have consistently consumed considerably more energy per capita than non-industrialized or industrializing countries (Figure 1). In 2016, industrialized countries in North America and Western Europe consumed three to four times as much energy per capita as the global average, while non-industrialized countries consumed considerably less than the average.
Figure 1: Per capita energy consumption (GJ/person) for select countries, 1990-2016. Source: International Energy Agency (https://www.iea.org/countries/).

Most of the research that has modelled 1.5ºC-consistent energy pathways for the twenty-first century assumes that decarbonization means continuing to use the same amount of, or only slightly less, energy (Figure 2).[6] Most of these models project that solar and wind energy will comprise a major share of the energy budget by 2050 (nuclear, it should be noted, will not). Curiously, the models also project a major role for biofuels. Most alarmingly, however, most models assume major use of carbon capture and storage technology, both to divert emissions from biofuels and to actively pull carbon out of the atmosphere (known as carbon dioxide removal, or negative emissions). The important point here, however, is not the technological composition of these energy pathways, but the continuity of energy consumption over the course of the twenty-first century.
Figure 2: Recent and projected primary global energy consumption, 1990-2050. Source: International Energy Agency (https://www.iea.org/countries/) and Intergovernmental Panel on Climate Change, Global Warming of 1.5ºC, Chapter 2 (http://www.ipcc.ch/report/sr15/).

In case it is not already clear, I do not think technology will save us. Solar and wind energy technology has the potential to provide an abundance of energy, but it won’t be enough to replace the amount of fossil fuel energy we currently consume, and it certainly won’t happen quickly enough to avoid warming greater than 1.5ºC. Biofuels entail a land cost that in many cases involves competition with agriculture and places potentially unbearable pressure on fresh water resources. Carbon capture and storage assumes that pumping enormous amounts of carbon underground won’t have unintended and unacceptable consequences. Nuclear energy might provide a share of the global energy budget, but according to many models, it will always be a relatively small share. Techno-optimism is a desperate hope that the problem can be solved without fundamental changes to high-energy standards of living.
 
The current 1.5ºC-consistent energy pathways include no meaningful changes in the amount of overall energy consumed in industrialized and industrializing countries. The studies that do incorporate “lifestyle changes” into their models feature efficiencies, such as taking shorter showers, adjusting indoor air temperature, or reducing usage of luxury appliances (e.g. clothes dryers); none of which present a fundamental challenge to a western standard of living.[7] Decarbonization models that replace fossil fuel energy with clean energy reflect a desire to avoid addressing the role of energy inequities in the climate change crisis.
 
 
Climate change is a problem of global inequality, not just carbon emissions. Those of us living in the developed and developing countries of the world would like to pretend that the problem can be solved with technology, and that we would not then need to change our lives all that much. In a decarbonized society, the wizards tell us, our economy could continue to operate with clean energy. But it can’t. Any ideas to the contrary are simply excuses for perpetuating a world of incredible energy inequality. We need to heed the prophets and use dramatically less energy. We need to accept extreme changes to our economy, our standard of living, and our culture.

​Andrew Watson is an assistant professor of environmental history at the University of Saskatchewan.

[1] IPCC, 2018: Global warming of 1.5°C. An IPCC Special Report on the impacts of global warming of 1.5°C above pre-industrial levels and related global greenhouse gas emission pathways, in the context of strengthening the global response to the threat of climate change, sustainable development, and efforts to eradicate poverty [V. Masson-Delmotte, P. Zhai, H. O. Pörtner, D. Roberts, J. Skea, P.R. Shukla, A. Pirani, W. Moufouma-Okia, C. Péan, R. Pidcock, S. Connors, J. B. R. Matthews, Y. Chen, X. Zhou, M. I. Gomis, E. Lonnoy, T. Maycock, M. Tignor, T. Waterfield (eds.)]. In Press.

[2] USGCRP, 2018: Impacts, Risks, and Adaptation in the United States: Fourth National Climate Assessment, Volume II [Reidmiller, D.R., C.W. Avery, D.R. Easterling, K.E. Kunkel, K.L.M. Lewis, T.K. Maycock, and B.C. Stewart (eds.)]. U.S. Global Change Research Program, Washington, DC, USA. doi: 10.7930/NCA4.2018.

[3] Charles C. Mann, The Wizard and the Prophet: Two Remarkable Scientists and Their Conflicting Visions of the Future of Our Planet (Picador, 2018), 5-6.

[4] USGCRP, Fourth National Climate Assessment, Volume II, Chapter 1: Overview.

[5] IPCC, Global warming of 1.5°C, Chapter 1.
​

[6] IPCC, Global warming of 1.5°C; Detlef P. van Vuuren, et al., “Alternative pathways to the 1.5°C target reduce the need for negative emission technologies,” Nature Climate Change, Vol.8 (May 2018): 391-397; Joeri Rogelj, et al., “Scenarios towards limiting global mean temperature increase below 1.5°C,” Nature Climate Change, Vol.8 (April 2018): 325-332.

​
[7] Mariësse A.E. van Sluisveld, et al., “Exploring the implications of lifestyle change in 2°C mitigation scenarios using the IMAGE integrated assessment model,” Technological Forecasting and Social Change, Vol.102 (2016): 309-319.