Prof. David J. Nash, University of Brighton, UK, and University of the Witwatersrand, South Africa
To grasp the significance of global warming, and to confirm its connection to human activity, we have to know how climate has changed in the past. Understanding how climate has varied over historical timescales requires access to robust long-term datasets. This is not a problem for regions such as Europe and North America, which have a centuries-long tradition of recording meteorological data using weather instruments (thermometers, for example). However, for large areas of the world the ‘instrumental period’ begins, at best, in the late 19th or early 20th century. This includes Africa, where, with the exception of Algeria and South Africa, instrumental data for periods earlier than 1850 are sparse. To overcome this scarcity, other approaches are used to reconstruct past climates, most notably analyses of accounts of weather events and their impacts in historical documents.
Compared to the wealth of documentary evidence available for areas such as Europe and China, there are relatively few collections of written materials that allow us to explore the historical climatology of Africa. Documents in Dutch exist from the area around Cape Town that date back to the earliest European settlers in 1652, and Arabic- and Portuguese-language documents from northern and southern Africa, respectively, are likely to include climate perspectives from even further back in time. However, the bulk of written evidence for Africa stems from the late 18th century onwards, with a proliferation of materials for the 19th century following the expansion of European colonial activity.
These documents are increasingly used by historical climatologists to reconstruct sequences of rainfall variability for the African continent. This focus on rainfall isn’t surprising, given that rainfall was – and is – critical for human survival. As a result, people tended to write about its presence or absence in diaries, letters, and reports. In turn, these rainfall reconstructions are now used by historians as a backdrop when exploring climate-society relationships for specific time periods. It is therefore critical that we understand any issues with rainfall reconstructions in case they mislead or misinform.
This article will take you under the hood of the practice of reconstructing past climate change. Its aims are to: (a) provide an overview of historical climatology research in Africa at continental to regional scales; and (b) show how distinct approaches to rainfall reconstruction in different studies can potentially produce very different rainfall chronologies, even for the same geographical area (which of course alters the kinds of environmental histories that can be written about Africa). The article concludes with some personal reflections on how we might move towards a common approach to rainfall reconstruction for the African continent.
Different approaches to rainfall reconstruction in Africa
Most historical rainfall reconstructions for Africa use evidence from one or more source types (Figure 1). A small number of studies are based exclusively upon early instrumental meteorological data. Of these, some (the continent-wide analysis by Nicholson et al. in 2018, for example) combine rain gauge data published in 19th-century newspapers and reports with more systematically collected precipitation data from the 19th to 21st centuries, to produce quantitative or semi-quantitative time series. Others, such as Hannaford et al. (2015), for southeast Africa, use data digitized from ship logbooks to generate quantitative regional rainfall chronologies.
Most reconstructions, however, draw on European traditions by using narrative accounts of weather and related phenomena contained within documentary sources (such as personal letters, diaries/journals, reports, newspapers, monographs and travelogues) to develop semi-quantitative relative rainfall chronologies. Some of the most widely available materials are those written by early explorers, missionaries, and figures of colonial authority. The use of such evidence permits the reconstruction of rainfall for periods well before the advent of meteorological data collection.
The greatest number of regional documentary-based reconstructions is available for southern Africa, which forms the focus of this article. These draw on documentary evidence from a combination of published and unpublished sources, often using available instrumental data for verification and calibration, and span much of the 19th century. Where information density permits, it has been possible to reconstruct rainfall variability down to seasonal scales (see, for example, a study by Nash et al. in 2016). There are, in addition, continent-wide series that integrate narrative information from mainly published sources with available rainfall data (Nicholson et al., 2012, for 90 homogeneous rainfall regions across mainland Africa).
An important point to note is that the various reconstructions adopt different methodologies for analyzing documentary evidence. For example, all of the regional studies in southern Africa noted above use a five-point scale to classify annual rainfall (from –2 to +2; extremely dry to extremely wet). Scholars decide how to classify a specific rainy season in a region through qualitative analysis of the collective documentary evidence for that season. In other words, they take into account all quotations describing weather and related conditions. This contrasts with the approach used by Nicholson and colleagues in their 2012 continent-wide rainfall series. In that reconstruction, scholars attribute a numerical score on a seven-point scale (from –3 to +3) to each individual quotation according to how wet or dry conditions appear to have been, then sum and average the scores for all items of evidence for a specific region and year. As we will see, these distinct analytical approaches, which may draw on different documentary evidence, can introduce significant discrepancies between rainfall series.
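The contrast between the two classification approaches can be sketched in a few lines of code. This is an illustrative sketch only, not the published workflow of either research group, and the quotation scores below are invented example data:

```python
# Hypothetical wetness scores a scholar might assign to individual
# quotations about one rainy season in one region, on the seven-point
# scale (-3 = very dry ... +3 = very wet) used by Nicholson et al.
quotation_scores = [-3, -2, -2, 0, -1]

# Summative (Nicholson-style) approach: score each quotation
# separately, then sum and average.
nicholson_index = sum(quotation_scores) / len(quotation_scores)  # -1.6

def to_five_point(value: float) -> int:
    """Clamp and round a continuous wetness value onto the five-point
    scale (-2 = extremely dry ... +2 = extremely wet) used by the
    regional southern African studies."""
    return max(-2, min(2, round(value * 2 / 3)))

# Regional-study-style approach: in reality a single qualitative
# judgement made after reading all the evidence together; here we
# mimic it crudely by mapping the average onto the coarser scale.
regional_class = to_five_point(nicholson_index)  # -1 (dry)
```

The key difference in practice is that the summative approach weights every quotation equally, so a handful of sources repeating the same dry-season complaint can dominate the average, whereas a holistic qualitative reading can discount repetition.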
Comparisons between rainfall series
A compilation of all the available annually-resolved rainfall series for mainland southern Africa is shown in Figure 2. This includes seven series (g-m) based exclusively on documentary evidence, four regional series (c-f) from Nicholson et al. (2012) based on combined documentary evidence and rain gauge data, the 19th-century portion of the ships’ logbook reconstruction series (b) by Hannaford et al. (2015), and, for comparison, the 19th-century section of a width-based tree ring rainfall reconstruction (a) for western Zimbabwe by Therrell et al. (2006). With the exception of the Cape Winter Rains series, all are for areas of southern Africa that receive rainfall predominantly during the summer months.
Fig. 2. Annually-resolved rainfall reconstructions for southern Africa, spanning the 19th century. (a) Tree-ring width series by Therrell et al. (2006); (b) ships’ logbook-based reconstructions by Hannaford et al. (2015); (c-f) combined documentary and rain-gauge reconstructions by Nicholson et al. (2012); (g-m) documentary-based reconstructions by (g) Nash et al. (2018), (h) Grab and Zumthurm (2018), (i) Kelso and Vogel (2007), (j) Nash and Endfield (2002, 2008), (k) Nash and Grab (2010), (l) Nash et al. (2016), (m) Vogel (1989).
This compilation shows that, in the 19th century, rainfall varied from place to place across southern Africa. However, we can identify a number of droughts that affected large areas of the subcontinent. Droughts, for example, stretched across southern Africa in the mid-1820s, mid-1830s, around 1850, early-mid-1860s, late-1870s, early-mid-1880s and mid-late-1890s. We can also pinpoint a smaller number of coherent wetter years: in, for example, the rainy seasons of 1863-1864 and 1890-1891. Analyses that use many different climate ‘proxies’ – that is, sources that register but do not directly measure past climate change – indicate that the early-mid-1860s drought was the most severe of the 19th century, and that of the mid-late-1890s the most protracted (see, for example, studies by Neukom et al., 2014, and Nash, 2017).
The inset map in Figure 2 reveals that a number of rainfall series overlap in their geographical coverage, which allows a direct comparison of results. In some cases, the overlap is between series created using very different methodologies. For the most part, there is good agreement between these overlapping series, but there are some significant differences. The rest of this article will focus on two of these periods of difference: the 1810s in southeast Africa, and the 1890s in Malawi.
How dry was the first decade of the 19th century in southeast Africa?
Four rainfall series are available for southeast Africa for the first decade of the 19th century (Figure 3) – documentary series for South Central Africa and the Kalahari (by Nicholson et al., 2012), a tree-ring series for Zimbabwe (Therrell et al., 2006), and a ships’ log series for KwaZulu-Natal (Hannaford et al., 2015). Collectively, these series suggest that there was at least one major drought that potentially affected much of the region.
This was a very important time in the history of southeast Africa. The multi-year drought is remembered vividly in Zulu oral traditions as the ‘mahlatule’ famine (translated as ‘the time we were obliged to eat grass’). Scholars have seen it as a trigger for political revolution and reorganization, one that ultimately led to the dominance of the Zulu polity.
Fig. 3. Comparison of four annually-resolved rainfall reconstructions for southeast Africa for the first half of the 19th century, including the tree ring series for Zimbabwe by Therrell et al. (2006), the combined documentary and rain-gauge reconstructions for South Central Africa and the Kalahari by Nicholson et al. (2012), and the ships’ logbook reconstructions for southeast South Africa by Hannaford et al. (2015). The inset map shows the location of each series.
Yet there are some discrepancies between the overlapping records, which have important implications for our understanding of relationships between climate change and society. For example, while the documentary-based South Central Africa series in Figure 3 suggests protracted drought from 1800 to 1811, the overlapping tree ring series for Zimbabwe infers periods of average or above-average rainfall, alternating with drought. A similar contrast is shown between the documentary-based Kalahari series (which encompasses the southern Kalahari but extends to the east coast of South Africa) and the overlapping ships’ logbook-based reconstruction for KwaZulu-Natal.
Since these series are based on different evidence, it is impossible to tell which is more likely to be ‘right’. However, the rainfall series based on documentary evidence are clearly less sensitive to interannual rainfall variability than those based on ships’ log data or tree rings, at least for the early 19th century. This is surprising, as a major strength of documentary evidence is normally the way that it captures extreme events.
The reasons for these discrepancies are unclear, but are likely to be methodological. The Africa-wide rainfall series by Nicholson and colleagues, from which the South Central Africa and Kalahari series in Figure 3 are derived, is a model of research transparency – it identifies the evidence base for every year of the reconstruction, with all documentary and other data made available via the NOAA National Climatic Data Center. Inspection of this dataset indicates that the reconstructions for the early 1800s in southern Africa are based on a limited number of published monographs and travelogues, written mainly by explorers. While these are likely to include eyewitness testimonies, there is potential for bias towards drier conditions. The majority of authors were western European by birth and, in some cases, their writings reflected their first travels in the subcontinent. It wouldn’t be at all surprising if they found southern Africa significantly drier than home.
How dry was the last decade of the 19th century in Malawi?
The collective evidence for rainfall variability around present-day Malawi during the mid-late 19th century is shown in Figure 4. Here, two rainfall reconstructions overlap: the first, a reconstruction for three regions of the country based primarily on unpublished documentary evidence by Nash et al. (2018); and the second, the South Central Africa series and adjacent rainfall zones of Nicholson et al. (2012).
Fig. 4. Comparison of two annually-resolved rainfall reconstructions for southeast Africa for the second half of the 19th century, including a documentary-based reconstruction for three regions of Malawi (Nash et al., 2018), and the combined documentary and rain-gauge reconstruction for South Central Africa by Nicholson et al. (2012). The inset map shows the location of each series.
Extreme events, such as the droughts of the early-1860s, mid-late-1870s, and mid-late-1880s, and a wetter period centred around 1890-91, are visible in both reconstructions. However, there are discrepancies in other decades, most notably during the 1890s where the Nicholson series indicates mainly normal to dry conditions, and the Nash series a run of very wet years.
Delving deeper into the documentary evidence underpinning the Nicholson series suggests that the discrepancies may again be methodological, and strongly influenced by source materials. As with the other regional reconstructions for southern Africa, the Nash study bases annual classifications on average conditions across a large body of mainly unpublished primary documentary materials. Nicholson, by contrast, uses smaller numbers of mainly published documentary materials, combined with rain gauge data. An over-emphasis on references to dry conditions in these documents, combined with an absence of gauge data for specific regions and years, could therefore skew the results.
The way forward?
There are two main take-home messages from this article. First, on the basis of a comparison of annually-resolved southern African rainfall series, documentary data appear less sensitive to precipitation variability than other types of proxy evidence, even for some extreme events. Discrepancies are most apparent for periods of the early 19th century, where documentary evidence is relatively sparse.
Second, different approaches to reconstruction may produce different results, especially where documentary evidence is combined with gauge data. The summative approach used by Nicholson and colleagues, for example, where individual quotations are classified, summed and averaged, may be much more sensitive to bias from individual sources when data are sparse.
Having identified these potential issues, one way forward might be to run some experimental studies using different approaches on the same collections of documentary evidence to assess the impact of methodological variability on rainfall reconstructions. This would be no small task, as it would mean re-analyzing some large datasets. However, it would confirm or dismiss the suggestions made here about the relative effectiveness of different methodologies.
These experimental studies would help us to identify the ‘best practice’ for reconstructing African rainfall. They would allow us to improve the robustness of the baseline data available for understanding historical rainfall variability in the continent likely to be most severely impacted by future climate change. They would also permit us to refine our understanding of past relationships between climatic fluctuations and the history of African communities. These relationships may offer some of our best perspectives on the future of African societies on a warming planet.
Brázdil, R. et al. 2005. "Historical climatology in Europe – the state of the art." Climatic Change 70: 363-430.
Grab, S.W. and Zumthurm, T. 2018. "The land and its climate knows no transition, no middle ground, everywhere too much or too little: a documentary-based climate chronology for central Namibia, 1845–1900." International Journal of Climatology 38 (Suppl. 1): e643-e659.
Hannaford, M.J. and Nash, D.J. 2016. "Climate, history, society over the last millennium in southeast Africa." Wiley Interdisciplinary Reviews-Climate Change 7: 370-392.
Hannaford, M.J. et al. 2015. "Early-nineteenth-century southern African precipitation reconstructions from ships' logbooks." The Holocene 25: 379-390.
Kelso, C. and Vogel, C.H. 2007. "The climate of Namaqualand in the nineteenth century." Climatic Change 83: 357-380.
Nash, D.J., 2017. Changes in precipitation over southern Africa during recent centuries. Oxford Research Encyclopedia of Climate Science, doi: 10.1093/acrefore/9780190228620.013.539.
Nash, D.J. and Endfield, G.H. 2002. "A 19th century climate chronology for the Kalahari region of central southern Africa derived from missionary correspondence." International Journal of Climatology 22: 821-841.
Nash, D.J. and Endfield, G.H. 2008. "'Splendid rains have fallen': links between El Nino and rainfall variability in the Kalahari, 1840-1900." Climatic Change 86: 257-290.
Nash, D.J. and Grab, S.W. 2010. "'A sky of brass and burning winds': documentary evidence of rainfall variability in the Kingdom of Lesotho, Southern Africa, 1824-1900." Climatic Change 101: 617-653.
Nash, D.J. et al. 2018. "Rainfall variability over Malawi during the late 19th century." International Journal of Climatology 38 (Suppl. 1): e629-e642.
Nash, D.J. et al. 2016. "Seasonal rainfall variability in southeast Africa during the nineteenth century reconstructed from documentary sources." Climatic Change 134: 605-619.
Neukom, R. et al. 2014. "Multi-proxy summer and winter precipitation reconstruction for southern Africa over the last 200 years." Climate Dynamics 42: 2713-2726.
Nicholson, S.E. et al. 2012. "Spatial reconstruction of semi-quantitative precipitation fields over Africa during the nineteenth century from documentary evidence and gauge data." Quaternary Research 78: 13-23.
Nicholson, S.E. et al. 2018. "Rainfall over the African continent from the 19th through the 21st century." Global and Planetary Change 165: 114-127.
Pfister, C. 2018. "Evidence from the archives of societies: Documentary evidence - overview". In: White, S., Pfister, C., Mauelshagen, F. (eds.) The Palgrave Handbook of Climate History. Palgrave Macmillan, London, pp. 37-47.
Therrell, M.D. et al. 2006. "Tree-ring reconstructed rainfall variability in Zimbabwe." Climate Dynamics 26: 677-685.
Vogel, C.H. 1989. "A documentary-derived climatic chronology for South Africa, 1820–1900." Climatic Change 14: 291-307.
Dr. Dagomar Degroot, Georgetown University
Until recently, it was notoriously difficult to connect today’s extreme weather with the gradual trends of climate change. Scientists shied away from saying, for example, that catastrophic droughts or severe hurricanes reflected the influence of anthropogenic global warming. Yet today, scientists use big data from satellites and weather stations to inform supercomputer simulations that reveal the extent to which warming trends have raised the odds for previously unusual weather. Scientists now report, for example, that the drought that crippled Syria between 2006 and 2009 was between two and three times more likely in today’s climate than it would have been a century earlier. They feel comfortable concluding that the rains of Hurricane Harvey were perhaps 20 times more likely now than they once were. Armed with these statistics, many scholars and journalists now conclude that events like the Syrian Civil War, which unfolded in the wake of that devastating drought, can be convincingly connected to climate change.
Yet how can we link past climate change – change that happened before the advent of big weather data – to human affairs? Many historians and archaeologists favor qualitative methods. They identify weather events in surviving documents, or in paleoclimatic proxy data (such as tree rings, ice cores, or lakebed sediments) that register the influence of temperature or precipitation. Next, they carefully study texts or ruins to determine how these weather events influenced human activities – such as farming, hunting, or sailing – that clearly depended on favorable weather. By looking at enough of these relationships, over a long enough timeframe, they ultimately reach conclusions about the influence of weather trends – that is, climate change – on the human past.
Environmental historians might be most familiar with these qualitative methods. They inform a raft of new books and articles in climate history, on diverse topics that range from the fall of Rome to the colonization of Australia; from the origins of apocalyptic Norse mythology to the travails of Arctic whalers.
But these qualitative methods are much less influential beyond the historical profession. Today, there is a large and rapidly growing “quantitative” school of climate history that instead relies on statistical means to discern the impact of climate change on human history. Papers in this school are cited more frequently in the latest IPCC assessment report, for example, than publications written by historians who prefer more qualitative means of doing history.
Natural scientists, economists, and historical geographers in the quantitative school of climate history quantify diverse social variables in particular regions, then graph their highs and lows over decades, centuries, even millennia. Next, they develop or make use of temperature or precipitation reconstructions for those same regions across identical timescales. Finally, they use statistical methods to find covariance between their graphs of social and climatic trends.
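The statistical core of this workflow is simple to sketch. The snippet below is illustrative only, with invented decadal numbers rather than data from any of the studies discussed; it computes Pearson's correlation coefficient between a quantified social variable and a climate reconstruction:

```python
import math

# Hypothetical decadal series: counts of conflicts in a region, and a
# temperature anomaly (degrees C) reconstructed for the same decades.
conflicts = [4, 7, 9, 6, 3, 8, 10, 5]
temp_anomaly = [-0.2, -0.5, -0.6, -0.3, 0.1, -0.4, -0.7, -0.1]

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# A strongly negative r here would read as "cooler decades saw more
# conflict" - the kind of covariance the quantitative school reports.
r = pearson_r(conflicts, temp_anomaly)
```

As the rest of this article argues, the arithmetic is the easy part; everything turns on whether the numbers fed into it (the conflict counts, the climate reconstruction, the shared spatial scale) are themselves defensible.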
Most published work in this vein finds statistically significant correlations between these trends. In study after study, Chinese historical geographers have found striking correlations between climatic cooling and the wars, rebellions, and dynastic transitions of Imperial China. European scholars have found equally impressive correlations between cool, wet conditions and conflict in northwestern Europe over the past five centuries. In southeastern and central Europe, by contrast, correlations exist between conflict and warm, dry weather. Another, even more ambitious study finds a strong correlation between European wars and climate changes over 2,600 years.
Quantitative climate historians often focus on China and Europe, and not only because most of them live in these regions. People across much of China and Europe have long relied on rain-fed agriculture, which should have been especially sensitive to fluctuations in temperature or precipitation. They also kept unusually detailed, and unusually continuous, records of their activities. Yet a growing group of quantitative researchers now concentrates on the much more recent history of sub-Saharan Africa, where millions continue to rely on rain-fed agriculture. Many studies correlate warming, drying trends across Africa to twentieth-century civil wars, although some emphasize that these correlations only existed under the right socioeconomic conditions.
The great appeal of quantitative approaches to climate history is that they seem to replace the messiness of the historian’s craft, and the subjectivity of the qualitative findings, with scientific objectivity and certainty. Quantitative historians have used statistical correlation not only to confidently explain the past, but also to predict the future. Already in 2007, historical geographers concluded, for example, that Chinese “war-peace, population, and price cycles in recent centuries have been driven mainly by long-term climate change.” Two years later, another group controversially concluded that the frequency of civil wars in sub-Saharan Africa would likely increase as the continent warmed, since a regional correlation between temperature and violence existed in the past.
But have quantitative scholars really found a better way of doing climate history, one that at last permits predictions of the kind that always remain frustratingly out of reach for historians and archaeologists? Well, not quite. On close examination, the soaring claims made by many quantitative scholars in fact rest on assumptions that remain frustratingly subjective . . . and at times, simply misguided.
Most importantly, the correlations identified in quantitative work are meaningless unless their trends reflect the right data. Some studies of this kind use weather observations in surviving documents to graph centuries-long trends in temperature or precipitation. Decades ago, the great meteorologist Hubert Lamb relied on much the same method to identify a hot climatic regime that he called the “Medieval Warm Period.” The medieval centuries, Lamb concluded, were at least as warm as the late twentieth century.
While some deniers of anthropogenic global warming still use Lamb’s graph, scholars have changed the name of his period to the “Medieval Climate Anomaly.” It turns out that Lamb, unversed in the art of reading historical sources, simply took medieval references to weather at face value. When the legitimate weather observations examined by Lamb are read more carefully, and used alongside climate reconstructions compiled with more reliable tree ring or ice core data, they reveal a period of modest but erratic warming in the high medieval centuries. Nothing comparable, in other words, with late twentieth-century warming.
The lesson here is that references to weather in ancient documents do not always simply reveal the state of the atmosphere in a particular time and place. The problem is much more acute when considering very long timescales. Before the instrumental era, even seemingly reliable weather observations over decades or centuries are really the product of many observers, some of whom might use different methods to record weather. Moreover, sources that may seem especially dependable at a glance – such as many European chronicles – in fact refer to weather metaphorically, or use fabricated weather events to justify the course of human affairs.
Researchers should therefore strive to use weather observations in historical documents as a starting point – only a starting point! – in a long process of reconstructing a region’s climate. Where possible, documentary evidence should be used alongside climate reconstructions compiled with tree rings, ice cores, lakebed sediments, and the many other proxy sources in natural archives. The best climate reconstructions often use the most proxies. Of course, many excellent reconstructions have now been published for most parts of the world. There is often little need to develop a regional climate reconstruction from scratch.
In quantitative climate history, multi-proxy climate reconstructions should also reveal climatic trends on the same spatial scale as the social variables under consideration. Even some scholars who do use so-called “multiproxy” climate reconstructions to find their correlations go on to match trends of global or hemispheric temperature or precipitation with trends of local or regional historical events. Yet before the onset of anthropogenic global warming, climatic trends rarely unfolded at the same time in every part of the globe. A general cooling trend across the northern hemisphere, for example, did not always lead to colder temperatures in China.
If quantitative climate historians face problems when choosing which climate reconstructions to use – or how to make them – these pale in comparison to those that bedevil their attempts to quantify social variables. Quantitative studies of war, for example, have used makeshift and now defunct websites to determine when wars began and ended. Others have relied on historical scholarship that is well over a century old. It is as though the historical geographers, political scientists, natural scientists, and economists who typically write quantitative climate history do not recognize that the disciplines of history and archaeology are as rigorous and dynamic as their own. Naturally, correlations that rely on obsolete or untrustworthy data about the human past can tell us little about the influence of climate change on human history.
Jan de Vries, Philip Slavin, and I have also flagged a second big problem faced by quantitative approaches to climate history. Studies that find correlations over centuries, let alone millennia, rarely appreciate that social variables change through time. A statistically significant correlation between warming and economic growth in the high medieval centuries, for example, does not necessarily hint at the same kind of relationship between climate change and human affairs as a similar correlation several hundred years later. Over the course of those centuries, the cultural, economic, social, and political pathways by which climate change affects human life may have fundamentally changed, and the individuals who control those pathways will have obviously died. The question becomes: what are quantitative approaches to climate change really measuring?
That gets us into a third, and related, problem of quantifying the human past. It is one thing to quantify a particular kind of agricultural production over long timespans. Though agricultural practices can change dramatically over those timespans, even in pre-modern societies, scholars may still find correlations between agricultural yields and climatic trends that can suggest something new about the human past. Yet it is quite another matter to quantify the number or intensity of a major social event, such as a war.
Attempts to link the number of wars by decade to decadal temperature or precipitation, for example, face the challenge of quantifying long and complex wars: precisely the kind of war that often placed the greatest strain on agricultural resources also affected by climate change. Scholars might consider the Thirty Years’ War, for example, as either a single war or a series of wars, and their subjective choice would determine the correlation identified in a study between seventeenth-century climate change and European conflict. In some of these studies, the early seventeenth century may look like a time of relative peace in Germany!
Scientists have also used arbitrary numbers to decide when violence amounts to a war. Does violence rise to the status of war when at least 1,000 people have died, as some studies assume? Presumably the standard would be higher in very populous societies and lower in less populated ones, but this distinction is never made in quantitative studies. Graphing wars by quantity can also lead scholars to misrepresent changes in quality. Scholars might easily count the First and Second World Wars as only two wars, for example, yet of course their material and human costs dwarfed those of any previous conflict. If problems of this nature plague the superficially simple task of correlating the number of wars to temperature trends, imagine the challenges of determining similar correlations with, for example, economic development or cultural efflorescence!
It turns out that quantitative approaches to climate history often obscure more than they reveal. Far from providing a more objective, “scientific” way of understanding the impact of climate change on the human past, they really rely on assumptions that are every bit as subjective as those made in more qualitative work. Yet unlike many qualitative climate historians, they leave those assumptions unacknowledged.
I am convinced that quantitative climate historians could fruitfully address at least some of these problems by interacting more with qualitative scholars, most of whom work in the humanities. Unfortunately, many historians, at least, have not heard of quantitative approaches to climate history, while most quantitative scholars have little inkling of qualitative approaches to their subject. Remarkably, I have never seen a work of qualitative climate history cited within a paper that aims to identify correlation. Part of the problem is that quantitative and qualitative scholars often work in different media. While historians prioritize books, most quantitative scholars value short, multi-authored articles.
Yet collaboration is surely possible, and if so it would undoubtedly prove productive. Quantitative scholars have recently used statistical means to identify not only how climate change might be correlated to human activities, but also how it might have partly accounted for – that is, caused – those activities. Such studies have yielded models that are in effect variants of ones that qualitative scholars had already developed. What if they had worked with qualitative scholars from the start? Meanwhile, qualitative scholars often use statistics to support their conclusions, without always understanding what those statistics actually reveal (or what they don’t). What if qualitative scholars consulted colleagues in more quantitative disciplines while developing these statistics?
At the Climate History Network, we will strive to incorporate more quantitative scholars within our ranks. Perhaps we will be able to build a shared community in the coming years, one that will yield a more comprehensive kind of climate history.
Buhaug, Halvard. “Climate not to blame for African civil wars.” Proceedings of the National Academy of Sciences 107:38 (2010): 16477-16482.
Büntgen U. et al. “2500 years of European climate variability and human susceptibility.” Science 331:6017 (2011): 578-582.
Burke, Marshall B. et al. “Climate robustly linked to African civil war.” Proceedings of the National Academy of Sciences 107:51 (2010): E185-E185.
Burke, Marshall B. et al. “Warming increases the risk of civil war in Africa.” Proceedings of the National Academy of Sciences 106:49 (2009): 20670-20674.
Degroot, Dagomar. “Climate Change and Conflict,” in The Palgrave Handbook of Climate History, eds. Christian Pfister, Franz Mauelshagen, and Sam White. Basingstoke: Palgrave Macmillan, 2018.
Slavin, Philip. “Climate and famines: a historical reassessment.” Wiley Interdisciplinary Reviews: Climate Change 7:3 (2016): 433-447.
Theisen, Ole Magnus. “Climate clashes? Weather variability, land pressure, and organized violence in Kenya, 1989-2004.” Journal of Peace Research 49:1 (2012): 81-96.
Theisen, Ole Magnus, Helge Holtermann, and Halvard Buhaug. “Climate wars? Assessing the claim that drought breeds conflict.” International Security 36:3 (2011): 79-106.
Tol, Richard, and Sebastian Wagner. “Climate change and violent conflict in Europe over the last millennium.” Climatic Change 99 (2010): 65-79.
Zhang, David. “Climate Change and War Frequency in Eastern China over the Last Millennium.” Human Ecology 35 (2007): 403-414.
Zhang, David, and Harry Lee. “Climate Change, Food Shortage and War: A Quantitative Case Study in China during 1500-1800.” Catrina 5:1 (2010): 63-71.
Zhang, David et al. “Climatic change, wars and dynastic cycles in China over the last millennium.” Climatic Change 76 (2006): 459-477.
Zhang, David, Peter Brecke, Harry F. Lee, Yuan-Qing He, and Jane Zhang. “Global climate change, war, and population decline in recent human history.” Proceedings of the National Academy of Sciences 104:49 (2007): 19214-19219.
Zhang, David. “The causality analysis of climate change and large-scale human crisis.” Proceedings of the National Academy of Sciences 108:42 (2011): 17296-17301.
Zhang, Dian et al. “Climate change, social unrest and dynastic transition in ancient China.” Chinese Science Bulletin 50:2 (2005): 137-144.
Zhang, Zhibin. “Periodic climate cooling enhanced natural disasters and wars in China during AD 10-1900.” Proceedings of the Royal Society B 277 (2010): 3745-3753.
Dr. Dagomar Degroot, Georgetown University
Earth’s climate is changing with terrifying speed. Humanity has added several hundred billion tons of carbon dioxide to the atmosphere, strengthening a greenhouse effect that has now warmed the planet by roughly one degree Celsius. The scale, speed, and causes of today’s global warming have no precedent, but of course natural forces have always changed Earth’s climate. We now know that these changes were big enough to shape the fates of past societies. Most confronted disaster, but a few seemed to prosper in spite of – and in some cases because of – climate changes. Perhaps the most successful of all emerged in the coastal fringes of the present-day Netherlands. It has left us with lessons that may offer new perspectives on our fate in a warmer world.
To contextualize present-day warming, paleoclimatologists have scoured the globe for signs of past climate change. They have found layers buried deep in glacial ice and cave stalagmites, sediments embedded in lakebeds and ocean floors, and rings wound around tree trunks and stony corals. All bear silent testament to ancient weather. Together, they reveal that, sometime in the thirteenth century, Earth’s climate started cooling.
Huge volcanic eruptions lofted dust high into the stratosphere that blocked incoming sunlight. The Sun itself slipped into a dormant phase, sending less energy to the Earth. A long-running shift in Earth’s axial tilt gradually reduced the amount of solar energy that reached the northern hemisphere. Sea ice expanded, wind patterns changed, and ocean currents altered their flow. Patterns of precipitation fluctuated dramatically, bringing torrential rains to some places, and unprecedented droughts to others. A long “Little Ice Age” had begun.
A tree ring reconstruction of average summer temperatures in the Northern Hemisphere over the past 2,500 years (red), with a thirty-year moving average (blue). The baseline (“0”) is the late twentieth-century average. Temperatures in the seventeenth century were cold but erratic. Developed from M. Sigl et al., “Timing and Climate Forcing of Volcanic Eruptions for the Past 2,500 Years,” Nature 523 (2015): 545.
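The thirty-year moving average in the caption is a standard smoothing device for noisy annual series. A minimal sketch of a centered moving average follows; the data and small window are illustrative only, and note that for an even window like 30 this convention averages window + 1 points (year ± 15):

```python
def centered_moving_average(series, window=30):
    # Centered moving average: each point becomes the mean of the values
    # within +/- (window // 2) positions of it. For an even window the
    # average therefore spans window + 1 points.
    # Returns None where the window would run past the ends of the series.
    half = window // 2
    n = len(series)
    out = []
    for i in range(n):
        if i - half < 0 or i + half >= n:
            out.append(None)
        else:
            segment = series[i - half:i + half + 1]
            out.append(sum(segment) / len(segment))
    return out

# Illustrative use on a short made-up anomaly series with a small window:
print(centered_moving_average([0.1, 0.3, -0.2, -0.4, 0.2], window=2))
```

Smoothing of this kind is why the blue curve in such figures tracks multidecadal trends while the red annual values jump around it.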
In the closing decades of the sixteenth century, this Little Ice Age reached its chilliest point across much of the northern hemisphere. By then, the world had cooled by nearly one degree Celsius, relative to average temperatures in the twentieth century. In many places, weather had also grown more volatile and less predictable from year to year, season to season. Despite its name, the Little Ice Age involved more than constant cooling.
Historians, historical geographers, and archaeologists have argued that the onset of the coldest and most erratic phase of the Little Ice Age could not have come at a worse time. For centuries, populations in the greatest empires of the day had steadily increased. By the sixteenth century, millions depended on crops stubbornly cultivated in arid, unproductive farmland. When falling temperatures shortened growing seasons, when monsoons failed, or when storms flooded fields, harvests in these regions failed again and again.
Many farmers responded by swapping crops that prefer warm, stable weather for those that cope better with cold, volatile conditions. Some diversified their fields. Yet often there was just no dealing with droughts, torrential rains, or cold snaps that lasted for longer than a year or two. Famine and then starvation spread from the plains of the Aztec Empire to the woodlands of the Mutapa Kingdom, from the steppes of the Grand Duchy of Moscow to the rice fields of the Ming Dynasty.
The worst was yet to come. Temperature and precipitation extremes sickened plants and animals alike, compounding food shortages. As temperatures dropped, farmers huddled in huts with their ailing livestock. In those conditions, diseases spread easily from animals to people. Malnourished human bodies, meanwhile, have weak immune systems, which makes them easy prey for bacteria and viruses. Changing weather patterns also altered the range of insects that carried disease pathogens, bringing new and deadly ailments to the previously unexposed. In empire after empire, millions fled from the famine-stricken countryside, unwittingly infected by diseases that they carried to cities. Where famine lingered, epidemic outbreaks often followed.
In one empire after another, the sick and starving blamed governments for their misery. They were usually right. Few governments responded constructively to the crises they faced, and most made them worse by, for example, increasing taxes or embarking on wars. The coldest stretch of the Little Ice Age therefore coincided with an unprecedented surge of revolts and civil wars. Rebel and state armies alike conscripted farm laborers from the already overburdened countryside, imposed new demands on marginal farmland, and joined refugees in spreading disease. In the end, millions died.
Yet remarkably, inhabitants of the Dutch Republic – the precursor state to today’s Netherlands – enjoyed a Golden Age that perfectly coincided with the chilliest century of the Little Ice Age. Somehow, a country with about as many people as Providence, Rhode Island emerged as a European great power, with a navy that went from victory to victory, an army that held the mighty Spanish Empire at bay, and a commercial fleet that dwarfed all others. Today, the art of Rembrandt and Vermeer – painted in the coldest years of the Little Ice Age – gives a distant echo of the energy and prosperity of those incredible times.
The Dutch Republic was something of an oddball in the seventeenth-century world. The overwhelming majority in most societies toiled in rural fields, growing crops for local markets. Many Dutch farmers, by contrast, cultivated cash crops for distant consumers. The republic therefore depended on a steady flow of grain imports from the rich and diverse farmland along the Baltic Sea. Over time, a growing share of Dutch citizens worked in commercial interests and industries with headquarters in or near port cities that would have been underwater, were it not for an extensive network of dikes and sluices. Urbanization rates were soon higher in the republic than they were just about anywhere else. Meanwhile, tens of thousands of sailors plied Dutch trades that reached deep into the Arctic, the Americas, Africa, and Asia.
Sailing depended on two things: favorable winds and open, ice-free water. By changing currents and cooling temperatures in the atmosphere and oceans, the chilliest stretches of the Little Ice Age therefore affected sailing as much as farming. Yet the impact was very different. New wind patterns actually sped up ships that left the republic for Asia or America, shortening their journeys.
In the waters off northern Europe, storms were unusually frequent and severe in the coldest stretches of the Little Ice Age. Many ships foundered, and many sailors drowned. Yet crews aboard the republic’s biggest merchant ships – ones that carried the richest cargo from distant markets – weathered storms much better than sailors aboard other European ships. In fact, storms often benefited Dutch sailors by further increasing the speed of these big ships.
Even sea ice aided the Dutch, including in the Arctic. It took plenty of sea ice – but not too much – to redirect Dutch voyages of northern exploration towards the rich bowhead whale feeding grounds off the archipelago of Svalbard, which lies between the northern coast of Norway and the North Pole. Whalers from all over Europe soon set up shop there. For a long time, the edge of the Arctic pack ice lingered near Dutch whaling stations, and since whales gathered along the edge of the ice, the Dutch benefited. By following the ice edge west, Dutch whalers even found whale breeding grounds off the little island of Jan Mayen.
The Dutch fought most of their wars on or around water. Climatic cooling may have benefited their armies and fleets even more than their merchants. The Dutch flooded their own farmland to thwart Spanish and later French invasions. Some of these floods would not have succeeded without torrential rains that reflected new atmospheric realities.
Later in the seventeenth century, cooling coincided with a shift in the strength of atmospheric high and low pressure zones over the Atlantic Ocean, which sharply increased the frequency of easterly winds over northern Europe. Sailors aboard Dutch warships heading into battle from the republic often had what was then called the “weather gage”: the upwind position relative to a downwind opponent. That allowed them to decide exactly how and when to deploy new “line of battle” tactics, in which warships would sail by each other in single file while firing broadsides. New wind patterns played a role in helping the Dutch win wars they might otherwise have lost.
Still, climate change did not always aid the Dutch. In the Arctic, sea ice crushed ships, drowned sailors, and screened whales from whalers. Sailors in small ships that carried grain and timber from the Baltic Sea endured violent storms and confronted thick sea ice that blocked their way. Cold snaps in the Baltic occasionally led to harvest failures that imperiled the republic’s precious grain imports. Ice repeatedly blocked the waterways of the republic, suffocating travel between cities and raising the specter of flooding when the ice thawed. Sometimes, ice froze rivers that otherwise served as barriers to invasion. Left unattended, candles and stoves in cold winter weather kindled fires that swept through the cities of the republic.
Time and again, the Dutch responded creatively. Shipwrights fortified the hulls of whaling ships and greased them until they slid off ice. Civilians and soldiers hacked through ice to preserve open water in their defensive rivers. Guilds and city governments bought icebreakers that not only kept waterways open, but actually manufactured ice blocks for use in cellars. When the ice was too thick, the Dutch used skates and sleds to turn frozen canals into busy thoroughfares. Merchants divided their goods between different ships, and invested in marine insurance. They stockpiled Baltic grain in good years, and sold it for healthy profits whenever food shortages plagued Europe. Charities maintained a steady supply of food for the urban poor. Inventors pioneered new firefighting tactics and equipment, and made good money selling them across Europe.
The Dutch, in short, were lucky to benefit from environmental changes that favored their unusual economy. But they also made their own luck. The society they built ended up being remarkably resilient in the face of new weather patterns that spelled disaster elsewhere in Europe. By relying so heavily on farmers scratching out a meagre existence on marginal farmland, other civilizations developed vulnerabilities to climate change that simply did not exist in the Dutch Republic.
In fact, the Dutch may even have adapted their technologies and policies to exploit the Little Ice Age, though they may not have recognized the trends in weather that we call climate change. Why were they so flexible in the face of changing environmental circumstances? In part, the answer may lie in their long history of draining and damming the Low Countries. The Dutch long understood that environments can change, and that societies can either adapt or succumb.
There was a darker side to the republic’s prosperity. The Dutch thrived in part by preying on communities and civilizations the world over. They shattered Iberian trading monopolies in Asia, seized expansive territories in the Americas, overwhelmed English whalers in the Arctic, and infamously broke into an African slave trade that cruelly exploited millions of people. The weather extremes of the Little Ice Age had often weakened communities that the Dutch victimized. In the republic, adaptation to climate change could take the form of a parasitic kind of opportunism that leveraged vulnerabilities in other societies.
What, then, can the history of the republic’s frigid Golden Age teach us today? First and perhaps most importantly, it shows us that even relatively small changes in Earth’s average temperature can have enormous social consequences. Across much of the seventeenth-century world, the gloomiest predictions for our warmer future came true. A third of humanity may have died in disasters either set in motion or worsened by climate change.
The world has already warmed more, relative to average temperatures in the twentieth century, than it cooled in the chilliest stretches of the Little Ice Age. Our best projections suggest that it will warm by roughly three degrees Celsius in the coming century, even if countries follow through on their Paris Agreement pledges. Histories of the Little Ice Age therefore give us an urgent call to arms. We have technologies that our ancestors could not have imagined. But there are far more of us, consuming unimaginably more plants and animals, metals and fuels. And we too depend on a huge network of fields and fisheries that may not survive drastic changes in temperature and precipitation.
That leads us to our second lesson: climate change has had, and probably will have, very unequal consequences for different societies, communities, and individuals. Many assume that rich societies cope best with climate change. Yet some of the wealthiest seventeenth-century empires actually fared worst in the coldest and most volatile years of the Little Ice Age. Climate change, it seems, imperils not only societies that have few resources to exploit, but also those that require abundant resources to prosper.
The Dutch thrived in the seventeenth century not because their republic was rich, but because much of its wealth derived from activities that climate change benefited. Today, we can learn from the republic by strengthening social safety nets, investing in technologies that exploit or reduce climate change, and more broadly by thinking proactively about how we will adapt to the warmer planet of our future. We can learn from the Dutch in another way too, by strengthening bonds between countries and communities, rather than preying on the most vulnerable.
Ultimately, the lessons of the past come to us in the form of parables: stories that hint at deeper truths but do not tell us exactly what to do. That does not make them any less valuable. We now know that we cannot ignore our changing climate, that it will shape our fortunes in the decades to come. Let us use the warnings of the past to confront the looming catastrophe in our future, while we still can.
This article summarizes some important ideas in my new book, The Frigid Golden Age: Climate Change, the Little Ice Age, and the Dutch Republic, 1560-1720. You can buy the hardcover on the Cambridge University Press website or on Amazon, and you'll soon be able to purchase the paperback.
The Washington Post published a modified and much shorter version of this article. You can find it here.
Dr. Tim Newfield, Princeton University, and Dr. Inga Labuhn, Lund University.
Carolingian mass grave, Entrains-sur-Nohain, INRAP.
Will climate change trigger widespread food shortages and result in huge excess mortality in our future? Many historians have argued that it has before. Anomalous weather, abrupt climate change, and extreme dearth are often intertwined in articles and books on early medieval demography, economy and environment. Few historians of early medieval Europe would now doubt that severe winters, droughts and other weather extremes led to harvest failures and, through those failures, food shortages and mortality events.
Most remaining doubters adhere to the idea that food shortages had causes internal to medieval societies. Instead of extreme weather or abrupt climate change, they blame accidents of (population) growth, deficient agrarian technology, unequal socioeconomic relations and weak institutions. Yet only rarely have they stolen the show or dominated the scholarship. For example, Amartya Sen’s “entitlement approach” to subsistence crises, which assigns primary importance to internal processes, has made few inroads in the literature on early medieval dearth, although in later periods it has many adherents.
Of course, the idea that big events have a single cause – monocausality, in other words – rarely convinces historians for long. Famine theorists and historians of other eras and world regions now argue that neither external forces such as weather, nor internal forces such as entitlements, alone capture the complexity of food shortages. They propose that these two explanatory mechanisms, often labeled “exogenous” and “endogenous,” respectively, should not be considered independent of one another or mutually exclusive. To them, periods of dearth can be explained by environmental anomalies, like unusual and severe plant-damaging weather, that coincide with socioeconomic vulnerability and declining (for most people) entitlement to food.
These explanations are more convincing. It seems that diverse factors acted in concert to cause, prolong and worsen food shortages. But proof for complex explanations for dearth in the distant past is hard to come by. Though they can be misleading, simpler, linear explanations are much easier to pull out of the extant evidence. This is true even when the sources are plentiful, as they are, at least by early medieval standards, for some regions and decades of Carolingian Europe. Food shortages in the Carolingian period, especially those that occurred during the reign of Charlemagne, have attracted the attention of scholars since the 1960s.
Left: Bronze equestrian statuette of Charlemagne or possibly his grandson Charles the Bald (823-877). Discovered in Saint-Étienne de Metz and now in the Louvre. The figure is ninth century in date. The horse might be earlier and Byzantine. Charles the Bald ruled the western portion of the post-Verdun empire, although whether he was actually bald is still debated.
Right: A Carolingian denarius (812-814) depicting Charlemagne. The Charlemagne of the Charlemagne reliquary mask (center) is handsomer. The coin, though, is contemporary, while the reliquary bust dates to the mid-fourteenth century. Housed in the Aachener Dom’s treasury, it contains a skullcap thought to be that of the emperor.
For the Carolingian period, ordinances from the royal court known as capitularies reveal hoarding and speculation, and document official attempts to control the prices and movements of grain, while annalists and hagiographers recount severe winters and droughts. All of this evidence sheds light on dearth. Yet the legislative acts point to internal pressures on food supply, while the narrative sources highlight external ones. As we have seen, neither pressure alone adequately explains subsistence crises.
Unfortunately, however, we rarely have evidence for endogenous and exogenous factors at the same time. Around the year 800, when Leo III crowned Charlemagne imperator, most evidence for dearth comes from the capitularies. Before and after, narrative evidence dominates. So Charlemagne’s food shortages appear to have had internal drivers, and Charles the Bald’s external ones. Or so the written sources lead us to believe.
Carolingian Europe as of August 843 following the Treaty of Verdun. Under rex and imperator Charlemagne (742-814), Carolingian territory stretched to include the area of Europe outlined here.
Fortunately, evidence from other disciplines allows historians to fill in some of the gaps. External pressures are easier to establish by turning to the palaeoclimatic sciences. Using them, we are beginning to rewrite the history of continental European dearth, weather and climate from 750 to 950 CE. We are working on a new study that combines a near-exhaustive assessment of Carolingian written evidence for subsistence crises and weather with scientific evidence for changes in average temperature, precipitation, and volcanic activity (which can influence climate).
We are trying to answer some big questions, such as: What role did droughts, hard winters and extended periods of heavy rainfall have in sparking, prolonging or worsening Carolingian food shortages? Were these external forces the classic triggers of dearth that many early medievalists think they were?
Indicators of past climate embedded in trees and ice can test and corroborate observations of anomalous temperature and precipitation. For instance, the droughts of 794 and 874 CE, documented respectively in the Annales Mosellani and Annales Bertiniani, show up in the tree ring-based Old World Drought Atlas (OWDA, see below). Additionally, as McCormick, Dutton and Mayewski demonstrated, multiple severe Carolingian winters also align fairly neatly with atmosphere-clouding Northern Hemisphere volcanism reconstructed using the GISP2 Greenlandic ice core.
The Old World Drought Atlas (OWDA) for 794 and 874. Negative values indicate dry conditions, positive values indicate wet conditions (from Cook et al. 2015).
By marrying written and natural archives, we are able to perfect our appreciation of the scale and extent of the weather extremes that coincide with Carolingian periods of dearth. Yet instead of simply providing answers, our integrated data are raising questions, and pushing us towards a messier history of early medieval food shortage. This is because the independent lines of evidence often do not agree. For example, only two of the 15 driest years between 750 and 950 CE in the OWDA coincide with drought in Carolingian sources.
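The comparison behind that "two of the 15 driest years" figure reduces to ranking one line of evidence and intersecting it with the other. A sketch follows; the index values are invented stand-ins (the real OWDA provides a soil-moisture scPDSI value per grid cell per summer), and only the year 794 drought, documented in the Annales Mosellani, comes from the text above:

```python
# Invented drought-index values by year (more negative = drier),
# standing in for a regional OWDA-style series.
owda = {790: -0.4, 791: 0.2, 792: -2.1, 793: 0.5, 794: -1.8,
        795: -0.1, 796: -2.5, 797: 0.8, 798: -0.3, 799: 1.1}

# Drought years reported in the written sources (794 is documented
# in the Annales Mosellani).
documented = {794}

# The n driest years in the reconstruction, ranked by index value.
n = 3
driest = set(sorted(owda, key=owda.get)[:n])

# Years where written and natural archives agree.
overlap = documented & driest
print(sorted(driest), sorted(overlap))
```

Even this toy version shows why the overlap can be small: the reconstruction's driest years are defined by ranking, so a genuine documented drought can sit just outside the top n, and vice versa.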
Admittedly, some of this dissonance may be artificial. The written record for weather and dearth is incomplete. To be sure, some places and times during the Carolingian era, broadly defined as it is here, are poorly documented. Spatial coverage matters too: reported drought years can appear relatively wet in the tree-based OWDA in some Carolingian regions (parts of northern Italy and Provence in 794 and 874, for instance).
Moreover, the detailed or “high-resolution” palaeoclimatology available now for early medieval Europe is much better for some regions than others. Tree-ring series extending back to 750 presently exist for few European regions. It is simply not possible to precisely pair some reported weather extremes or dearths to palaeoclimate reconstructions. Indeed, the two lines of evidence can be spatially mismatched. They can also be seasonally inconsistent, as trees tell us far less about temperature and precipitation in winter than they do in summer.
Matches between historical and scientific evidence are therefore generally limited to the growing seasons, in places where written sources and palaeoclimate data overlap. That is enough to yield some surprising results. When the written record is densest, there is natural evidence for severe weather and rapid climate change, but not for food shortages.
Take the dramatic drop in average temperatures registered in European trees at the opening of the ninth century. According to the 2013 PAGES 2K Network European temperature reconstruction, temperatures were cooler around the time of Charlemagne’s coronation than they had been at any time between the mid-sixth and early eleventh centuries. This cooling aligns well with a relatively small Northern Hemisphere volcanic eruption, detected in the recent ice-core record of volcanism compiled by Sigl and colleagues. The eruption would have ejected sunlight-scattering sulfur aerosols into the atmosphere. Notably, larger events in the Carolingian era, like those of 750, 817 and 822, clearly had less of an influence on European temperature. The cold of 800 is equally pronounced but less unusual in a tree-based temperature reconstruction from the Alps. In this series, the late 820s appear colder still.
Documentary sources register the falling temperatures. The Carolingian Annales regni francorum report severe growing-season frosts (aspera pruina) in 800. The Irish Annals of Ulster document a difficult and mortal winter in an entry quite possibly misdated in the Hennessy edition at 798 (799 or the 799/800 winter is more likely). Yet surprisingly, there is no contemporary record of food shortages in Europe.
Top: European Temperature Reconstruction, 0-2000 CE (data from Pages 2K Consortium, 2013).
Bottom: a composite figure. Middle, in red: PAGES 2K Consortium 2013 European temperatures; middle, in burgundy: Büntgen et al. 2011 Alpine temperature reconstruction; top: Sigl et al. 2015 ice-core record of Global Volcanic Forcing (GVF); bottom: written evidence for food shortages, both famines (F) and lesser shortages (LS). ‘W’ indicates no evidence for dearth but evidence for extreme weather. Between 750 and 950 we have identified 23 food shortages: 12 spatially and temporally circumscribed lesser shortages and 11 large multi-year famines.
Scholars tend to focus on instances when the written evidence for dearth and the natural evidence for anomalous weather align tidily. It seems that just as often, however, the two lines of evidence do not match so neatly. Severe weather may not always have triggered dearth in the early Middle Ages. Contemporary peoples could apparently cope with weather extremes in ways that allowed them to escape food shortages.
Early medieval vulnerability to external forces of dearth seems to have varied over space and time. We need to investigate the contrasting abilities of peoples from different early medieval regions and subperiods, participating in distinct agricultural economies with their own agrarian technologies, to withstand plant-damaging environmental extremes.
Several studies already suggest early medievals were capable of responding to gradual climate change. But to argue that they were not rigid or helpless when faced with marked seasonal temperature or precipitation anomalies, we must first identify, from sparse sources, potential moments of resilience. In this we run the risk of reading too much into absences of evidence. Yet the conclusion seems inescapable: when written sources are relatively abundant and there is no record of dearth during notable deviations in temperature and precipitation, early medievals must have adapted successfully.
Going forward, we must identify both moments and mechanisms of early medieval resilience in the face of climate change. Teasing these out from diverse sources might be tough going, but these elements are missing from the history of early medieval dearth and climate. Their omission has allowed for misleadingly neat histories of climate change and disaster in the period. Similar problems might well plague other histories that too clearly link climate changes to food shortages and mortality crises. Research that complicates these links could offer compelling new insights about our warmer future.
Authors' note: this is a short sampling of a much longer and more detailed multidisciplinary examination of Carolingian dearth, weather and climate, currently in preparation.
P. Bonnassie, “Consommation d’aliments immondes et cannibalisme de survie dans l’Occident du Haut Moyen Âge” Annales: Économies, Sociétés, Civilisations 44 (1989), pp. 1035-1056.
U. Büntgen et al, “2,500 Years of European Climate Variability and Human Susceptibility” Science 331 (2011), pp. 578-582.
U. Büntgen and W. Tegel, “European Tree-Ring Data and the Medieval Climate Anomaly” PAGES News 19 (2011), pp. 14-15.
F. Cheyette, “The Disappearance of the Ancient Landscape and the Climatic Anomaly of the Early Middle Ages: A Question to be Pursued” Early Medieval Europe 16 (2008), pp. 127-165.
E. Cook et al, “Old World Megadroughts and Pluvials during the Common Era” Science Advances 1 (2015), e1500561.
S. Devereux, Theories of Famine (Harvester Wheatsheaf, 1993).
R. Doehaerd, Le Haut Moyen Âge occidental: Economies et sociétés (Nouvelle Clio, 1971).
P.E. Dutton, “Charlemagne’s Mustache” and “Thunder and Hail over the Carolingian Countryside” in his Charlemagne’s Mustache and Other Cultural Clusters of a Dark Age (Palgrave, 2004), pp. 3-42, 169-188.
M. McCormick, P.E. Dutton and P. Mayewski, “Volcanoes and the Climate Forcing of Carolingian Europe, A.D. 750-950” Speculum 82 (2007), pp. 865-895.
T. Newfield, “The Contours, Frequency and Causation of Subsistence Crises in Carolingian Europe (750-950)” in P. Benito i Monclús ed., Crisis alimentarias en la edad media: Modelos, explicaciones y representaciones (Editorial Milenio, 2013), pp. 117-172.
PAGES 2k Network, “Continental-Scale Temperature Variability during the Past Two Millennia” Nature Geoscience 6 (2013), pp. 339-346.
K. Pearson, “Nutrition and the Early Medieval Diet” Speculum 72 (1997), pp. 1-32.
A. Sen, Poverty and Entitlements: An Essay on Entitlement and Deprivation (Oxford University Press, 1981).
M. Sigl et al., “Timing and Climate Forcing of Volcanic Eruptions for the Past 2,500 Years” Nature 523 (2015), pp. 543-549.
P. Slavin, “Climate and Famines: A Historical Reassessment” WIREs Climate Change 7 (2016), pp. 433-447.
A. Verhulst, “Karolingische Agrarpolitik: Das Capitulare de Villis und die Hungersnöte von 792/793 und 805/806” Zeitschrift für Agrargeschichte und Agrarsoziologie 13 (1965), pp. 175-189.
It's Maunder Minimum Month at HistoricalClimatology.com. This is our first of two feature articles on the Maunder Minimum. The second, by Gabriel Henderson of Aarhus University, will examine how astronomer John Eddy developed and defended the concept.
Although it may seem like the sun is one of the few constants in Earth’s climate system, it is not. Our star undergoes both an 11-year cycle of waning and waxing activity, and a much longer seesaw in which “grand solar minima” give way to “grand solar maxima.” During the minima, which set in approximately once per century, solar radiation declines, sunspots vanish, and solar flares are rare. During the maxima, by contrast, the sun crackles with energy, and sunspots riddle its surface.
The most famous grand solar minimum of all is undoubtedly the Maunder Minimum, which endured from approximately 1645 until 1720. It was named after Edward Maunder, a nineteenth-century astronomer who painstakingly reconstructed European sunspot observations. The Maunder Minimum has become synonymous with the Little Ice Age, a period of climatic cooling that, according to some definitions, endured from around 1300 to 1850, but reached its chilliest point in the seventeenth century.
During the Maunder Minimum, temperatures across the Northern Hemisphere declined, relative to twentieth-century averages, by about one degree Celsius. That may not sound like much – especially in a year that is, globally, still more than one degree Celsius hotter than those same averages – but consider: seventeenth-century cooling was sufficient to contribute to a global crisis that destabilized one society after another. As growing seasons shortened, food shortages spread, economies unraveled, and rebellions and revolutions were quick to follow. Cooling was not always the primary cause for contemporary disasters, but it often played an important role in exacerbating them.
Many people – scholars and journalists included – have therefore assumed that any fall in solar activity must lead to chillier temperatures. When solar modelling recently predicted that a grand solar minimum would set in soon, some took it as evidence of an impending reversal of global warming. I even received an email from a heating appliance company that encouraged me to hawk their products on this website, so our readers could prepare for the cooler climate to come! Of course, the warming influence of anthropogenic greenhouse gases will overwhelm any cooling brought about by declining solar activity.
In fact, scientists still dispute the extent to which grand solar minima or maxima actually triggered past climate changes. What seems certain is that especially warm and cool periods in the past overlapped with more than just variations in solar activity. Granted, many of the coldest decades of the Little Ice Age coincided with periods of reduced solar activity: the Spörer Minimum, from around 1450 to 1530; the Maunder Minimum, from 1645 to 1720; and the Dalton Minimum, from 1790 to 1820. However, one of the chilliest periods of all – the Grindelwald Fluctuation, from 1560 to 1630 – actually unfolded during a modest rise in solar activity. Volcanic eruptions, it seems, also played an important role in bringing about cooler decades, as did the natural internal variability of the climate system. Both the absence of eruptions and a grand solar maximum likely set the stage for the Medieval Warm Period, which is now more commonly called the Medieval Climate Anomaly.
This gets to the heart of what we actually mean when we use a term like “Maunder Minimum” to refer to a period in Earth’s climate history. Are we talking about a period of low solar activity? Or are we referring to an especially cold climatic regime? Or are we talking about chilly temperatures and the changes in atmospheric circulation that cooling set in motion? In other words: what do we really mean when we say that the Maunder Minimum endured from 1645 to 1720? How does our choice of dates affect our understanding of relationships between climate change and human history in this period?
To find an answer to these questions, we can start by considering the North Sea region. This area has yielded some of the best documentary sources for climate reconstructions. They allow environmental historians like me to dig into exactly the kinds of weather that grew more common with the onset of the Maunder Minimum. In Dutch documentary evidence, for example, we see a noticeable cooling trend in average seasonal temperatures that begins around 1645. On the surface, it seems that declining solar activity and climate change are very strongly correlated.
And yet, other weather patterns seem to change later, one or two decades after the onset of regional cooling. Year-to-year weather variability, for example, becomes much more pronounced after around 1660, and that erraticism is often associated with the Maunder Minimum. Severe storms became more frequent only in the 1650s or perhaps the 1660s, and such storms, too, are linked to the Maunder Minimum climate. In the autumn, winter, and spring, easterly winds – a consequence, perhaps, of a switch in the setting of the North Atlantic Oscillation – increased at the expense of westerly winds in the 1660s, not twenty years earlier.
A depiction of William III boarding his flagship prior to the Glorious Revolution of 1688. Persistent easterly, "Protestant" winds brought William's fleet quickly across the Channel, and thereby made possible the Dutch invasion of England. For more, read my forthcoming book, "The Frigid Golden Age." Source: Ludolf Bakhuizen, "Het oorlogsschip 'Brielle' op de Maas voor Rotterdam," 1688.
All of these weather conditions mattered profoundly for the inhabitants of England and the Dutch Republic: maritime societies that depended on waterborne transportation. Rising weather variability made it harder for farmers to adapt to changing climates, but often made it more profitable for Dutch merchants to trade grain. More frequent storms sank all manner of vessels but sometimes quickened journeys, too. Easterly winds gave advantages to Dutch fleets sailing into battle from the Dutch coast, but westerly winds benefitted English armadas. If we define the Maunder Minimum as a climatic regime, not (just) a period of reduced sunspots, and if we care about its human consequences, what should we conclude? Did the Maunder Minimum reach the North Sea region in 1645, or 1660?
These problems grow deeper when we turn to the rest of the world. Across much of North America, temperature fluctuations in the seventeenth century did not closely mirror those in Europe. There was considerable diversity from one North American region to another. Tree ring data suggests that northern Canada experienced the cooling of the Maunder Minimum. Western North America also seems to have been relatively chilly in the seventeenth century, although chillier temperatures there probably did not set in during the 1640s.
By contrast, cooling was moderate or even non-existent across the northeastern United States. Chesapeake Bay, for instance, was warm for most of the seventeenth century, and only cooled in the eighteenth century. Glaciers advanced in the Canadian Rockies not in the seventeenth century, but rather during the early eighteenth century. Their expansion was likely caused by an increase in regional precipitation, not a decrease in average temperatures.
Still, the seventeenth century was overall chillier in North America than the preceding or subsequent centuries, and landmark cold seasons affected both shores of the Atlantic. The consequences of such frigid weather could be devastating. The first settlers at Jamestown, Virginia, had the misfortune of arriving during some of the chilliest and driest weather of the Little Ice Age in that region. Crop failures contributed to the dreadful mortality rates endured by the colonists, and to the brief abandonment of their settlement in 1610.
Moreover, many parts of North America do seem to have warmed in the wake of the Maunder Minimum, in the eighteenth century. This too could have profound consequences. In the seventeenth century, settlers to New France had been surprised to discover that their new colony was far colder than Europe at similar latitudes. They concluded that its heavy forest cover was to blame, and with good reason: forests do create cooler, cloudier microclimates. Just as the deforestation of New France started transforming, on a huge scale, the landscape of present-day Quebec, the Maunder Minimum ended. Settlers in New France concluded that they had civilized the climate of their colony, and they used this as part of their attempts to justify their dispossession of indigenous communities.
Despite eighteenth-century warming in parts of North America, the dates we assign to the Maunder Minimum do look increasingly problematic when we look beyond Europe. If we turn to China, we encounter a similar story. Much of China was actually bitterly cold in the 1630s and early 1640s, before the onset of the Maunder Minimum elsewhere. This, too, had important consequences for Chinese history. Cold weather and precipitation extremes ruined crops on a vast scale, contributing to crushing famines that caused particular distress in overpopulated regions. The ruling Ming Dynasty seemed to have lost the “mandate of heaven,” the divine sanction that, according to Confucian doctrine, kept the weather in check. Deeply corrupt, riven by factional politics, undermined by an obsolete examination system for aspiring bureaucrats, and scornful of martial culture, the regime could adequately address neither widespread starvation, nor the banditry it encouraged.
Climatic cooling caused even more severe deprivations in neighboring, militaristic Manchuria. There, the solution was clear: to invade China and plunder its wealth. The first Manchurian raid broke through the Great Wall in 1629, a warm year in other parts of the Northern Hemisphere. Ultimately, the Manchus capitalized on the struggle between Ming and bandit armies by seizing China and founding the Qing (or "Pure") Dynasty in 1644.
China under the Ming Dynasty was arguably the most powerful empire of its time. Even as it unravelled in the early seventeenth century, its cultural achievements were impressive, as this painting of fog makes clear. Source: Anonymous, "Peach Festival of the Queen Mother of the West," early 17th century.
This entire history of cooling and crisis predates the accepted starting date of the Maunder Minimum. Yet, the fall of the Ming Dynasty unfolded in one relatively small part of present-day China. Average temperatures in that region reached their lowest point in the 1640s. By contrast, average temperatures in the Northeast warmed by the middle of the seventeenth century. Average temperatures in the Northwest also warmed slightly during the mid-seventeenth century, and then cooled during the late Maunder Minimum.
Smoothed graphs that show fluctuations in average temperature across centuries or millennia give the impression that dating decade-scale warm or cold climatic regimes is an easy matter. Actually, attempts to precisely date the beginning and end of just about any recent climatic regime are sure to set off controversy. This is not only because global climate changes have different manifestations from region to region, but also because climate changes, as we have seen, involve much more than shifts in average annual temperature. Did the Maunder Minimum reach northern Europe, for instance, when average annual temperatures declined, when storminess increased, when annual precipitation rose or fell, or when weather became less predictable?
Historians such as Wolfgang Behringer have argued that, when dating climatic regimes, we should also consider the “subjective factor” of human reactions to weather. For historians, it makes little sense to date historical periods according to wholly natural developments that had little impact on human beings. Maybe historians of the Maunder Minimum should consider not when temperatures started declining, but rather when that decline was, for the first time, deep enough to trigger weather that profoundly altered human lives. When we consider climate changes in this way, we may be more inclined to subjectively date climatic regimes using extreme events, such as especially cold years, or particularly catastrophic storms. Dating climate changes with an eye to human consequences does take historians away from the statistical methods and conclusions pioneered by scientists, but it also draws them closer to the subjects of historical research.
In my work, I do my best to combine all of these definitions, and incorporate many of these complexities. I date climatic regimes by considering their cause – solar, volcanic, or perhaps human – and by working with statisticians who can tell me when a trend becomes significant. However, I also try to consider the many different kinds of weather associated with a climatic shift, and the consequences that extremes in such weather could have for human beings.
As you might expect, this is not always easy. I have long held that the Maunder Minimum, in the North Sea region, began around 1660. Increasingly, I find it easier to begin with the broadly accepted date of 1645, but distinguish between different phases of the Maunder Minimum. An earlier phase marked by cooling might have started in 1645, but a later phase marked by much more than cooling took hold around 1660.
These are messy issues that yield messy answers. Yet we must think deeply about these problems. Not only can such thinking affect how we make sense of the deep past, but it can also provide new perspectives on modern climate change. When did our current climate of anthropogenic warming really start? At what point did it start influencing human history, and where? What can that tell us about our future? These questions can yield insights on everything from the contribution of climate change to present-day conflicts, to the timing of our transition to a thoroughly unprecedented global climate, to the urgency of mitigating greenhouse gas emissions.
Behringer, Wolfgang. A Cultural History of Climate. Cambridge: Polity Press, 2010.
Brooke, John. Climate Change and the Course of Global History: A Rough Journey. Cambridge: Cambridge University Press, 2014.
Coates, Colin and Dagomar Degroot, “‘Les bois engendrent les frimas et les gelées:’ comprendre le climat en Nouvelle-France." Revue d'histoire de l'Amérique française 68:3-4 (2015): 197-219.
Degroot, Dagomar. “‘Never such weather known in these seas:’ Climatic Fluctuations and the Anglo-Dutch Wars of the Seventeenth Century, 1652–1674.” Environment and History 20.2 (May 2014): 239-273.
Eddy, John A. “The Maunder Minimum.” Science 192:4245 (1976): 1189-1202.
Parker, Geoffrey. Global Crisis: War, Climate Change and Catastrophe in the Seventeenth Century. London: Yale University Press, 2013.
White, Sam. “Unpuzzling American Climate: New World Experience and the Foundations of a New Science.” Isis 106:3 (2015): 544-566.
Last year might have been the hottest year ever recorded by our instruments. Average global temperatures were at least 0.27° C warmer than the average between 1981 and 2010, which was itself well above the preindustrial norm. Overall, the past 17 years have been very warm, and since 2002 temperatures have been consistently well above the 1981-2010 average. However, that consistency is not clearly reflected in Arctic sea ice trends. In fact, the winter extent of Arctic sea ice has expanded in the last two years, seemingly defying projections of its imminent collapse.
Arctic sea ice is extremely complex and comes in many forms that respond more or less aggressively to seasonal changes and temperature anomalies. Currents, wind patterns, and even subtle differences in Earth’s gravitation also influence sea ice extent, although temperature usually plays a dominant role. As a result, Arctic sea ice coverage rises in winter and falls in summer. Its minimum and maximum yearly extent reflect shifts in average annual temperature, and in turn climate change.
In the winter of 2010/11, Arctic sea ice reached its lowest recorded extent (above). Satellite data reveals that, in December 2010, Arctic sea ice covered just 12 million square kilometers on average. While that may sound like a lot, it is some 1.35 million square kilometers below the 1979-2000 average, and 270,000 square kilometers below the previous record low (set in 2006).
The sharp decline in Arctic sea ice coincided with very high global temperatures. In fact, scientists are still determining whether 2014 was actually warmer than 2010. In the wake of the winter of 2010/11, it seemed as though even the direst projections of Arctic sea ice decline had been too optimistic. Perhaps a threshold had been crossed, a tipping point had been reached, and Arctic sea ice would soon vanish.
However, since the winter of 2010/11, Arctic sea ice extent has haltingly recovered. Satellite maps demonstrate that Arctic sea ice currently covers 12.52 million square kilometers, about 520,000 square kilometers more than the 2010/11 maximum (above). The greatest change relative to 2010/11 is in the Canadian Arctic and Subarctic, where the Hudson and Baffin Bays are now completely covered with ice.
If Arctic warming has persisted since 2010, why has Arctic sea ice recovered? One possible explanation lies in the recent history of the Arctic Oscillation (AO), a band of winds that circle the Arctic in a counterclockwise direction. When the AO is in a positive phase, its winds move quickly, tightly sealing frigid air in the Arctic. When it is in a negative phase, its winds move more slowly and the band is distorted, allowing Arctic air to descend towards lower latitudes. There appears to be a correlation between a negative AO and reductions in Arctic sea ice extent. The AO, which was in a strongly negative phase in 2010, is now apparently in a weakly positive setting.
Recent research also suggests that Arctic sea ice has a very low “memory” of previous trends. If, for example, Arctic sea ice extent is very low in September, winter heat loss is high, encouraging the formation of more sea ice. Such processes explain high year-to-year fluctuations in sea ice, yet they do not preclude long-term trends.
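The "low memory" idea can be sketched as a toy feedback model. This is my own illustration with made-up parameter values, not a model from the research discussed: each year's anomaly is partly reversed by the heat-loss feedback, producing large year-to-year swings, while a slow long-term decline persists underneath.

```python
import random

random.seed(1)

# Toy model (illustrative parameters only): a low-ice September triggers
# extra winter heat loss, which partly reverses the anomaly the following
# year -- so sea ice has little "memory" of individual shocks, even while
# a gradual long-term decline continues in the background.
MEMORY = -0.4        # negative feedback: shocks are partly reversed next year
TREND = -0.05        # long-term decline, million km^2 per year
NOISE_SD = 0.3       # weather-driven year-to-year variability, million km^2

anomaly = 0.0
extents = []         # September extent relative to an arbitrary baseline
for year in range(50):
    anomaly = MEMORY * anomaly + random.gauss(0.0, NOISE_SD)
    extents.append(TREND * year + anomaly)

# Single years bounce around, but decadal averages still reveal the trend.
decade_first = sum(extents[:10]) / 10
decade_last = sum(extents[-10:]) / 10
```

In a sketch like this, a two-year "recovery" is just noise around the trend line, which is exactly the point the paragraph above makes.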
The apparent recovery of Arctic sea ice therefore does not counter long-term developments in either regional sea ice decline or global warming. Sea ice extent in December was still 540,000 square kilometers below the 1981-2010 average, and sea ice coverage in the Arctic is still declining by 3.4% per decade. Most model simulations still project an accelerating decline in Arctic sea ice extent, even in optimistic scenarios in which our civilizations sharply reduce their greenhouse gas emissions (above).
Model simulations, scientific proxy data, and documentary evidence assessed by interdisciplinary scholars can contextualize sea ice in the modern Arctic in light of the distant past. My own recent research suggests that sea ice extent in the Arctic north of Europe during December 2014 is not dissimilar to what was encountered by European polar explorers during summer expeditions at the height of the Little Ice Age. This reflects climate change on a remarkable scale, given the vast annual difference between summer and winter sea ice coverage in the Far North.
For example, I traced sea ice recorded by Henry Hudson and his crew during their first Arctic expedition. In the above map, the outbound journey is depicted with a solid black line, while the return journey is portrayed with a dashed blue line. The parts of the voyage in which ice was sighted are in white: a solid white line for the outbound journey, and a dashed white line for the return. Compare the summer sea ice sighted on Hudson's journey with the edge of winter sea ice today (the second map provided in this article).
Ultimately, Arctic sea ice fluctuates from year to year in ways that can temporarily mask gradual climate change. The world is warming, and the Arctic is warming faster than anywhere else. It is important to keep an eye on the recent recovery in Arctic sea ice, but all indications are that it is just a momentary reprieve in a very worrisome trend.
In Europe, the “Bronze Age” lasted nearly 2,000 years, from approximately 3200 BCE to roughly 600 BCE. In this period, bronze tools were forged for the first time, revolutionizing how Europeans manipulated their world and competed for resources. The first trading networks connected the continent, as navigational knowledge reached heights that Europeans would not exceed until the fifteenth century.
Centralized “palace economies” flourished throughout Europe and the Middle East, in ancient civilizations we remember today: on Minoan Crete, in Mycenaean Greece, in the Mesopotamian conquests of the Hittites and Akkadians, and of course in Egypt. Then, in the centuries around 1000 BCE, populations collapsed across Europe and the Middle East, sometimes in remarkably sudden events that must have been even more traumatic than the fall of the Roman Empire. In many regions, small, scattered villages were all that remained of the great Bronze Age civilizations. In Europe, it would be centuries before societies of similar complexity would rise again.
Those who study past climates are drawn to disaster, and not without reason. If we can establish that social crises coincided with periods of abrupt climate change, we can be pretty sure that further investigation will turn up connections between climate and human history. Historians, archaeologists, anthropologists, and scientists often find that connections between climate and human activity are particularly clear, and especially well-documented, in times of crisis. It is no surprise, then, that scholars have sought to link the Bronze Age collapse to climate change.
For example, while surveying 250,000 years of climate history, historian John Brooke of Ohio State University argues in an ambitious new book that the onset of a “cold, dry climate has to be a fundamental explanation of the demise of the Bronze Age of the greater Mediterranean.” (Brooke, 2014) Harvests failed in a changing climate, and subsequent food shortages undermined palace economies while provoking mass migration. Civilizations clashed, populations mingled and therefore spread disease, and piracy spread across the Mediterranean. Other scholars have tied roughly synchronous collapse in Northwestern Europe to changing climatic conditions. (Raftery, 1994; Tipping et al., 2008)
It is a compelling story, especially because it appears to offer a vivid warning for us today. However, like many straightforward narratives that tie climate change to historical collapse, that story is being revised by cutting-edge, interdisciplinary scholarship. In a paper recently published in Proceedings of the National Academy of Sciences, a team of scientists under lead author Ian Armit of the University of Bradford set out to reconstruct the late Bronze Age climate with unprecedented precision. Archaeological activity has surged across Ireland, offering abundant new sources for radiocarbon dating. Altogether, the researchers analyzed 2,023 radiocarbon dates in data from peat bogs and archaeological sites to build their new climate record.
They found that, in Northwestern Europe, populations began to decline more than a century before the late Bronze Age climate started to cool. Collapse in this part of Europe therefore cannot be tied to climate change. In fact, the authors argue that, all along, social and economic shifts were more than sufficient to explain the fall of regional Bronze Age civilizations. Trading networks and, in turn, stratified civilizations based around bronze production could not survive the advent of the Iron Age, when metals stronger than bronze were suddenly widely accessible.
Not surprisingly, this thesis is not quite as straightforward as the scientists suggest, because in many places people only gradually transitioned from bronze to iron. Nor does the climatic history of Northwestern Europe necessarily translate to southern Europe and the Middle East. Moreover, historians like Brooke have long acknowledged that climate change is but one possible explanation among many for the late Bronze Age collapse.
Ian Armit and his coauthors conclude that, in an age of global warming, “it is easy to view climate as the primary driver of past cultural change,” but “such assumptions need to be critically assessed using high-precision chronologies” that “guard against misleading correlations.” Sometimes historical work could use a little more methodological rigour, and certainly scientists, archaeologists, and historians should be prepared to work together in uncovering the climate history of the distant past.
However, at other times excellent historical work is grounded in cutting-edge scientific data that is revised by later studies. That can undermine some compelling narratives, but it does not mean those narratives were never worth telling. Scholarship is a conversation, and that conversation gains depth through daring, provocative stories.
Armit, Ian et al., “Rapid climate change did not cause population collapse at the end of the European Bronze Age.” PNAS 111:48 (2014): 17045–17049.
Brooke, John L. Climate Change and the Course of Global History: A Rough Journey. Cambridge: Cambridge University Press, 2014.
Raftery, Barry. Pagan Celtic Ireland. The Enigma of the Irish Iron Age. London: Thames and Hudson, 1994.
Tipping, Richard et al., “Response to late Bronze Age climate change of farming communities in north-east Scotland.” Journal of Archaeological Science 35 (2008): 2379–2386.
Last month, world leaders met at UN Headquarters in New York City for Climate Summit 2014. As protests raged across the globe, diplomats established the framework for a major climate change agreement next year. The aim will be to limit anthropogenic warming to no more than 2 °C, a threshold established by scientists and policymakers, beyond which climate change is increasingly dangerous and unpredictable.
Just days after the 2014 summit, policy expert David Victor and influential astrophysicist Charles Kennel published an article in Nature that called on governments to “ditch the 2 °C warming goal.” Kennel and Victor argue that the rise in average global temperatures has stalled since 1998, as warming is increasingly absorbed by the world’s oceans. Variations in global temperature therefore do not directly reflect climate change, and governments should adopt other benchmarks for action. Atmospheric concentrations of carbon dioxide, they contend, more accurately reveal the relentless advance of climate change. In any case, limiting the rise in global temperatures to just 2 °C would impose unrealistic costs on national economies.
Not surprisingly, responses to Victor and Kennel have been swift and comprehensive. For example, physicist and oceanographer Stefan Rahmstorf argues that short-term temperature variability does not undermine the case for a 2 °C limit, especially when there is little evidence for a “pause” in global warming. He explains how scientists and policymakers selected the limit, and cites studies synthesized by the IPCC, which conclude that holding the rise in planetary temperatures to 2 °C would cost no more than 0.06% of the world’s annual GDP. Kevin Anderson, Deputy Director of the Tyndall Centre for Climate Change Research, claims that Victor and Kennel have confused the roles that scientists should play in international climate change negotiations. Like Rahmstorf, he maintains that the 2 °C limit is neither misplaced nor unachievable. As a climate change advisor to the British government, he explains that, “the UK, almost overnight, conjured up over £350b to bail out the banks and stimulate the economy – but it has earmarked just £3.8b for its Green Investment Bank!” Physicist Joe Romm argues that a new study, which finds that scientists may have underestimated the extent of global warming, only strengthens the case for a 2 °C limit. To their credit, Victor and Kennel wrote a lengthy response in the New York Times to these and other critiques.
Missing from this debate are perspectives from those who study the past: the ways in which natural climate change has actually influenced human history. This is unfortunate, because historical relationships between climate and society can yield important insights on the usefulness of a 2 °C limit.
Take, for example, the sixteenth and seventeenth centuries, when Europeans entered the Arctic and Subarctic as never before. Journeys of exploration gradually transformed scholarly understandings of the Far North and shaped popular attitudes towards nature and empire. They paved the way for new settlements and laid the groundwork for the exploitation of marine resources that would alter European diets, stimulate the continent’s northern economies, and transform Arctic environments. All this during an early modern “Little Ice Age” that cooled temperatures across the Arctic and Subarctic by at least 0.5 °C, relative to the twentieth-century norm.
This apparent paradox is a focus of my recent research. I have learned that it can be tempting to assume that global cooling or warming will have straightforward impacts at the regional or local level, but such assumptions are often wrong. It often feels as though climate history is the study of bewildering, sometimes infuriating complexity. I frequently find myself using eclectic sources to trace, for example, how changes in solar radiation altered global temperatures, regional cyclonic activity, a series of storms above a town, damage sustained in that town, and how people understood what was going on. This is a part of what makes climate historians so useful to the broader historical discipline: we are always coming up with new ways of understanding how the local reflects the global, of discerning how – and why – things change over time.
Lately, I have used cutting-edge scientific data to reinterpret journals written by Arctic and Subarctic explorers in the sixteenth and seventeenth centuries. I discovered that some expeditions to the Far North benefitted from unusually warm ocean currents and hot summers that actually reflected counter-intuitive links between local environments and the globally cool Little Ice Age.
I have also started to investigate the seventeenth-century rise of the Dutch and English whaling industry around the island of Spitsbergen in the seas north of Norway. It might seem obvious that Arctic whaling expeditions would suffer in colder decades, and indeed the pack ice between Spitsbergen and Greenland would expand as regional temperatures cooled. However, at the same time bowhead whales prized by hunters would congregate near the pack ice, which made them much easier to hunt. The whaling industry therefore enjoyed its best years during the coldest phases of the Little Ice Age.
In other words, my research has revealed that when our focus is strictly on warming or cooling trends, we can lose sight of how climatic shifts actually affect people.
Still, interdisciplinary scholars of past climates trace climate change by reconstructing variations in average temperature. We classify our climatic past according to these swings in average temperature, and how they influenced the advance and retreat of glaciers. Hence our (little) ice ages and warm periods, our minima, maxima, and anomalies. Archeologists, historians, and scientists of many stripes then investigate how humans and animals responded to particularly warm or cold periods. Of course, many continue to dig deeper, considering diverse weather patterns and reaching sometimes-surprising conclusions. Nevertheless, our initial focus on average temperatures usually shapes the kinds of questions we can ask.
Does that mean we miss the mark? Should we stop assuming that climate change and average temperature change are one and the same?
Perhaps not. In reconstructing past climates, scholars often find that while changes in average temperature do not tell the whole story, they can and should tell us where to start looking. Average temperatures are closely linked to changes in the solar energy Earth receives and absorbs, which ultimately drives the environmental changes that constitute climate change. Shifts in regional precipitation, wind dynamics, or ice cover therefore usually respond to shifts in average regional temperature, which in turn are closely correlated with fluctuations in average global temperature.
In that light, the 2 °C limit makes a lot of sense. A focus on average temperature might miss some of the complexity of climate change and its possible ramifications for our future, but changes in temperature are closely linked to the kinds of environmental conditions that Victor and Kennel would rather track separately.
Moreover, nuance is less important in climate change mitigation than it is for climate change adaptation. Greenhouse gas emissions need to decrease because temperatures can only increase so much before they imperil our civilization. The mechanisms and technologies for limiting emissions exist today; now is the time to implement them, rather than adjust our acceptable thresholds.
After all, the human history of past climate change also provides a warning. During the Little Ice Age, a moderate decline in average temperatures profoundly and often disastrously affected societies around the world. What will unprecedented warming do to us?
Note: originally posted on The Otter, blog of the Network in Canadian History and Environment.
Like the research that inspired it, this article is a cultural consequence of climate change.
Seven years ago, I was on a bus, reading a book about ancient climates. I looked out the window at a sunset so brilliant, it seemed to ignite Toronto's skyscrapers. I thought of global warming, and wondered: had anyone searched for connections between human history and climate change? Over the next seven years I found out that they had, but that there was still plenty of room for a new perspective.
The book I was reading was the product of an academic culture increasingly affected by the growing manifestations of global warming. Its importance to me was shaped by my place within a tangle of different cultures, all of which included discourses about climate change. The sunset that helped me imagine new connections between the ideas these cultures provoked led to a dissertation exploring the climate history of the Dutch Golden Age. Now complete, the dissertation reveals, in part, that culture is inextricable from the material influence of climate change.
Simply put: we cannot comprehend the human consequences of climate change, past, present, and future, without understanding culture.
That insight was central to the papers of my panel at the World Congress of Environmental History, which recently concluded in beautiful Guimarães, Portugal. Panelists discussed how climate change can upend delicate relationships between humans and local environments, in ways that ultimately influence culture.
Thanks to funding generously provided by the Network in Canadian History and Environment, I was there to chart the ways in which a cooler early modern climate, known as the “Little Ice Age,” influenced the famous Dutch culture of the seventeenth century. My paper drew both from the last chapter of my dissertation, and from my more recent articles on Arctic environmental history.
Strangely, projected relationships between climate and culture rarely feature in the reports regularly published by the Intergovernmental Panel on Climate Change or the World Meteorological Organization. Moreover, scholars of past climates who consider cultural consequences have too often assumed that a worsening climate inspired melancholic cultural responses. Those narratives are easily dismissed by cultural historians who can readily find alternative explanations for changing artistic tastes, or shifting patterns of gendered persecution, during even the coldest decades of the Little Ice Age.
Because the Dutch Republic flourished during the nadir of the Little Ice Age, examining its richly documented culture provides a rare opportunity to refine narratives that connect climate change to culture. My paper argued that literate Dutch observers, writing within a maritime culture that produced detailed records of daily weather, discerned changes in prevailing patterns of extreme weather in the seventeenth century. This partial understanding of climate change might have informed cultural responses, but I believe that we must be careful in making these connections.
Take the famous Dutch “winter landscapes” of the sixteenth and seventeenth centuries. The rise of these paintings during the onset of a particularly cold phase of the Little Ice Age appears, at first, like an especially direct cultural consequence of climate change. Certainly many scholars of past climates have argued as much. However, on closer inspection, connections between climate and culture are not so straightforward. The painters of winter landscapes often painted indoors, during years that were not especially cold, and their paintings were often heavy with mythology and allegory. Clearly they did not directly reflect contemporary weather or climate. Moreover, winter landscapes were often part of a series that included depictions of other seasons. Finally, they were attuned to a market that had dried up by the late seventeenth century, another period of extreme cold that was nevertheless not accompanied by paintings of winter landscapes.
The Dutch example therefore reveals that, in order to link climate change to cultural responses, we must carefully establish relationships between climate, weather, individuals, markets, and more. Scholars who examine the human consequences of climate change must range across so many disciplines that making assumptions can be very tempting. However, too easily connecting climate to culture can undermine other conclusions founded on more sturdy ground.
Ultimately, there were some elements of Dutch Golden Age culture that probably reflected the influence of climate change. Among them: poems and illustrations that responded to distinct weather events rendered more frequent during the coldest (or warmest) decades of the Little Ice Age. Technologies like new heating devices and “ice wagons” that travelled speedily across the ice likely also reflected the cultural influence of a cooler climate. So too did the uniquely egalitarian cultures that emerged from winter carnivals, which were held on ice that was more extensive and lasted longer in the chilliest phases of the Little Ice Age.
Overall, concrete cultural responses to climate change in the Dutch Republic were consequences of, or contributed to, the broader societal resilience of the Dutch to the Little Ice Age. At the WCEH, my paper and panel demonstrated once again that relationships between climate change and humanity are inexplicable without a rigorous analysis of culture.
According to the most recent summary for policymakers published by the Intergovernmental Panel on Climate Change (IPCC), “climate change can indirectly increase risks of violent conflicts” by exacerbating the socially destabilizing influence of poverty and economic shocks. While the IPCC attaches “medium confidence” to this claim, it is hardly controversial. Similar conclusions appeared in the IPCC’s 2007 assessment reports. Since then, several studies have established that warfare is correlated with climatic stress, although their methods often ignore social and cultural contexts. Many of the world’s most advanced militaries are now at the forefront of state adaptation to global warming. The American military, for example, is not only curbing its greenhouse gas emissions, but is also actively preparing for conflict stimulated by future climate change.
But how is the conduct of war – not just its origins – actually influenced by climate change? In the latest issue of the journal Environment and History, I published an article that explores this question. In the seventeenth century, three wars between England and the Dutch Republic – then the leading maritime powers of their day – were fought during the onset of an especially chilly stretch of the Little Ice Age in Europe. In my article, I argue that the weather that accompanied the coming of this “Maunder Minimum” affected military operations during the wars in complex and often counter-intuitive ways.
The First Anglo-Dutch War, contested between 1652 and 1654, actually preceded the cooling of the Maunder Minimum. I used ship logbooks, correspondence, intelligence reports, and diary entries written during the war to demonstrate that frequent westerly winds accompanied the relatively warm temperatures of the early 1650s. These winds usually allowed English fleets sailing from the west to claim the “weather gage”: the windward position relative to the enemy that, in naval combat, granted the initiative in attack and, occasionally, retreat.
The English navy had developed revolutionary tactics in which ships of great size would bombard enemy hulls while sailing past them in line formation. By contrast, Dutch tactics still mandated grappling, boarding, and firing at enemy rigging (ironic, since a Dutch admiral had debuted “line of battle” tactics some 15 years earlier). English tactics required favourable winds, and English fleets got them in the First Anglo-Dutch War. The Dutch Republic was rich enough to survive several naval reversals, and its shipyards productive enough to stave off defeat. However, on balance the First Anglo-Dutch War was far more costly for the Dutch than it was for the English.
Human and environmental structures had shifted by the onset of the Second Anglo-Dutch War in 1664. Seasonal temperatures were more variable but generally cooler, storms had become more frequent and more severe, and easterly winds had grown more common. Meanwhile, the Dutch had adopted many of the most effective elements of the English naval system. Dutch fleets sailing to battle from the east now did so with the weather gage, and they were often victorious. Moreover, because English vessels had three tiers of guns while Dutch ships had only two, many English guns sat near the waterline and had to be retracted in the high winds that were more common in a cooler climate. Easterly winds also allowed the Dutch fleet to raid up the Medway River in 1667, forcing the English crown into a peace that clearly benefitted the Dutch.
In the Third Anglo-Dutch War, the climate of the Maunder Minimum manifested in weather that was defined less by easterly winds than by incessant storminess. This time, the Dutch Republic was invaded by French and German armies while besieged at sea by a united Anglo-French fleet. However, in the summer of 1672 relentless gales kept the allied fleet from supporting a naval invasion, just as the Dutch fleet was partially disbanded so its soldiers and artillery could defend against invasion on land. Thereafter, Dutch admiral Michiel de Ruyter conducted a remarkably successful guerrilla campaign, aided by frequent easterly winds. The Dutch Republic survived its greatest crisis of the seventeenth century, and England signed another concessionary peace in 1674.
So, what does this seventeenth-century story tell us about war and climate change today?
First, it demonstrates again that climate change is mediated by human decisions, institutions, and cultures. The Dutch Admiralties might not have prevailed in the second and third wars had they not learned from the success of the English, who might have won the third war were it not for the leadership of De Ruyter. Second, the article reveals that military operations are influenced by short-term weather, which is often but certainly not inevitably affected by long-term climate change. The distinction is important, because the weather that most influences a battle can actually be an exception to the climatic trend.
Ultimately, far more studies are required that explore not only how climate change contributes to the cause of war, but also how it shapes wars once they begin.
Note: this article is a greatly condensed version of dissertation chapters that also examine how climate influenced weather that affected shipbuilding, marine intelligence networks, privateering, and warfare on land during the Anglo-Dutch Wars.
Dagomar Degroot, “‘Never such weather known in these seas:’ Climatic Fluctuations and the Anglo-Dutch Wars of the Seventeenth Century, 1652–1674.” Environment and History 20:2 (May 2014): 239-273.
United States Department of Defense, Quadrennial Defense Review Report. February 2010.