Dr. Kent Linthicum, Arizona State University
The recent bicentenary of the Year without a Summer (1816) has brought that unusual intersection of geological forces, changing climate, and human history into focus again. The radical cooling brought on by Tambora’s eruption seems especially significant as modern societies face their own dramatic climate change, albeit in the form of radical warming brought on by industrialization.
Tambora’s eruption in 1815 is the most recent eruption to rate a seven on the Volcanic Explosivity Index (VEI), which rates eruptions from zero to eight. VEI sevens eject roughly one hundred cubic kilometers of material and occur infrequently: the previous seven, before Tambora, erupted in 1257. The vast quantity of material ejected by Tambora cooled Europe by 1-2 degrees Celsius on average, making the subsequent summer of 1816 so cold that it was hardly a summer at all. In an era of increasingly warm summers, a cooler one might sound ideal, but the chilly weather led to food shortages and starvation throughout the northern hemisphere.
Between April and May 1816, "Bread or Blood" riots erupted across East Anglia as the price of bread surpassed the wages of agricultural and industrial laborers. While food riots had a long history in Britain, industrialization, enclosure, and globalization increasingly safeguarded the nation's food supply by the early nineteenth century.
The Bread or Blood riots reveal that climatic shocks could still provoke famine and rioting in the nineteenth century, even in the country that should have been least vulnerable to them. They also show that contemporary media depicted the rioters with disdain, in ways that probably worsened official responses to them.
At the close of 1815, the United Kingdom had ended its wars with France, yet it embarked on a long struggle with disastrous weather. After an “extremely changeable” January, February was “unseasonably warm and moist,” lifting hopes that the season's crops might recover. Yet The Observer reported that both industrial and agricultural laborers were in “extreme distress” already.
By early May, a “Monthly Agricultural Report” in The Observer explained that conditions had not improved because “sun and warm weather are the great wants.” Prices were on the rise because of increased demand, speculation, and poor harvests throughout Europe. East Anglia experienced a roughly 33% increase in the price of wheat between March and May. Laborers could no longer afford food and grew desperate; they needed to eat but had no money, so protest became their only option.
In the frigid spring of 1816, riots broke out around East Anglia. One of the first instances was on April 17th, when a crowd assembled in Gedding and smashed some farming equipment. After that Wattisham, Hitcham, and Rattlesden experienced disturbances on April 24th; Needham Market and Swaffham Bulbeck on May 7th; Bury St. Edmunds on May 14th; Brandon on May 16-18th; Norwich on May 16-20th; Hockwold on May 17th; Feltwell on May 18th; Hockham on May 19th; Downham Market on May 20-21st; and finally Littleport and Ely on May 21-24th. On May 23rd soldiers and local militia arrived in Ely, and over the next two days they forcibly suppressed the rioting. Sporadic disturbances continued elsewhere in East Anglia, but the Littleport and Ely riots had been subdued.
While the protestors had many reasons for agitating, their core motivation was survival. They demanded either food, money, a reduction in food prices, or all of the above. In Brandon, the protestors called for “Cheap Bread, a Cheap Loaf and Provisions Cheaper.” A woman at the protest reportedly demanded “Bread or Blood in Brandon this day.” One man admitted that the protestors “did not mean any injury but he could not live with his large family as things were, and they must have flour cheaper.” As many of the protestors were agricultural laborers, they broke agricultural machinery, presumably with the goal of taking back those jobs that the machinery would have eliminated.
The protesters felt they had no choice: they would have food or violence, because either way their deaths were imminent. William Dawson of Outwell, when asked why he was agitating, is reported to have said, “Here I am […] between Earth and Sky—so help me God. I would sooner loose [sic] my life than go home as I am. Bread I want and Bread I will have.” For the protesters, causing a disturbance was the only way to ameliorate their suffering. Yet not everyone perceived the disturbances as the desperate attempts of the poor to find respite from coming starvation. Some saw the riots as evidence of the moral failings of the lower classes.
“Economical humbug of 1816 or, saveing at the spiggot & letting out at the bunghole" (April 1816) by George Cruikshank. Here Cruikshank criticizes the government for what he perceives as an imbalance in spending. The Regent, Princess Charlotte, Lord Castlereagh and others are stealing public money for their own wants and desires, with very little money going towards “Public Service.”
The Times reported on the disturbances on May 21st, noting that the sheriff of Suffolk had arrived in London to request government aid to “restore tranquility.” The first disturbances, according to The Times, had been incited by “malicious [...] agents” who were likely “agricultural labourers.” While the paper acknowledged that the protesters demanded “a reduction in the price of bread and meat,” it still suggested that their protests had been illegitimate.
When the protests broke out again, The Times depicted the protesters as criminals and revolutionaries. They had apparently attacked the “houses of those persons who were obnoxious to them.” Protesters in one group carried spears and a flag inscribed with “Bread or Blood.” They “threatened to march to London.”
The Times reported on the 25th "that the disturbances in Norfolk and Suffolk are by no means at an end.” The paper detailed the movement of troops, and related a short narrative about a few magistrates who realized that the laborers’ wages were too low and raised them. The Times ardently hoped that the changes made by these magistrates in Downham were “proof of considerate attention to the complaints of the lower classes [and] will excite a correspondent gratitude in the minds of the latter, and induce them to return to habits of peaceful industry and order.” The suggestion by the paper was that the onus was on the laborers to stop protesting because a few officials had responded to their concerns. In other words, the laborers should just wait, because the government would come to their aid.
A long article on May 27th dove into the economics of the issue. The Times weighed whether the government should step in to support local agriculture when manufacturers in the country were not interested in the product. The paper concluded that government should not intervene, and suggested that the protestors were merely using the current high prices as a “pretense” for violence. The paper brushed off the concerns of the protestors in East Anglia, again suggesting that they were rioting for malicious reasons rather than out of desperation. A final article, on the 30th, reported that the disturbances had ceased, thanks to the efforts of soldiers and the local militia.
The Times placed the blame for “much of the disorderly conduct” on the poor laws, a system of welfare for impoverished people in the United Kingdom. The paper suggested that the laws had led the poor to expect handouts, and when they did not get what they wanted they became unruly. The rioters were brought to trial between June 17th and 22nd. In the end five people were executed, five exiled to Australia for life, four exiled for a shorter sentence, and ten imprisoned for twelve months. Food prices remained high in England until 1820.
“The Elgin Marbles! or John Bull buying stones at the time his numerous family want bread!!” (June 1816) by George Cruikshank. Cruikshank criticizes the government again for spending money contrary to the public good, in this case by purchasing the controversial Elgin Marbles from Lord Elgin. Screaming children in the image implore John Bull (a national personification of Great Britain, like Johnny Canuck or Uncle Sam), saying "Don't buy them Daddy! we don't want Stones. Give us Bread! Give us Bread! Give us Bread!".
Humanity has long endured changes in Earth's climate. Today, many people in the developed world can, for the moment, insulate themselves from the worst consequences of a changing climate. Yet millions in the developing world especially do not have that luxury. The media can either encourage or discourage action to address their suffering.
In 1816, The Times’ reporting of the Bread or Blood riots reinforced the idea that the protesters were criminals and malcontents, and that their demands were inappropriate or untimely. That reporting would only bolster the biases of those in control. So despite a compromise drawn up by the Ely magistrates on May 23rd to raise wages according to the price of flour and the size of a laborer’s family, Lord Sidmouth placed a one-hundred-pound bounty on those “unlawfully assembled” in the region on May 25th.
The Bread or Blood riots are a reminder that climate insecurity has been the rule and not the exception in human history. Newspaper accounts of the riots reveal that the media not only described events but also helped shape them in ways that exacerbated the worst effects of climate change for the most vulnerable. Today, media depictions of citizens furious about their lack of clean food or water, protestors enraged by the seizure and pollution of their homes, and refugees displaced by drought and violence can similarly worsen the social consequences of global warming. We must have a media that fairly describes the impacts of climate on people around the world, and we must keep a critical eye on media in order to adapt to and perhaps mitigate climate change.
“Disturbances in Norfolk And Suffolk.” The Times, May 23, 1816, p. 3. The Times Digital Archive.
“London, Saturday, May 25, 1816.” The Times, May 25, 1816, p. 3. The Times Digital Archive.
“London, Monday, May 27, 1816.” The Times, May 27, 1816, p. 3. The Times Digital Archive.
“London, Thursday, May 30, 1816.” The Times, May 30, 1816, p. 2. The Times Digital Archive.
“Monthly Agricultural Report.” The Observer, Feb. 4, 1816, p. 4. ProQuest Historical Newspapers: The Guardian and The Observer.
“Monthly Agricultural Report.” The Observer, May 5, 1816, p. 4. ProQuest Historical Newspapers: The Guardian and The Observer.
Oppenheimer, Clive. "Climatic, Environmental and Human Consequences of the Largest Known Historic Eruption: Tambora Volcano (Indonesia) 1815." Progress in Physical Geography, vol. 27, no. 2, 2003, pp. 230-259, doi:10.1191/0309133303pp379ra.
Peacock, Alfred James. Bread or Blood: A Study of the Agrarian Riots in East Anglia in 1816. Victor Gollancz, 1965.
Post, John D. The Last Great Subsistence Crisis in the Western World. Johns Hopkins University Press, 1977.
“Riots in Suffolk.” The Times, May 21, 1816, p. 3. The Times Digital Archive.
“Tambora.” Global Volcanism Program, Smithsonian Institution, 2013. volcano.si.edu/volcano.cfm?vn=264040
Ward, Peter L. “Sulfur Dioxide Initiates Global Climate Change in Four Ways.” Thin Solid Films, vol. 517, no. 11, 2009, pp. 3188-3203, doi:10.1016/j.tsf.2009.01.005.
“Yesterday the Princess Charlotte and her husband received congratulatory addresses from Salisbury and.” The Times, May 23, 1816, p. 3. The Times Digital Archive.
Dr. Ruth Morgan, Monash University
Non-tabular iceberg off Elephant Island in the Southern Ocean. Source: Andrew Shiva, Wikipedia.
Ice, or a lack of it, is an “icon” of anthropogenic climate change. Earlier this year, researchers reported that a rift in Antarctica’s fourth-largest ice shelf has accelerated and could soon cause a vast iceberg to break off into the sea. Should the ice shelf collapse, the glaciers that once fed it will flow unimpeded into the sea. Glaciers like these, Mark Carey has observed, have become an “endangered species” of the Anthropocene. Yet only a few decades ago, Antarctic ice was the hero in a visionary episode of the planet’s recent “cryo-history”.
In October 1977, scientists met at Iowa State University to discuss the latest findings in the emerging field of “iceberg utilization”. Eager to promote the cause was conference co-sponsor Prince Mohammed al-Faisal of Saudi Arabia, who flew an iceberg weighing over two tonnes from the Portage Glacier Field near Anchorage, Alaska to Ames, Iowa for the occasion – producing at least 7 tonnes of carbon dioxide over the 5,000km journey. One local couple, who brought plastic bags, a bucket, and an ice-pick to the iceberg’s unveiling, told the New York Times, “I don’t know what we’ll do with it – serve it in drinks, I guess. We’ll have a cocktail party”.
A series of US television news features documenting the Iceberg Utilization Conference, October 1977. Source: YouTube / Special Collections and University Archives, Iowa State University.
These stunts amused onlookers, but they were no laughing matter for the researchers studying the possibility of towing Antarctic icebergs to arid and semi-arid climes. Iceberg utilization was a tantalizing prospect for solving one of the world’s pressing problems: global water shortages. In their controversial study The Limits to Growth, the interdisciplinary research group the Club of Rome had earlier warned that the availability of fresh water was a limit to growth that “will be reached long before the land limit becomes apparent”. Bolstering this neo-Malthusian prediction were the widely reported droughts in the Sahel and Ukraine, and the failure of the Indian monsoon, during the early 1970s.
An excerpt from the public affairs program, Dimension 5, which aired on WOI-TV in central Iowa, USA, October 1977. Panellists include Prince Mohamed Al Faisal of Saudi Arabia, Henri Bader, Daniel J. Zaffarano, Richard L. Cameron, and Ed Cronick. Source: YouTube / Special Collections and University Archives, Iowa State University.
These anxieties were the focus of the 1977 United Nations Conference on Water in Mar del Plata, Argentina, where fresh water was declared a “scarce asset” that demanded coordinated resource development and management. Among the options discussed to increase water supplies were so-called “complex technologies” and “non-conventional methods”, such as seawater desalination. By the late 1970s desalination was already well established in Kuwait, and Saudi Arabia was eager to replicate its neighbour’s success. Leading this mission (at least until Antarctic icebergs beckoned) was the head of the Saudi Saline Water Conversion Corporation: Prince Mohamed al-Faisal. He shared his vision with the Christian Science Monitor: “Over a period, we would hope to change the vegetation and climate in some coastal areas”.
The Prince’s idea was several decades in the making. The prospect of using icebergs to modify local climates and to provide endless water supplies to the world’s thirstiest regions had emerged in the decade after the Second World War. In a 1949 class at the Scripps Institution of Oceanography in California, oceanographer John Isaacs had speculated on the subject, and later expanded on his thinking in the February 1956 issue of Science Digest. He proposed floating an Antarctic iceberg along the Humboldt Current to the coast of southern California from where it could supply water to Los Angeles.
The feasibility of such a scheme had been confirmed in 1969, when glaciologist Willy Weeks and geophysicist Bill Campbell surprised even themselves by concluding that towing icebergs to arid lands was “within the reach of existing technology”. They based their calculations on a large tabular iceberg, twice the size of the Great Pyramid of Giza, which was less likely to roll in transit and more likely to be found near the Antarctic than the Arctic. The optimum routes for towing such an iceberg, they suggested, were from the Amery Ice Shelf to southwestern Australia and from the Ross Ice Shelf to the Atacama Desert.
“Optimum towing paths between the Amery Ice Shelf and Australia and the Ross Ice Shelf and the Atacama Desert.” Fig. 8, Weeks and Campbell, 1973, p. 220.
In 1973, the National Science Foundation and the Rand Corporation sponsored a subsequent report on the feasibility of such a scheme for southern California. Antarctic icebergs could supply water for urban, industrial, and agricultural demands, while helping to abate the growing thermal pollution of the industrialized region. According to the report’s estimates, towing an iceberg from the Ross Sea to the Pacific southwest would be significantly cheaper than inter-basin water transfers and desalination. Furthermore, nuclear energy could be used, alleviating the need for fossil fuels during a decade of uncertain oil supplies.
The possibility of endless water supplies was too good to ignore and the Saudi prince assembled experts from around the world to advance the field of “iceberg utilization”. His 1977 conference in Iowa attracted scientists from arid and semi-arid countries such as Egypt, Greece and Libya, as well as nations with polar territories, such as Australia, Chile and Canada. Nearly three quarters of the attendees were from the United States, most of whom were associated with the military-industrial-academic complex. They included researchers from the Jet Propulsion Laboratory, Tetra Tech International, the Lawrence Berkeley Laboratory, the US Army Cold Regions Research and Engineering Laboratory, and the Naval Weapon Centre.
The lone woman speaking at the conference was the pioneering meteorologist Joanne Simpson from the University of Virginia, Charlottesville. Simpson had been director of the experimental meteorology laboratory of the National Oceanic and Atmospheric Administration and a member of the Weather Modification Advisory Board. Two decades of studying the intersections of cloud physics and hurricane research informed her comparison of Antarctic icebergs to cloud seeding, as well as her study of the atmospheric impacts of iceberg utilization. Although towing an iceberg would cost more than cloud seeding, she estimated that its meltwater would more than make up for the expense. In icebergs, Simpson also saw a means to mitigate the toll of tropical hurricanes: using an iceberg to lower the surface temperature of the ocean ahead of an advancing hurricane would help to reduce its destructive winds.
“Illustration of possible new approach to the hurricane mitigation aspect of weather modification. Hurricanes are known to diminish in strength when they move over cooler water, here shown hypothetically to be supplied by a melting iceberg.” Source: Fig. 5, in Simpson, 1978, p. 865. Artist: Tom Henderson.
Simpson was well aware of the credibility gap that such endeavours faced. In 1978 she wrote, “For meteorology as a whole, public overheated controversy on weather modification gives the entire profession an image of ridiculous bumblers or even charlatans”. But the opportunity to “serve humanity” outweighed these concerns and she welcomed alternative modification methods.
Despite the promise of iceberg utilization, its potential impact on local climates became one of the many reasons why the vision did not become a reality. In Australia, for instance, enthusiastic plans for the continent’s southwest were rejected in the mid-1980s on the grounds that an iceberg “parked offshore for several years” might affect the regional climate in unexpected and unwanted ways. Peter Schwerdtfeger, the scheme’s Australian proponent, lamented that its feasibility lay not in science and technology, but in “politically and economically based decisions”. He remained confident, however, that iceberg utilisation would occur when “individual nations recognise their obligations to the more thirsty segment of mankind” and choose to exploit the Antarctic icebergs that otherwise “melt pointlessly in the Southern Ocean”. According to this logic, the failure to take advantage of the icebergs was tantamount to wasting precious water resources.
The possibility of iceberg utilization was one of many post-war technological visions. The futurism and science fiction of the atomic age urged the exploration and exploitation of new planetary frontiers such as the deep ocean and outer space. In the Cold War context, measuring, monitoring and manipulating the physical environment on a global scale had the potential to fulfil both military and peaceful ambitions. The iceberg “visioneers” were bit players in a wider debate about the Earth’s future, one that pitted the constraints of ecological limits against the possibilities of technological innovation. Just as the atom offered an inexhaustible source of cheap energy, Antarctica was a cornucopia of renewable fresh water simply awaiting the application of human ingenuity. Four decades later, we are searching for ways to keep that water well and truly locked up.
Al-Nakib, Farah, Kuwait Transformed: A History of Oil and Urban Life (Palo Alto, CA: Stanford University Press, 2016).
Behrman, Daniel with John D. Isaacs, John Isaacs and His Oceans (Washington, DC.: ICSU Press, 1992).
Carey, Mark, “The History of Ice: How Glaciers Became an Endangered Species,” Environmental History 12 (2007): 497-527.
Carey, Mark, M. Jackson, Alessandro Antonello and Jaclyn Rushing, “Glaciers, Gender, and Science: A Feminist Glaciology Framework for Global Environmental Change Research,” Progress in Human Geography 40, no. 6 (2016): 770-93.
Fleming, James R., Fixing the Sky: The Checkered History of Weather and Climate Control (New York: Columbia University Press, 2010).
Gosnell, Mariana, Ice: The Nature, the History, and the Uses of an Astonishing Substance (Chicago: University of Chicago Press, 2005).
Hamblin, Jacob Darwin, Arming Mother Nature: The Birth of Catastrophic Environmentalism (New York: Oxford University Press, 2013).
Harper, Kristine C., Make it Rain: State Control of the Atmosphere in Twentieth-Century America (Chicago: University of Chicago Press, 2017).
Hult, J.L. and N.C. Ostrander, Antarctic Icebergs as a Global Fresh Water Resource (Santa Monica, CA: Rand, 1973).
Husseiny, A.A. (ed.), Iceberg Utilization: Proceedings of the First International Conference and Workshops on Iceberg Utilization for Fresh Water Production, Weather Modification, and Other Applications, held at Iowa State University, Ames, Iowa, USA, October 2-6, 1977 (New York: Pergamon Press, 1978).
Jones, Toby Craig, Desert Kingdom: How Oil and Water Forged Modern Saudi Arabia (Cambridge, MA: Harvard University Press, 2010).
Leslie, Stuart W., The Cold War and American Science: The Military-Industrial-Academic Complex at MIT and Stanford (New York: Columbia University Press, 1993).
McCray, W. Patrick, The Visioneers: How a Group of Elite Scientists Pursued Space Colonies, Nanotechnologies and a Limitless Future (Princeton: Princeton University Press, 2013).
Rozwadowski, Helen M., “Arthur C. Clarke and the Limitations of the Ocean as a Frontier,” Environmental History (2012): 1-25.
Sabin, Paul, The Bet: Paul Ehrlich, Julian Simon, and Our Gamble over Earth’s Future (New Haven, CT: Yale University Press, 2013).
Schmidt, Jeremy J., Water: Abundance, Scarcity, and Security in the Age of Humanity (New York: NYU Press, 2017).
Schwerdtfeger, Peter, “The Development of Iceberg Research and Potential Applications,” Polar Geography and Geology 9, no. 3 (1985): 202-209.
Simpson, Joanne, “What Weather Modification Needs – A Scientist’s View,” Journal of Applied Meteorology 17 (1978): 858-66.
Sörlin, Sverker, “Cryo-History,” in The New Arctic, (eds.) Birgitta Evengård, Joan Nymand Larsen and Øyvind Paasche (New York: Springer, 2015), pp. 327-39.
Weeks, Wilford J. and William J. Campbell, “Icebergs as a Freshwater Source: An Appraisal,” Journal of Glaciology 12, no. 65 (1973): 207-33.
Patrick Gage, Georgetown University
People care about climate change when it affects them. That is why Pacific islanders fear rising sea levels more than the average American, and why many who live in coastal cities fear a projected increase in tropical cyclones more than those further inland. Yet the idea that an environmental change “over there” will not affect communities “here” actually makes little sense. History is rife with examples of human crises brought on by seemingly distant climatic events.
One of the clearest examples unfolded in late nineteenth-century Northeastern Brazil (Nordeste). A powerful El Niño-Southern Oscillation (ENSO) event warmed the waters of the equatorial Pacific Ocean, changing atmospheric circulation in ways that brought extreme rain shortages to Brazil and ultimately launched the nation’s first rubber boom. The Grande Seca, or “Great Drought,” of 1877-1878 not only killed hundreds of thousands of northeasterners (nordestinos), but also sparked massive internal migration. The latter proved particularly problematic for the state of Ceará, from which thousands emigrated. Cearenses thus provided rubber barons in nearby Amazonas and Pará an invaluable supply of cheap labor, which they needed to meet growing demand. By 1900, the country exported more rubber than any other commodity except coffee. El Niño therefore shaped the history of Brazil.
ENSO events affect the global environment on an irregular basis. Typically, Peru’s cold Humboldt Current flows northward along the South American coast before easterly trade winds push it west along the equator. Warmed by the sun, its waters increase in temperature as they approach Indonesia, making the western Pacific hotter than the east. El Niño reverses these trends: trade winds and the Humboldt’s westward flow subside, westerly winds pick up, Kelvin waves carry warm water from Asia to South America in a process called “advection,” and hot, humid air masses travel toward Peru and Ecuador. Sea temperature in the eastern equatorial Pacific subsequently rises, causing changes in precipitation across the Americas. While coastal Peru faces torrential rain, Brazil’s Nordeste experiences severe drought. The distinct relationships, or teleconnections, between ENSO and local climates generate different phenomena depending on the region. When Western Canadians enjoy an unusually warm winter, for example, Western Europeans may endure an especially cold one.
El Niño and drought in Northeastern Brazil therefore often coincide, but not always. The Brazilian Northeast has struggled with intermittent drought for centuries. Although its sugar- and cotton-heavy coast generally receives sufficient rain, the region suffered no fewer than forty-four unique dry spells between 1557 and 1992, or approximately one every ten years. Removing an abnormally wet period from 1615-1691 reduces that average to one every eight. What is more, of the fifteen so-called “major” droughts—those spanning at least two consecutive summers—only six occurred before 1800, implying a quantitative and qualitative increase over the past 200 years. While some of these dry spells occurred in concert with ENSO, many did not. Water shortages plague the Nordeste regardless of ocean temperature.
Different droughts affected the water-dependent Northeast differently. Though many were forgotten, some left indelible marks, none more than the Grande Seca. From 1877 to 1878, two “very strong” El Niño years dramatically increased water shortages and decimated the Nordeste, killing livestock and people by the tens of thousands. Ceará suffered most. As cattle and crop losses wiped out food supplies, the state’s death toll mounted. By 1878, 175,000 Cearenses had perished. All told, at least 500,000 nordestinos died and three million fled their homes. Newspapers from Ceará described the tragedy in heart-wrenching detail.
On 6 January 1877 (mid-summer), Cearense noted the first signs of hardship: “The lack of rains is already being felt. From Sobral and other … points of the province they tell us … the drought is … causing considerable damage.” Desperate letters painted a dismal picture. On 11 March, one man in Crato wrote: “We are with a terrible drought … and only God knows how painful this scourge will be.” Relayed another from Caixoçó: “The drought is ravaging everything, the mortality of cows is astonishing.”
The situation did not improve as the late rainy season of March gave way to early winter. One correspondent from Assaré feared complete human annihilation in the surrounding countryside, while O Retirante (“The Refugee”) lamented the “emaciated bodies of our little children, wives and fathers.” A letter published several days before Christmas ended 1877 on a depressing note: “Already we are in the middle of December and not any rain! The drought with all its procession of horrors proceeds, threatening to swallow everything.”
The Grande Seca officially ended in 1878, but its effects lasted far longer. The drought crippled Northeastern sugar barons, who had watched their investments wither since the early 1800s. Cotton growers, whose business boomed during and after the American Civil War (1861-1865), likewise faced renewed headwinds, while cattle ranchers counted their losses in the hundreds of thousands of heads. The deadliest drought in Brazilian history, exacerbated by two consecutive years of exceptionally strong El Niño, therefore had a significant economic impact on the Nordeste, draining it of much-needed capital and contributing to the region’s lackluster development.
Above all, drought victims needed jobs, especially in Ceará. As an 11 March 1877 letter from Icó indicated, people often died “not because there [was] an absolute lack of foods, but because there [was] nothing with which to buy them.” Millions of desperate Cearenses therefore migrated to major population centers, hoping to find work. Among emigrants’ limited options, Brazil’s burgeoning rubber industry proved particularly appealing, both for its relatively high wages and geographical proximity.
Based in the Amazon Valley, namely the states of Amazonas and Pará, Brazilian rubber production did not begin until the late 1700s, after French explorer Charles Marie de La Condamine first watched natives use a “milky, viscous liquid” from the Hevea brasiliensis tree to make boots, toys, and bottles. Fueled by what amounted to a minor “gold rush,” exports of raw rubber and rubber products grew steadily through the early 1800s. The trade took off when Charles Goodyear discovered vulcanization in 1839, which made rubber resistant to extreme temperatures. Exports jumped from 388,260 kg in 1840 to 2,673,000 kg in 1860. Nevertheless, rubber remained largely irrelevant in Brazil until its first boom in the 1880s, when price increases and an influx of cheap labor pushed the commodity’s export share to 10 percent. That number soared to 39 percent by 1910. Brazil’s natural claim to Hevea made it the world’s largest producer for three decades.
Despite remarkable success, Brazilian rubber barons faced constant labor shortages throughout the late nineteenth and early twentieth centuries. The Grande Seca thus benefitted them immensely. Starving Cearenses, whom the rubber industry “desperately needed,” cared little about working conditions as long as they were paid, and so accepted jobs few others dared to take—among them tapping Hevea trees in a hot, disease-ridden rainforest.
During the Grande Seca, Ceará became a key state for labor recruiters from Amazonas and Pará. In 1916, Joseph Woodroffe, a European eyewitness, claimed immigration to the Amazon Valley consisted exclusively of Cearenses, largely in response to the drought. Weinstein, Barham and Coomes, Caviedes, and Resor also acknowledge the Grande Seca’s role in driving poor Cearenses to the jungle, where they supported plantations as cheap tappers (seringueiros). But despite catastrophic death tolls from 1877 onward, emigration did not find universal support in Ceará. On the contrary, Cearense and its editors openly opposed the state’s depopulation for economic and humanitarian reasons.
Cearense arranged the debate as follows. On 15 April 1877, an “enlightened friend” in Sobral noted: “We continue to think … one of the most useful ways of applying aid, to which the State is obligated, would be … to promote seriously the emigration of our population to more fertile and almost unpopulated regions of other provinces.” Several pages later, however, a sorrowful column lamented the fact that thirty refugees had recently arrived in Fortaleza, Ceará’s capital, and hoped to reach the Amazon Valley. “This idea of emigration to other provinces,” the author mused, “is of incalculable disadvantages to Ceará.” Cearense’s publishers agreed, as future editions only “supported” emigration insofar as they acknowledged opposing views and occasionally allowed independent writers to criticize their claims.
The paper solidified its stance on 18 April. Emigration to Amazonas and Pará, it argued, was “harmful … to [Ceará] … because it [ripped out] a large number of strong arms for plowing.” Over the next seven months, such fear came up time and again. In July, for example, one writer professed concern for the state’s future: “…supposing [the drought] is transitory, how will we repopulate our deserted hinterlands if we remove … by means of a broad emigration, their natural inhabitants?” Together, these columns typified a standard economic argument against outmigration, namely that Ceará would need people to rebuild once the Grande Seca passed, and therefore could not absorb any more losses than necessary. But this only explains some of Cearense’s hostility toward open borders.
Though principally worried about Ceará’s financial prospects, educated nordestinos also expressed sympathy for destitute workers. Cearense printed articles throughout 1877 noting that rubber jobs in Amazonas and Pará were difficult and exploitative. On 18 April, the paper published several letters from Father José Thomaz, “who painted with blackest colors the luck of the poor emigrant, who is there [in the Amazon] reduced to the hardest and cruelest captivity by the rubber tappers to whom he hires his services.” Another pundit claimed Cearenses who left for Amazonas would likely “perish in the swamps.”
As more reports of emigration made their way into Cearense, so too did overt warnings. “Our wretched brothers who have gone to [Amazonas] have suffered horrible trials,” wrote one author on 18 October. Yet faced with certain death by disease or starvation, Cearenses continued to flee. By 23 September, at least 1,552 had crossed into the Amazon Valley, followed by hundreds more before the end of the year. Most left for rubber plantations.
Cearenses migrated by the thousands to Amazonas and Pará at the same time Brazil’s first rubber boom began (early 1880s). Those dates are no coincidence. While Amazonian elites owed their success to many different factors, drought-stricken nordestinos provided the foundation. Without adequate labor, there would never have been a rubber industry, let alone a profitable one.
Late nineteenth and early twentieth century Brazilian rubber production had far-reaching environmental consequences. When Emperor Pedro II created the province of Amazonas in 1850, Manaus, its capital, comprised little more than “a small collection of mud huts.” That changed rapidly as speculators flooded the region. The Amazonian North’s population quadrupled from 250,000 in 1853 to almost one million in 1910. Manaus and its Paraense counterpart, Belém, benefitted immensely: electricity, streetcars, exquisite theaters, and large ports graced the once-barren cities. Countless new rubber trails cut through the rainforest as well, in addition to increased traffic on the river. That said, the industry’s initial emphasis on wild Hevea trees delayed mass deforestation for several decades, while industrial cattle ranching, which would have required a dramatic physical reorganization of the Amazon Valley, lacked sufficient investment.
Droughts have shaped Northeastern Brazil for centuries, yet the Grande Seca stands out. Not only was it longer and drier than most, but it also came at a time of profound demographic and economic transformation in Brazil. That increased its death toll and its consequences for the human and environmental histories of Brazil.
The past, like the present, proves Earth’s interconnectedness. Environmental shifts “over there” will eventually affect us “here.” More than one hundred years ago, warming water in the Pacific Ocean changed the course of Brazilian history, driving extraordinary investment in the previously untapped Amazon Valley. In the same way, natural disasters, rising sea levels, and other symptoms of global warming will inevitably influence how all of us live our lives, regardless of geography.
There is no running away. We must face this crisis together.
Barham, Bradford L., and Oliver T. Coomes. Prosperity’s Promise: The Amazon Rubber Boom and Distorted Economic Development. Boulder: Westview Press, 1996.
Burns, E. Bradford. A History of Brazil. 3rd ed. New York: Columbia University Press, 1993.
Caviedes, César N. El Niño in History: Storming Through the Ages. Gainesville: University Press of Florida, 2001.
Cearense. Fortaleza, Ceará. 1877. Available at: http://bndigital.bn.gov.br/hemeroteca-digital.
Glantz, Michael H. Currents of Change: El Niño’s Impact on Climate and Society. Cambridge: Cambridge University Press, 1996.
Gergis, Joëlle L., and Anthony M. Fowler. “A history of ENSO events since A.D. 1525: implications for future climate change.” Climatic Change 92, nos. 3-4 (2009): 343-387.
O Retirante. Fortaleza, Ceará. 1877. Available at: http://bndigital.bn.gov.br/hemeroteca-digital.
Pedro II. Fortaleza, Ceará. 1877. Available at: http://bndigital.bn.gov.br/hemeroteca-digital.
Quinn, William H. “A study of Southern Oscillation-related climatic activity for A.D. 622-1900 incorporating Nile River flood data.” In El Niño: Historical and Paleoclimatic Aspects of the Southern Oscillation, edited by Henry F. Diaz and Vera Markgraf, 119-150. Cambridge: Cambridge University Press, 1992.
Resor, Randolph R. “Rubber in Brazil: Dominance and Collapse, 1876-1945.” The Business History Review 51, no. 3 (1977): 341-366.
Villa, Marco Antonio. Vida e morte no sertão: História das secas no Nordeste nos séculos XIX e XX. São Paulo: Editora Ática, 2000.
Weinstein, Barbara. The Amazon Rubber Boom: 1850-1920. Stanford: Stanford University Press, 1983.
Woodroffe, Joseph F. The Rubber Industry of the Amazon and How Its Supremacy Can Be Maintained. Edited by Harold Hamel Smith. London: T. Fisher Unwin and Bale, Sons and Danielsson, 1916. Available at: https://archive.org/details/rubberindustryof00woodrich.
Dr. Bathsheba Demuth, Brown University
Most students at Brown University know Professor Kathleen Hess from the two-semester challenge of organic chemistry. But in a class that debuted this fall, “Exploration of the Chemistry of Renewable Energy,” Dr. Hess blended the tools of her discipline with questions of human impacts on the climate, renewable energy technologies, and the social impact of how energy is generated and used. The result is a socially engaged course that joins social science with bench science. “I thought this would be a perfect way to teach students who were not science majors,” Hess explains. “That was my goal.”
Courses on climate or energy history, renewable energy, and the relationship between climate and society are now taught at universities and colleges across the country. Most are designed by faculty in humanities, earth science, or engineering departments. Hess’s class offers a new model. Inspired by the Chemistry Collaborations, Workshops, and Communities of Scholars (cCWCS) pedagogy seminars, Hess's syllabus combines interdisciplinary readings, guest lectures, writing assignments, and laboratory experiments. “I wanted to give students both background on the topic,” she says, “and then give them the hands-on experiment so they would have practical experience.”
The course began by examining why renewable energy sources are increasingly important. Students read about fossil fuel pollution, climate change, and energy politics. They also did lab experiments to calculate how much energy is required to light a classroom. Then the syllabus moved on to examine batteries, fuel cells and solar panels. Hess framed each topic around a question. “Scientists should always be enquiring rather than saying we’re just going to the lab to make such and such,” Hess says. In one case, the class spent several weeks researching sources and uses of biofuel energy. Then students went to the lab to make fuel out of food waste from the Brown dining halls. “The students were really excited about this,” Hess notes. But when the class compared the energy yield to other fuels, “there was a lot of ‘oh, this is why we don’t do this,’” Hess says. “It was more of an illustration than just looking at another graph, because they saw and understood the processes involved.”
In another case, students produced acid rain in a petri dish. Unlike history or policy classes, where acid rain is a topic – or most chemistry classes, where experiments are done in solution – Hess’s students saw “how concrete and bridges erode, and saw how materials travel through the air.” Students designed experiments to measure individual carbon emissions. In another experiment the class made their own hydrogen fuel cells. It required working with hydrochloric acid. Hess says hands-on exercises like this generated a great deal of student enthusiasm – not just energy between fuel cells – but were also complex and delicate. This was sometimes a challenge for students not used to the lab sciences. “Sometimes just getting ready to do the labs,” Hess says, “took some time and explaining. Sometimes they didn’t know how to start. So there could be a bit of inertia there.” Overall, however, Hess found “the level of student interest was really high. At the end of the course, the students told me that none of the lab assignments felt like homework, because they were so enjoyable.”
Across case studies, Hess linked the experiments back to social, political, and economic questions. Hess says her class arrived with “quite a few preconceived notions about why people believe in global warming or not, why they’re interested in renewable energy or not.” Through readings and lectures that covered climate change, the development of the current energy grid, the history of the electric car, the use of solar panel systems, and how humans have used different energy sources in the past, students started thinking about “how none of them have ever lived without power – without a light switch to turn on.” Students read about everything from global energy transitions to oil company correspondence about fossil fuel development. “I wanted them to see that we can always judge why people use the resources they do,” Hess explains, “but there are multiple sides to the story.”
Seeing these multiple sides helped students understand how the physical principles and technologies they were learning about in the lab “was one thing, but how to incorporate it into society is another,” Hess says. She had each student choose a renewable technology – from algal biofuels to concentrated solar – and design a brochure to convince consumers to use a new source of energy. Students also presented the results of their alternative energy research to the class. For Hess, this was the most inspiring part of the course. As each student learned to combine their technical knowledge from labs with their research on specific fuels, she says “they felt that was encouraging because they had to come up with an alternative energy to talk about, and knew collectively about all these different options.”
While thinking about climate change and the future is often discouraging and leaves individuals unsure how to respond, Hess found this course affirmed her sense that “education is the first step away from not knowing what to do. Especially mindful education where we don’t just judge things, but examine the combination of physical processes and assumptions that make them happen.” The best approaches to teaching climate change often combine perspectives from many disciplines, from the sciences to the humanities.
Dr. Dagomar Degroot, Georgetown University
The world is warming, and it is warming fast. According to satellites and weather stations, Earth's average annual temperature will smash the instrumental record this year, likely by around 0.1°C. Last year, global temperatures broke the record by around the same amount. That may not seem impressive, but consider this: temperatures have climbed by about 0.1°C per decade since the 1980s. In just two years, therefore, our planet catapulted two decades into a hotter future.
Global climate change on this scale, with this speed, is unprecedented in the history of human civilization. Yet that history has still coincided with other, smaller but still impressive changes in Earth's climate. Humans may have played a minor role in some of these changes. The key culprits, however, were often violent explosions on Earth that coincided with periods of unusual solar activity. The most dramatic climate changes usually involved global cooling, not warming. The consequences for communities and societies around the world could be profound, in ways that offer lessons for our fate in a changing climate.
One of the coldest periods in the history of human civilization started in the early sixth century CE. Growth rings in the wood of trees suddenly narrow around 536 CE, and again around 541 CE. This narrowing reveals that trees practically stopped growing as Northern Hemisphere temperatures plunged by as much as 3°C, relative to long-term averages.
Other scientific "proxy" sources that responded to past climate changes reveal the same trend. A large team of interdisciplinary scholars, led by Ulf Büntgen, recently concluded that 536 CE was the first year of a "Late Antique Little Ice Age" - not to be confused with the better-known Little Ice Age of the early modern period - that chilled the Northern Hemisphere and perhaps the globe until 660 CE.
What could have caused this cooling? Cosmogenic isotopes tell us that solar activity had been falling for more than a century, as the sun gradually entered a "grand solar minimum." But that does not explain why Earth's climate changed so profoundly, and so abruptly, in the early sixth century CE.
Scientists now believe that ice cores containing traces of volcanic ash provide compelling evidence for a remarkable series of major eruptions, in 536, 540, and 547 CE. Big volcanic eruptions in the tropics can cool the Earth by releasing sunlight-scattering sulphur into the atmosphere. Stratospheric winds spread this sulphur from the equator into both hemispheres, which ultimately creates a global volcanic dust veil. When eruptions happen in quick succession, Arctic sea ice can expand dramatically. Since bright sea ice reflects more sunlight than water, the Earth cools in response, which of course leads to more sea ice, more cooling, and so on.
Catastrophic volcanic eruptions, coinciding as they did with a prolonged decline in solar activity, may well have released enough aerosols into the atmosphere to usher in a much cooler climate. Yet sixth-century layers in Greenlandic ice cores may also suggest a very different, and even more exotic, culprit for climatic cooling.
Somehow, microscopic marine organisms of a kind normally found near tropical coasts ended up in ice layers that correspond to 536 and 538 CE. Layers dating from 533 CE also hold nickel and tin, substances that rarely appear in Greenlandic ice. Both metals are common in comets, however.
A team of scientists led by Dallas Abbott recently concluded that dust from the tail of Halley's Comet may have started cooling the Earth as early as 533 CE. By reconstructing the past orbits of the comet, scientists discovered that it made a particularly close pass around the Sun in 530 CE. At around that time, Chinese astronomers recorded a remarkably bright comet in the night sky.
Earth regularly passes through debris left in the wake of Halley's Comet, and that debris might have been especially dense in the 530s and 540s. Meteor showers, therefore, may well have left cooling dust in the atmosphere, and metals in the ices of Greenland.
Tidal forces created by the gravity of a massive object - such as the Sun - can easily fragment cometary nuclei, most of which are collections of rubble left over from the primordial solar system. Dust released by such a breakup can dramatically brighten a comet. Perhaps that is what Chinese astronomers witnessed in 530 CE, as Halley's Comet swung around the Sun.
According to Abbott and her coauthors, a piece of the comet may then have collided with Earth, launching sea creatures high into the atmosphere. Melted metal and gravity anomalies in the Gulf of Carpentaria off Australia suggest that an impact happened there sometime in the first millennium CE. At around the same time, Aboriginal Australians etched symbols into caves that may well have represented comets.
It may well be that an extraordinary confluence of extraterrestrial impacts and volcanic eruptions, coinciding with a gradual fall in solar activity, chilled the Earth in the 530s and 540s CE. These dramatic environmental changes naturally astonished contemporary writers. In 536 CE, Procopius of Caesarea, a major scholar of the Eastern Roman Empire, wrote that the “sun gave forth its light without brightness, like the moon.” According to John of Ephesos, “there was a sign in the sun the like of which had never been seen and reported before in the world . . . The sun became dark and its darkness lasted for one and a half years."
A Syrian chronicler recorded that "The earth and all that is upon it quaked; and the sun began to be darkened by day and the moon by night." Chinese astronomers lost sight of Canopus, one of the brightest stars in the night sky. If there was a dust veil, it may well have been thick enough to obscure the heavens, whatever its origins.
Cassiodorus, a Roman statesman in the service of the Ostrogoths, wrote perhaps the most striking descriptions of the changes in Earth's atmosphere. "Something coming at us from the stars," he explained, had led to a "blue colored sun," a dim full moon, and a "summer without heat." Amid "perpetual frosts" and "unnatural drought," plants refused to grow and "the rays of the stars have been darkened." The cause, to Cassiodorus, must be high in the atmosphere, for "things in mid-space dominate our sight," and the "heat of the heavenly bodies" could not penetrate what seemed like mist.
Of course, we must guard against the assumption that observers such as Cassiodorus or Procopius simply recorded what they saw in the natural world. Descriptions of environmental calamities in ancient, medieval, and even early modern texts can be allegorical, representing social, not environmental developments. Still, many authors wrote eerily similar accounts of the real environmental upheavals in the 530s CE. To the modern eye, that of Cassiodorus in particular may seem to add evidence for a cometary cause of contemporary cooling.
As temperatures plummeted and plants withered, communities around the world suffered. Scientists have examined pollen deposits that reveal sharp drops in the extent of cultivated land across Europe. Shorter growing seasons probably led to food shortages and famines that emptied once-thriving villages. Archaeological evidence suggests, for example, that Swedes abandoned most of their population centers in the sixth century; forests then swallowed the deserted settlements. Swedish survivors apparently created new towns in far smaller numbers, in upland areas removed from their former dwelling places.
Famines may have had particularly severe consequences across the densely populated Mediterranean. In 533 CE, just as cometary dust may have started entering Earth's atmosphere, the emperor of the Eastern Roman Empire, Justinian I, embarked on a costly campaign to restore the Western Empire. His subsequent wars in the Mediterranean, combined with a war against the Sassanid Empire that erupted in 540 CE, drew precious resources from the imperial countryside. As growing seasons declined, the demands of war compounded food shortages for millions of imperial citizens. Starvation spread through the empire, but worse was to come.
Malnutrition reduces fat-storing cells that produce the hormone leptin, which plays a key role in controlling the strength of the human immune system. In the sixth century, food shortages therefore weakened immune systems on a grand scale, leaving millions of people more vulnerable to disease. Those who survived famines also migrated to new towns or cities, increasing the likelihood that those infected with diseases would spread them.
Unfortunately for the inhabitants of what was left of the Roman Empire, Yersinia pestis, the pathogen behind the bubonic plague, was about to make its first appearance in Europe. From 541 to 542 CE, the “Plague of Justinian” swept through both the Western and Eastern halves of the Roman Empire, killing as many as fifty million people. In a warmer, more stable climate, the death toll may well have been far lower.
Not surprisingly, Justinian's campaign to retake the Western Empire stalled after the early 530s CE, although the reunified Roman Empire did reach its maximum extent in the 550s CE. Imperial resources were stretched thin, however, and European kingdoms reversed most of the new conquests soon after Justinian's death.
Climatic cooling probably had cultural consequences, too. There are signs, for example, that religious activity surged across Scandinavia as temperatures plunged. In times of crisis, devout Scandinavians offered gold to their gods in a way we might find counterintuitive: by burying it. Dating these underground hoards is tricky, but it seems that Scandinavians buried most of them in the sixth century CE. These burials contributed to a gold shortage in Scandinavia that would endure for centuries.
The great oral traditions of Norse mythological poetry also date from the sixth century. Most people have heard of Ragnarök: the "twilight of the gods" that ends with the Earth incinerated and reborn. Fewer have come across the concept of Fimbulvetr, the "mighty winter" that heralds the final battle of the gods.
The Prose Edda, a thirteenth-century transcription of Norse mythology, describes Fimbulvetr in vivid detail. “Then snow will drift from all directions," the Edda predicts. "There will then be great frosts and keen winds. The sun will do no good. There will be three of these winters together and no summer between.” According to the Poetic Edda, a collection of poems also committed to writing in the thirteenth century, “The sun turns black . . . The bright stars vanish from the sky.”
These precise descriptions of an apocalyptic winter have no parallel in other religious texts or mythical traditions. Instead, they echo the sixth-century reports of Cassiodorus, Procopius, and other astonished observers of real environmental transformations. Scandinavians fleeing their homes amid catastrophic cooling may well have felt like they were living through a preview of the apocalypse.
The trauma caused by sixth-century environmental changes may therefore be imprinted on Norse mythology. Ideas of a new world in the wake of Ragnarök may also reflect the consequences of real events, such as the new settlements and cultures that emerged amid climatic cooling.
Can these ancient calamities offer any lessons for our warmer future? Perhaps. They suggest, for example, that complex, densely populated societies, far from being insulated from the effects of climate change, may actually be most at risk. When populations brush up against the carrying capacity of agricultural land, sudden environmental shifts can be catastrophic. In these situations, societies already embroiled in resource-draining wars could be particularly vulnerable. The consequences of sixth-century cooling hint, also, that responses to even short-lived climatic upheavals can profoundly alter cultures in ways that endure for centuries, or even millennia.
Ancient societies, of course, have little similarity to our own. Yet their struggles in periods of dramatic climate change may still shed some light on our prospects in a warming world. To understand the future, we would be well served to look back at the distant past.
Abbott, Dallas H., Dee Breger, Pierre E. Biscaye, John A. Barron, Robert A. Juhl, and Patrick McCafferty. "What caused terrestrial dust loading and climate downturns between AD 533 and 540?" Geological Society of America Special Papers 505 (2014): 421-438.
Arjava, Antti. "The mystery cloud of 536 CE in the Mediterranean sources." Dumbarton Oaks Papers 59 (2005): 73-94.
Axboe, Martin. "The year 536 and the Scandinavian gold hoards." Medieval Archaeology 43 (1999).
Gräslund, Bo, and Neil Price. "Twilight of the gods? The ‘dust veil event’ of AD 536 in critical perspective." Antiquity 86:332 (2012): 428-443.
Hamacher, Duane W. "Comet and meteorite traditions of Aboriginal Australians." Encyclopaedia of the History of Science, Technology, and Medicine in Non-Western Cultures (2014): 1-4.
Widgren, Mats. "Climate and causation in the Swedish Iron Age: learning from the present to understand the past." Geografisk Tidsskrift-Danish Journal of Geography 112:2 (2012): 126-134.
Dr. Tim Newfield, Princeton University, and Dr. Inga Labuhn, Lund University.
Carolingian mass grave, Entrains-sur-Nohain, INRAP.
Will climate change trigger widespread food shortages and result in huge excess mortality in our future? Many historians have argued that it has done so before. Anomalous weather, abrupt climate change, and extreme dearth often work together in articles and books on early medieval demography, economy and environment. Few historians of early medieval Europe would now doubt that severe winters, droughts and other weather extremes led to harvest failures and, through those failures, food shortages and mortality events.
Most remaining doubters adhere to the idea that food shortages had causes internal to medieval societies. Instead of extreme weather or abrupt climate change, they blame accidents of (population) growth, deficient agrarian technology, unequal socioeconomic relations and weak institutions. Yet only rarely have they stolen the show or dominated the scholarship. For example, Amartya Sen’s “entitlement approach” to subsistence crises, which assigns primary importance to internal processes, has made few inroads in the literature on early medieval dearth, although in later periods it has many adherents.
Of course, the idea that big events have a single cause – monocausality, in other words – rarely convinces historians for long. Famine theorists and historians of other eras and world regions now argue that neither external forces such as weather, nor internal forces such as entitlements, alone capture the complexity of food shortages. They propose that these two explanatory mechanisms, often labeled “exogenous” and “endogenous,” respectively, should not be considered independent of one another or mutually exclusive. To them, periods of dearth can be explained by environmental anomalies, like unusual and severe plant-damaging weather, that coincide with socioeconomic vulnerability and declining (for most people) entitlement to food.
These explanations are more convincing. It seems that diverse factors acted in concert to cause, prolong and worsen food shortages. But proof for complex explanations for dearth in the distant past is hard to come by. Though they can be misleading, simpler, linear explanations are much easier to pull out of the extant evidence. This is true even when the sources are plentiful, as they are, at least by early medieval standards, for some regions and decades of Carolingian Europe. Food shortages in the Carolingian period, especially those that occurred during the reign of Charlemagne, have attracted the attention of scholars since the 1960s.
Left: Bronze equestrian statuette of Charlemagne or possibly his grandson Charles the Bald (823-877). Discovered in Saint-Étienne de Metz and now in the Louvre. The figure is ninth century in date. The horse might be earlier and Byzantine. Charles the Bald ruled the western portion of the post-Verdun empire, although whether he was actually bald is still debated.
Right: A Carolingian denarius (812-814) depicting Charlemagne. The Charlemagne of the Charlemagne reliquary mask (Center) is handsomer. The coin, though, is contemporary, and the bust is from the mid-fourteenth century. Housed in the Aachener Dom’s treasury, the reliquary contains a skullcap thought to be that of the emperor.
For the Carolingian period, ordinances from the royal court, capitularies, reveal hoarding and speculation, and document official attempts to control the prices and movements of grain, while annalists and hagiographers recount severe winters and droughts. All of this evidence sheds light on dearth. Yet the legislative acts point to internal pressures on food supply, while the narrative sources highlight external ones. As we have seen, neither pressure adequately explains subsistence crises alone.
Unfortunately, however, we rarely have evidence for endogenous and exogenous factors at the same time. Around the year 800, when Leo III crowned Charlemagne imperator, most evidence for dearth comes from the capitularies. Before and after, narrative evidence dominates. So Charlemagne’s food shortages appear to have had internal drivers, and Charles the Bald’s external ones. Or so the written sources lead us to believe.
Carolingian Europe as of August 843 following the Treaty of Verdun. Under rex and imperator Charlemagne (742-814), Carolingian territory stretched to include the area of Europe outlined here.
Fortunately, evidence from other disciplines allows historians to fill in some of the gaps. External pressures are easier to establish by turning to the palaeoclimatic sciences. Using them, we are beginning to rewrite the history of continental European dearth, weather and climate from 750 to 950 CE. We are working on a new study that combines a near-exhaustive assessment of Carolingian written evidence for subsistence crises and weather with scientific evidence for changes in average temperature, precipitation, and volcanic activity (which can influence climate).
We are trying to answer some big questions, such as: What role did droughts, hard winters and extended periods of heavy rainfall have in sparking, prolonging or worsening Carolingian food shortages? Were these external forces the classic triggers of dearth that many early medievalists think they were?
Indicators of past climate embedded in trees and ice can test and corroborate observations of anomalous temperature and precipitation. For instance, the droughts of 794 and 874 CE, documented respectively in the Annales Mosellani and Annales Bertiniani, show up in the tree ring-based Old World Drought Atlas (OWDA, see below). Additionally, as McCormick, Dutton and Mayewski demonstrated, multiple severe Carolingian winters also align fairly neatly with atmosphere-clouding Northern Hemisphere volcanism reconstructed using the GISP2 Greenlandic ice core.
The Old World Drought Atlas (OWDA) for 794 and 874. Negative values indicate dry conditions, positive values indicate wet conditions (from Cook et al. 2015).
By marrying written and natural archives, we can refine our understanding of the scale and extent of the weather extremes that coincide with Carolingian periods of dearth. Yet instead of simply providing answers, our integrated data are raising questions, and pushing us towards a messier history of early medieval food shortage. This is because the independent lines of evidence often do not agree. For example, only two of the 15 driest years between 750 and 950 CE in the OWDA coincide with drought in Carolingian sources.
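For readers curious about the mechanics of this kind of comparison, the overlap check can be sketched in a few lines of code. This is purely illustrative: the drought-index values below are hypothetical placeholders, not real OWDA data, and only 794 and 874 CE come from the documented droughts mentioned above.

```python
# Hypothetical sketch: cross-checking written drought reports against a
# tree-ring drought index. Values are invented placeholders, NOT real OWDA data.

# Drought years reported in Carolingian annals (from the text above).
documented_droughts = {794, 874}

# Pretend reconstruction: (year, drought-index value); more negative = drier.
owda_like = [(760, -1.9), (794, -2.4), (810, 0.3),
             (851, -2.1), (874, -2.6), (900, -0.2)]

# Pick the driest reconstructed years (most negative index values).
driest = {year for year, value in sorted(owda_like, key=lambda p: p[1])[:4]}

# Which driest reconstructed years are also reported droughts?
overlap = sorted(driest & documented_droughts)
print(overlap)  # → [794, 874]
```

In the real comparison the two lines of evidence agree far less often than in this toy example, which is precisely the dissonance discussed below.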
Admittedly, some of this dissonance may be artificial. The written record for weather and dearth is incomplete: some places and times during the Carolingian era, broadly defined as it is here, are poorly documented. Reported drought years can also appear moderately wet in the tree-based OWDA in some Carolingian regions (parts of northern Italy and Provence in 794 and 874, for instance).
Moreover, the detailed or “high-resolution” palaeoclimatology available now for early medieval Europe is much better for some regions than others. Tree-ring series extending back to 750 presently exist for few European regions. It is simply not possible to precisely pair some reported weather extremes or dearths to palaeoclimate reconstructions. Indeed, spatially the two lines of evidence can be mismatched. They can also be seasonally inconsistent, as the trees tell us far less about temperature and precipitation in the winter than they do for the summer.
Matches between historical and scientific evidence are therefore generally limited to the growing seasons, in places where written sources and palaeoclimate data overlap. That is enough to yield some surprising results. When the written record is densest, there is natural evidence for severe weather and rapid climate change, but not for food shortages.
Take the dramatic drop in average temperatures registered in European trees at the opening of the ninth century. According to the 2013 PAGES 2K Network European temperature reconstruction, temperatures were cooler around the time of Charlemagne’s coronation than at any other time between the mid-sixth and early eleventh centuries. This dramatic cooling aligns well with a relatively small Northern Hemisphere volcanic eruption detected in the recent ice-core record of volcanism compiled by Sigl and colleagues. The eruption would have ejected sunlight-scattering sulfur aerosols into the atmosphere. Notably, larger events in the Carolingian era, like those of 750, 817 and 822, clearly had less of an influence on European temperature. The cold of 800 is equally pronounced but less unusual in a tree-based temperature reconstruction from the Alps; in this series, the late 820s are markedly cooler.
Documentary sources register the falling temperatures. The Carolingian Annales regni francorum report severe growing-season frosts (aspera pruina) in 800. The Irish Annals of Ulster document a difficult and mortal winter in an entry quite possibly misdated in the Hennessy edition at 798 (799 or the 799/800 winter is more likely). Yet surprisingly, there is no contemporary record of food shortages in Europe.
Top: European Temperature Reconstruction, 0-2000 CE (data from Pages 2K Consortium, 2013).
Bottom figure, panels from top to bottom: Sigl et al 2015 ice-core record of Global Volcanic Forcing (GVF); PAGES 2K Consortium 2013 European temperatures (red); Büntgen et al 2011 Alpine temperature reconstruction (burgundy); written evidence for food shortages, both famines (F) and lesser shortages (LS). ‘W’ indicates no evidence for dearth but evidence for extreme weather. Between 750 and 950 we have identified 23 food shortages: 12 spatially and temporally circumscribed lesser shortages and 11 large multi-year famines.
Scholars tend to focus on instances when the written evidence for dearth and the natural evidence for anomalous weather align tidily. It seems that just as often, however, the two lines of evidence do not match so neatly. Severe weather may not always have triggered dearth in the early Middle Ages. Contemporary peoples could apparently cope with weather extremes in ways that allowed them to escape food shortages.
Early medieval vulnerability to external forces of dearth seems to have varied over space and time. We need to investigate the contrasting abilities of peoples from different early medieval regions and subperiods, participating in distinct agricultural economies with their own agrarian technologies, to withstand plant-damaging environmental extremes.
Several studies already suggest early medievals were capable of responding to gradual climate change. But to argue that they were not rigid or helpless when faced with marked seasonal temperature or precipitation anomalies, we must first identify, from sparse sources, potential moments of resilience. In this we run the risk of reading too much into absences of evidence. Yet the conclusion seems inescapable: when written sources are relatively abundant and there is no record of dearth during notable deviations in temperature and precipitation, early medievals must have adapted successfully.
Going forward, we must identify both moments and mechanisms of early medieval resilience in the face of climate change. Teasing these out from diverse sources might be tough going, but these elements are missing from the history of early medieval dearth and climate. Their omission has allowed for misleadingly neat histories of climate change and disaster in the period. Similar problems might well plague other histories that too clearly link climate changes to food shortages and mortality crises. Research that complicates these links could offer compelling new insights about our warmer future.
Authors' note: this is a short sampling of a much longer and more detailed multidisciplinary examination of Carolingian dearth, weather and climate, currently in preparation.
P. Bonnassie, “Consommation d’aliments immondes et cannibalisme de survie dans l’Occident du Haut Moyen Âge” Annales: Économies, Sociétés, Civilisations 44 (1989), pp. 1035-1056.
U. Büntgen et al, “2,500 Years of European Climate Variability and Human Susceptibility” Science 331 (2011), pp. 578-582.
U. Büntgen and W. Tegel, “European Tree-Ring Data and the Medieval Climate Anomaly” PAGES News 19 (2011), pp. 14-15.
F. Cheyette, “The Disappearance of the Ancient Landscape and the Climatic Anomaly of the Early Middle Ages: A Question to be Pursued” Early Medieval Europe 16 (2008), pp. 127-165.
E. Cook et al, “Old World Megadroughts and Pluvials during the Common Era” Science Advances 1 (2015), e1500561.
S. Devereux, Theories of Famine (Harvester Wheatsheaf, 1993).
R. Doehaerd, Le Haut Moyen Âge occidental: Economies et sociétés (Nouvelle Clio, 1971).
P.E. Dutton, “Charlemagne’s Mustache” and “Thunder and Hail over the Carolingian Countryside” in his Charlemagne’s Mustache and Other Cultural Clusters of a Dark Age (Palgrave, 2004), pp. 3-42, 169-188.
M. McCormick, P.E. Dutton and P. Mayewski, “Volcanoes and the Climate Forcing of Carolingian Europe, A.D. 750-950” Speculum 82 (2007), pp. 865-895.
T. Newfield, “The Contours, Frequency and Causation of Subsistence Crises in Carolingian Europe (750-950)” in P. Benito i Monclús ed., Crisis alimentarias en la edad media: Modelos, explicaciones y representaciones (Editorial Milenio, 2013), pp. 117-172.
PAGES 2k Network, “Continental-Scale Temperature Variability during the Past Two Millennia” Nature Geoscience 6 (2013), pp. 339-346.
K. Pearson, “Nutrition and the Early Medieval Diet” Speculum 72 (1997), pp. 1-32.
A. Sen, Poverty and Entitlements: An Essay on Entitlement and Deprivation (Oxford University Press, 1981).
M. Sigl et al, “Timing and Climate Forcing of Volcanic Eruptions for the Past 2,500 Years” Nature 523 (2015), pp. 543-549.
P. Slavin, “Climate and Famines: A Historical Reassessment” WIREs Climate Change 7 (2016), pp. 433-447.
A. Verhulst, “Karolingische Agrarpolitik: Das Capitulare de Villis und die Hungersnöte von 792/793 und 805/806” Zeitschrift für Agrargeschichte und Agrarsoziologie 13 (1965), pp. 175-189.
Dr. Bathsheba Demuth, Brown University.
The Greenlandic coast. Source: TheBrockenInaGlory, Wikimedia Commons, 2005, commons.wikimedia.org/wiki/File:Greenland_coast.JPG
In the year 1001 CE, Leif Erikson made landfall in Greenland, and traded with people who “in their purchases preferred red cloth; in exchange they had furs to give.” The Vikings called these people Skraelings. Present-day archeologists and historians call them the Thule. At its height, Thule civilization spread from its origins along the Bering Strait across the Canadian Arctic and into Greenland. The ancestors of today’s Inuit and Inupiat, the Thule accomplished what Erikson and subsequent generations of Europeans never managed: living in the high Arctic without supplies of food, technology, and fuel from more temperate climates.
The Thule left archeological evidence of a technologically sophisticated, vigorous people. They invented the umiak, an open walrus-hide boat so large that it was sometimes equipped with a sail. These boats, when used alongside small, nimble kayaks, made the Thule formidable marine-mammal hunters. On land, they harnessed dogs to sleds and built homes half-underground, insulated by earth and beamed with whale bones.
People did inhabit the high North American Arctic before the Thule. Their immediate predecessors, called the Dorset by archeologists, were expert carvers, and there are signs of other cultures that date back at least five thousand years. But the Thule appear to have been a particularly robust society, one that inhabited thousands of challenging Arctic miles. Eventually, they even traded with Europeans for metal tools, sending walrus ivory as far abroad as Venice.
Thule migration routes from the Bering Strait east. Map credit: anthropology.uwaterloo.ca/ArcticArchStuff
In the twentieth century, many archeologists linked the success of the Thule to the climate. In this view, rapid Thule expansion coincided with the Medieval Warm Period in the years between 1000 and 1300. The Thule were expert whalers, especially of bowhead whales. This slow species makes for good prey. Their 100-ton bodies can be fifty percent fat by volume, giving people ample calories to eat and burn through long winters. With the slight increase in temperature during the Medieval Warm Period, the theory went, the range of the bowhead whale expanded across newly ice-free waters. Atlantic and Pacific bowhead populations eventually met in the Arctic Ocean north of Canada, offering an uninterrupted banquet of blubber to hunters.
The Thule, in this view, were simply whale hunters who followed the migration of their prey in a warming climate. Environmental conditions, not a sophisticated culture, were the key explanation for their success. Emphasizing climate as the cause of migration and social success reduced the achievements of the Thule, essentially, to those of their prey.
However, twenty-first century evidence is changing this account of Thule migration. In 2000, Robert McGhee questioned the validity of the radiocarbon dates that helped establish Thule expansion as an eleventh-century phenomenon. He proposed the 1200s as the earliest date of migration. Then, genetic tests by marine biologists showed that Atlantic and Pacific bowhead whales did not mix their populations during the Medieval Warm Period, meaning that there was a substantial gap in whaling possibilities on the Arctic coast.
Something more complicated than just following the blubber drove the Thule eastward. McGhee speculated that communities moved for iron, which is in short supply in the Arctic. Thule hunters learned from the Dorset people of a deposit left by the Cape York meteorite. They colonized huge territories to secure their access to this precious resource from outer space. Other specialists theorized that population pressure, overhunting, or warfare led the Thule to migrate east.
Thule archeological site, with whalebone beams among flooring stones. Photo credit: anthropology.uwaterloo.ca/ArcticArchStuff
The ongoing work of Canadian archeologists T. Max Friesen and Charles D. Arnold seems to confirm that we must look beyond simple climatic explanations for the Thule expansion. Working on Beaufort Sea and Amundsen Gulf sites, the pair established that there was no definitive Thule occupation in this part of the western Arctic prior to the thirteenth century. Because any Thule migrants would have had to pass through these points as they moved east, their research indicates that the Thule civilization was only beginning its continental spread around the year 1200, well into the period of warming. The climate may have helped the Thule quickly spread toward Greenland, but the onset of the Medieval Warm Period did not automatically draw people eastward.
Moreover, the work of other archeologists on the Melville Peninsula, along Baffin Bay, indicates that the Medieval Warm Period was not always so warm. Some areas of the Arctic saw slight temperature increases, but in general the millennium was cooler than those that preceded it. In places, the effects of the so-called Little Ice Age began a century or two before they were evident across the globe, meaning the Thule adapted not to a warmer Arctic, but to a colder one. This cooling was more apparent in the west, where the team found fewer Thule sites but also more stability, both in the climate and in the record of human occupation. To the east of the Melville Peninsula, where temperatures did warm, the climate was also more variable – adding a new set of complexities to social and economic life. The move into the central Arctic, therefore, reflected forces other than climate.
Beginning in the fifteenth century, Thule culture fragmented, specialized, and eventually emerged as distinct contemporary Inuit and Inupiat groups. The Little Ice Age is often given as the reason for this disintegration. Yet the work by Finkelstein, Ross, and Adams indicates that, while the Thule abandoned some sites due to cooling trends, this did not hold in all cases. Other causes, including increased contact with Europeans and their infectious diseases, might have had more to do with the disintegration in some locations.
Overall, the new vision of Thule prominence in the Arctic makes their rise shorter, but even more impressive. And if the Thule began their migration only in 1200, it seems unlikely that they spread east simply to find iron, which would have required only smaller-scale movements to precise locations. Instead, the Thule developed a thriving, intricate network of settlements across the Arctic. For Friesen and Arnold, this is evidence that the Thule expanded in order to recreate the ideological and economic lives that they had enjoyed in their origins along the Bering Strait. And in just a century they did, not only by inhabiting land from the Bering Strait to Greenland, but through explorations to the northern edges of the continent.
All of this also helps us reinterpret a well-known tale from the Viking exploration of the Arctic. When Leif Erikson’s sister Freydis frightened off a band of Skraelingar in the early eleventh century by striking “her breast with the naked sword” of a fallen Viking, she was likely not fighting the Thule, as scholars have assumed. Perhaps it was the Dorset people that “were frightened, and rushed off in their boats.” The Thule, at least, were likely still a century away from the eastern Canadian coastline. They were not easily daunted either by a shifting climate or by Viking weapons.
Quotes from the Saga of Erik the Red, English translation by J. Sephton, can be found here: http://www.sagadb.org/eiriks_saga_rauda.en
Friesen, T. Max and Charles D. Arnold. “The Timing of the Thule Migration: New Dates from the Western Canadian Arctic,” American Antiquity 73 (2008): 527-538.
Finkelstein, S.A., J.M. Ross, and J.K. Adams. “Spatiotemporal Variability in Arctic Climates of the Past Millennium: Implications for the Study of Thule Culture on Melville Peninsula, Nunavut,” Arctic, Antarctic, and Alpine Research 41 (2009): 442-454.
McGhee, Robert. “Radiocarbon Dating and the Timing of the Thule Migration,” in Appelt, M., Berglund, J., and Gulløv, H.C., eds. Identities and Cultural Contacts in the Arctic: Proceedings from a Conference at the Danish National Museum. Copenhagen (2000): 181-191.
Morrison, David. “The Earliest Thule Migration.” Canadian Journal of Archaeology 22 (1999): 139-156.
Betts, Matthew, and T. Max Friesen, “Quantifying Hunter-Gatherer Intensification: A Zooarchaeological Case Study from Arctic Canada,” Journal of Anthropological Archaeology 23 (2004): 357-384.
Dyke, Arthur S., James Hooper, and James M. Savelle. “A History of Sea Ice in the Canadian Arctic Archipelago based on Postglacial Remains of the Bowhead Whale (Balaena mysticetus)”, Arctic 49 (1996): 235-255.
Park, Robert W. “The Dorset-Thule Succession Revisited,” in Appelt, M., Berglund, J., and Gulløv, H.C., eds. Identities and Cultural Contacts in the Arctic: Proceedings from a Conference at the Danish National Museum. Copenhagen (2000): 192-205.
Dr. Gabriel Henderson, Aarhus University
Counting in everyday life is a relatively straightforward affair: one, two, three, and on and on. Less simple is the process of reliably counting the number of sunspots on the surface of the sun. Sunspots are darkened areas on the solar surface. In Europe, their existence had been known since at least the early 17th century, and some of the larger sunspots were probably noted long before Galileo. Elsewhere, sunspot counts were maintained for much longer. Counting these darkened areas is one of the most effective ways to establish a record of the evolution of solar behavior. Not only do sunspot observations provide crucial information about changes in the sun’s magnetic field, they strongly correlate with long-term fluctuations in the amount of energy released by the sun – the so-called solar cycle.
Yet, in the 1970s, counting sunspots signified something much more dramatic and nefarious about the history of science itself. In these years, John “Jack” Eddy, an astrophysicist with the National Center for Atmospheric Research, began to scour old, dusty books in library basements to resuscitate a long-forgotten event in the history of solar behavior, behavior that seemed completely at odds with the prevailing orthodox understanding of the sun. Despite what appeared to be the historic and predictable vacillation in the number of sunspots every eleven years, a regularity known to exist since the mid-19th century, Eddy noticed in his records what appeared to be the virtual absence of sunspots between 1645 and 1715.
This curious blemish in the solar record was no small discovery. “If it really happened,” Eddy noted in one of his earliest talks on the matter, “we should recognize it as perhaps the most drastic thing that has ever happened to the sun since we began observing it and start including it in our work on the solar cycle." (Eddy, 1974) The implication was obvious: if the sun acted regularly and predictably every eleven years or so, how does one explain the disappearance of sunspots for almost a century?
The lack of sunspots was not Eddy’s discovery, at least not in the purest sense. What he called the Maunder Minimum had been observed almost a century earlier by British astronomer Edward Walter Maunder, who began to publish his findings during the 1890s. To Eddy’s consternation, however, Maunder’s discovery appeared to have been forgotten by the astrophysics community. How could this be? Scientific observations and facts don’t just disappear, do they? To Eddy, the answer was that they could. A cursory glance at the matter yielded at least one possible reason why: Maunder was not vocal enough about his discovery. But further research yielded a much richer narrative, one that compelled Eddy to examine the deeply held assumptions of his own profession.
Eddy’s investigation, as it turned out, showed that Maunder was not forgotten merely because of his inability to properly disseminate his finding about sunspots, but rather because the astrophysical community had – for almost a century – allowed their preexisting assumptions to blind them to new ideas. A conspiracy had taken place, Eddy argued, one based in what appeared to be a universal belief that the sun acted regularly and predictably according to the solar cycle – what he called the principle of solar uniformitarianism.
The strength of Maunder’s observations was insufficient to break the universally-accepted canon of solar regularity. Instead of acknowledging and understanding an anomaly in solar behavior, “solar physicists have largely continued to ignore or forget the anomaly, if real,” Eddy insisted in the spring of 1976. “Some have institutionalized the solar cycle and made a profession of extending it into the past and predicting in the future; ignoring, doubting, or intentionally diluting the claims of Maunder of this skeleton in the closet of solar physics." (Eddy, 1976)
This was a dramatic claim, but one that became inextricably interwoven with Eddy’s public admonishment – if not condemnation – of professional orthodoxy within science itself. Eddy wrote about the topic, gave interviews, and addressed scientific and popular audiences – all in the hope that his tempest of activity would lead to Maunder’s long-overdue recognition. But perhaps more poignantly, Eddy portrayed himself as the detective who pulled back the curtain to reveal the biases and prejudices that prevented what he considered to be genuine scientific progress. For him, contemporary astrophysics was a stale and unstable artifice, and only through the work of pioneers like himself – and the forgotten Maunder – could one dispel the fashionable tropes that dictated popular understanding of scientific progress. As he described to an audience within the Boston Museum of Science in May 1978, “In fact, much of what we know, or think we know is not that way at all. And if we have the heart and stomach to look down at it closely, is based upon a shaky and often overextended framework of assumptions – cantilevered scaffolds of bamboo poles and weathered twine.” (Eddy, 1978)
This is an important story in part because it helps to explain why Eddy spoke about sunspots with what historian Karl Hufbauer referred to as “a missionary’s zeal.” (Hufbauer, 1991) But what else does the story show? It certainly does not mean that Eddy’s pioneering work led to a wholesale abandonment of the idea that the sun (for the most part) behaves in a regular, cyclical fashion. That interpretation would be too extreme. However, it would not be too extreme to argue that he used what he considered a crime against Maunder to justify his own predilections as a scientist. Throughout his professional life, he harbored a deep skepticism toward what he saw as scientists’ proclivity for unoriginality and challenged others’ apparent unwillingness to probe the very depths of their own professional, and sometimes erroneous, assumptions. Eddy was comfortable opening the closet.
Eddy, John, "The Long Solar Winter," 1974 December 5, Box 2, John Eddy Papers, National Center for Atmospheric Research.
Eddy, John, "Maunder Minimum," 15 April 1976, Box 3, JEP.
Eddy, John, "The Changing Sun," 28 May 1978, Box 3, JEP.
Hufbauer, Karl. Exploring the Sun: Solar Science Since Galileo. Baltimore: Johns Hopkins University Press, 1991.
It's Maunder Minimum Month at HistoricalClimatology.com. This is our first of two feature articles on the Maunder Minimum. The second, by Gabriel Henderson of Aarhus University, will examine how astronomer John Eddy developed and defended the concept.
Although it may seem like the sun is one of the few constants in Earth’s climate system, it is not. Our star undergoes both an 11-year cycle of waning and waxing activity, and a much longer seesaw in which “grand solar minima” give way to “grand solar maxima.” During the minima, which set in approximately once per century, solar radiation declines, sunspots vanish, and solar flares are rare. During the maxima, by contrast, the sun crackles with energy, and sunspots riddle its surface.
The most famous grand solar minimum of all is undoubtedly the Maunder Minimum, which endured from approximately 1645 until 1720. It was named after Edward Maunder, a nineteenth-century astronomer who painstakingly reconstructed European sunspot observations. The Maunder Minimum has become synonymous with the Little Ice Age, a period of climatic cooling that, according to some definitions, endured from around 1300 to 1850, but reached its chilliest point in the seventeenth century.
During the Maunder Minimum, temperatures across the Northern Hemisphere declined, relative to twentieth-century averages, by about one degree Celsius. That may not sound like much – especially in a year that is, globally, still more than one degree Celsius hotter than those same averages – but consider: seventeenth-century cooling was sufficient to contribute to a global crisis that destabilized one society after another. As growing seasons shortened, food shortages spread, economies unraveled, and rebellions and revolutions were quick to follow. Cooling was not always the primary cause for contemporary disasters, but it often played an important role in exacerbating them.
Many people – scholars and journalists included – have therefore assumed that any fall in solar activity must lead to chillier temperatures. When solar modelling recently predicted that a grand solar minimum would set in soon, some took it as evidence of an impending reversal of global warming. I even received an email from a heating appliance company that encouraged me to hawk their products on this website, so our readers could prepare for the cooler climate to come! Of course, the warming influence of anthropogenic greenhouse gases will overwhelm any cooling brought about by declining solar activity.
In fact, scientists still dispute the extent to which grand solar minima or maxima actually triggered past climate changes. What seems certain is that especially warm and cool periods in the past overlapped with more than just variations in solar activity. Granted, many of the coldest decades of the Little Ice Age coincided with periods of reduced solar activity: the Spörer Minimum, from around 1450 to 1530; the Maunder Minimum, from 1645 to 1720; and the Dalton Minimum, from 1790 to 1820. However, one of the chilliest periods of all – the Grindelwald Fluctuation, from 1560 to 1630 – actually unfolded during a modest rise in solar activity. Volcanic eruptions, it seems, also played an important role in bringing about cooler decades, as did the natural internal variability of the climate system. Both the absence of eruptions and a grand solar maximum likely set the stage for the Medieval Warm Period, which is now more commonly called the Medieval Climate Anomaly.
This gets to the heart of what we actually mean when we use a term like “Maunder Minimum” to refer to a period in Earth’s climate history. Are we talking about a period of low solar activity? Or are we referring to an especially cold climatic regime? Or are we talking about chilly temperatures and the changes in atmospheric circulation that cooling set in motion? In other words: what do we really mean when we say that the Maunder Minimum endured from 1645 to 1720? How does our choice of dates affect our understanding of relationships between climate change and human history in this period?
To find an answer to these questions, we can start by considering the North Sea region. This area has yielded some of the best documentary sources for climate reconstructions. They allow environmental historians like me to dig into exactly the kinds of weather that grew more common with the onset of the Maunder Minimum. In Dutch documentary evidence, for example, we see a noticeable cooling trend in average seasonal temperatures that begins around 1645. On the surface of things, it seems like declining solar activity and climate change are very strongly correlated.
And yet, other weather patterns seem to change later, one or two decades after the onset of regional cooling. Weather variability from year to year, for example, becomes much more pronounced after around 1660, and that erraticism is often associated with the Maunder Minimum. Severe storms were more frequent only by the 1650s or perhaps the 1660s, and again, such storms are also linked to the Maunder Minimum climate. In the autumn, winter, and spring, easterly winds – a consequence, perhaps, of a switch in the setting of the North Atlantic Oscillation – increased at the expense of westerly winds in the 1660s, not twenty years earlier.
A depiction of William III boarding his flagship prior to the Glorious Revolution of 1688. Persistent easterly, "Protestant" winds brought William's fleet quickly across the Channel, and thereby made possible the Dutch invasion of England. For more, read my forthcoming book, "The Frigid Golden Age." Source: Ludolf Bakhuizen, "Het oorlogsschip 'Brielle' op de Maas voor Rotterdam," 1688.
All of these weather conditions mattered profoundly for the inhabitants of England and the Dutch Republic: maritime societies that depended on waterborne transportation. Rising weather variability made it harder for farmers to adapt to changing climates, but often made it more profitable for Dutch merchants to trade grain. More frequent storms sank all manner of vessels but sometimes quickened journeys, too. Easterly winds gave advantages to Dutch fleets sailing into battle from the Dutch coast, but westerly winds benefitted English armadas. If we define the Maunder Minimum as a climatic regime, not (just) a period of reduced sunspots, and if we care about its human consequences, what should we conclude? Did the Maunder Minimum reach the North Sea region in 1645, or 1660?
These problems grow deeper when we turn to the rest of the world. Across much of North America, temperature fluctuations in the seventeenth century did not closely mirror those in Europe, and there was considerable diversity from one North American region to another. Tree ring data suggest that northern Canada experienced the cooling of the Maunder Minimum. Western North America also seems to have been relatively chilly in the seventeenth century, although chillier temperatures there probably did not set in during the 1640s.
By contrast, cooling was moderate or even non-existent across the northeastern United States. Chesapeake Bay, for instance, was warm for most of the seventeenth century, and only cooled in the eighteenth century. Glaciers advanced in the Canadian Rockies not in the seventeenth century, but rather during the early eighteenth century. Their expansion was likely caused by an increase in regional precipitation, not a decrease in average temperatures.
Still, the seventeenth century was overall chillier in North America than the preceding or subsequent centuries, and landmark cold seasons affected both shores of the Atlantic. The consequences of such frigid weather could be devastating. The first settlers to Jamestown, Virginia had the misfortune of arriving during some of the chilliest and driest weather of the Little Ice Age in that region. Crop failures contributed to the dreadful mortality rates endured by the colonists, and to the brief abandonment of their settlement in 1610.
Moreover, many parts of North America do seem to have warmed in the wake of the Maunder Minimum, in the eighteenth century. This too could have profound consequences. In the seventeenth century, settlers to New France had been surprised to discover that their new colony was far colder than Europe at similar latitudes. They concluded that its heavy forest cover was to blame, and with good reason: forests do create cooler, cloudier microclimates. Just as the deforestation of New France started transforming, on a huge scale, the landscape of present-day Quebec, the Maunder Minimum ended. Settlers in New France concluded that they had civilized the climate of their colony, and they used this as part of their attempts to justify their dispossession of indigenous communities.
Despite eighteenth-century warming in parts of North America, the dates we assign to the Maunder Minimum do look increasingly problematic when we look beyond Europe. If we turn to China, we encounter a similar story. Much of China was actually bitterly cold in the 1630s and early 1640s, before the onset of the Maunder Minimum elsewhere. This, too, had important consequences for Chinese history. Cold weather and precipitation extremes ruined crops on a vast scale, contributing to crushing famines that caused particular distress in overpopulated regions. The ruling Ming Dynasty seemed to have lost the “mandate of heaven,” the divine sanction that, according to Confucian doctrine, kept the weather in check. Deeply corrupt, riven by factional politics, undermined by an obsolete examination system for aspiring bureaucrats, and scornful of martial culture, the regime could adequately address neither widespread starvation, nor the banditry it encouraged.
Climatic cooling caused even more severe deprivations in neighboring, militaristic Manchuria. There, the solution was clear: to invade China and plunder its wealth. The first Manchurian raid broke through the Great Wall in 1629, a warm year in other parts of the Northern Hemisphere. Ultimately, the Manchus capitalized on the struggle between Ming and bandit armies by seizing China and founding the Qing (or "Pure") Dynasty in 1644.
China under the Ming Dynasty was arguably the most powerful empire of its time. Even as it unravelled in the early seventeenth century, its cultural achievements were impressive, as this painting of fog makes clear. Source: Anonymous, "Peach Festival of the Queen Mother of the West," early 17th century.
This entire history of cooling and crisis predates the accepted starting date of the Maunder Minimum. Yet, the fall of the Ming Dynasty unfolded in one relatively small part of present-day China. Average temperatures in that region reached their lowest point in the 1640s. By contrast, average temperatures in the Northeast warmed by the middle of the seventeenth century. Average temperatures in the Northwest also warmed slightly during the mid-seventeenth century, and then cooled during the late Maunder Minimum.
Smoothed graphs that show fluctuations in average temperature across centuries or millennia give the impression that dating decade-scale warm or cold climatic regimes is an easy matter. Actually, attempts to precisely date the beginning and end of just about any recent climatic regime are sure to set off controversy. This is not only because global climate changes have different manifestations from region to region, but also because climate changes, as we have seen, involve much more than shifts in average annual temperature. Did the Maunder Minimum reach northern Europe, for instance, when average annual temperatures declined, when storminess increased, when annual precipitation rose or fell, or when weather became less predictable?
Historians such as Wolfgang Behringer have argued that, when dating climatic regimes, we should also consider the “subjective factor” of human reactions to weather. For historians, it makes little sense to date historical periods according to wholly natural developments that had little impact on human beings. Maybe historians of the Maunder Minimum should consider not when temperatures started declining, but rather when that decline was, for the first time, deep enough to trigger weather that profoundly altered human lives. When we consider climate changes in this way, we may be more inclined to subjectively date climatic regimes using extreme events, such as especially cold years, or particularly catastrophic storms. Dating climate changes with an eye to human consequences does take historians away from the statistical methods and conclusions pioneered by scientists, but it also draws them closer to the subjects of historical research.
In my work, I do my best to combine all of these definitions, and incorporate many of these complexities. I date climatic regimes by considering their cause – solar, volcanic, or perhaps human – and by working with statisticians who can tell me when a trend becomes significant. However, I also try to consider the many different kinds of weather associated with a climatic shift, and the consequences that extremes in such weather could have for human beings.
As you might expect, this is not always easy. I have long held that the Maunder Minimum, in the North Sea region, began around 1660. Increasingly, I find it easier to begin with the broadly accepted date of 1645, but distinguish between different phases of the Maunder Minimum. An earlier phase marked by cooling might have started in 1645, but a later phase marked by much more than cooling took hold around 1660.
These are messy issues that yield messy answers. Yet we must think deeply about these problems. Not only can such thinking affect how we make sense of the deep past, but it can also provide new perspectives on modern climate change. When did our current climate of anthropogenic warming really start? At what point did it start influencing human history, and where? What can that tell us about our future? These questions can yield insights on everything from the contribution of climate change to present-day conflicts, to the timing of our transition to a thoroughly unprecedented global climate, to the urgency of mitigating greenhouse gas emissions.
Behringer, Wolfgang. A Cultural History of Climate. Cambridge: Polity Press, 2010.
Brooke, John. Climate Change and the Course of Global History: A Rough Journey. Cambridge: Cambridge University Press, 2014.
Coates, Colin and Dagomar Degroot, “‘Les bois engendrent les frimas et les gelées:’ comprendre le climat en Nouvelle-France." Revue d'histoire de l'Amérique française 68:3-4 (2015): 197-219.
Degroot, Dagomar. “‘Never such weather known in these seas:’ Climatic Fluctuations and the Anglo-Dutch Wars of the Seventeenth Century, 1652–1674.” Environment and History 20:2 (2014): 239-273.
Eddy, John A. “The Maunder Minimum.” Science 192:4245 (1976): 1189-1202.
Parker, Geoffrey. Global Crisis: War, Climate Change and Catastrophe in the Seventeenth Century. London: Yale University Press, 2013.
White, Sam. “Unpuzzling American Climate: New World Experience and the Foundations of a New Science.” Isis 106:3 (2015): 544-566.
Dr. Alan MacEachern, University of Western Ontario.
Western's Archives and Research Collections Centre (ARCC) storage room. Photo by Gabrielle Bossy.
In 2008, I had a meeting at the Environment Canada headquarters in Downsview, Ontario, and afterward staff gave me a tour. Since I’m a historian, they showed me the old stuff. Down in the basement – not quite the warehouse scene at the end of Raiders of the Lost Ark, but close enough – they led me along row after row of weather observations: all of the original paper forms and registers that since 1840 had been filled out by what would eventually be thousands of observers at thousands of weather stations across Canada. Environment Canada had long ago squeezed the quantitative data they wanted from the observations, and from it created an online National Climate Data and Information Archive. That may have actually put the physical collection more at risk; a teary librarian told me she worried she would return from vacation someday to find it had been thrown out. Staff were maintaining the collection as best they could, but they knew the facility was not up to archival standards – a massive steam pipe loomed menacingly nearby – and they were concerned about the lack of a long-term plan for it. The collection should rightly have gone to Library and Archives Canada (LAC), but in earlier decades the archives had expressed no interest in it and more recently had experienced an acquisitions freeze.
So without any real plan, let alone authorization, I offered to take the collection off Environment Canada’s hands.
Environment Canada weather stations, 1840-1960. Visualization by Josh MacFadyen, Arizona State University.
At the time, I was a dyed-in-the-wool environmental historian increasingly feeling that I had somehow neglected the most pressing environmental issue of our time, climate change. Helping protect a nationally-significant climate history collection seemed like good karma.
I went straight from Environment Canada to my university archives. Thankfully, a few years earlier the archives had moved into a new building containing a high density module capable of holding one million volumes. Thankfully, too, University Archivist Robin Keirstead was excited by the idea of having the collection come to Western University, so it could be better preserved, more accessible to researchers, and made available for teaching purposes. Robin and I formally contacted Environment Canada and LAC, expressing Western’s interest in receiving the collection.
It took years of negotiation, but what ultimately made the transfer happen was that some folks at Environment Canada thought these old records were priceless and others thought they were worthless, so both concluded it would be great if they were at Western.
In 2014, the collection arrived at Western on long-term loan – here is a full listing of it. There are several hundred volumes of correspondence, letterbooks, and journals related to Canadian meteorological and climatological history between 1828 and 1967. But the real jewels of the collection are the almost 900 archival boxes (an estimated 1.6 million pages) containing all of Environment Canada’s extant daily weather observations between 1840 and 1960. From what we could determine, this was the largest archival arrangement ever made between a Canadian university and the federal government.
Mission accomplished. …But now what?
“Super salubrious.” Howard D. Sloat, Jarvis, Ontario, August 1954, EC151, Environment Canada collection.
This was already a good news story as far as I was concerned, because the Environment Canada collection will be protected at archival standards indefinitely (presumably, until LAC is in a position to take it). But now that it was at my university, I wanted to see it used. I advertised its availability to researchers across Canada. I developed a climate history course that utilized it. And I considered what contributions neophyte climate history researchers – like my students, like me – could make with it.
To begin, we are focusing on the qualitative remarks that observers included alongside their quantitative data. Although Environment Canada long encouraged (or, in some eras, tolerated) observers’ remarks on such matters as extreme weather, farming conditions, and changing seasons, it had never figured out a way to utilize these remarks, including in its climate archive. This qualitative data remained untapped.
Students and I are working to change that. In the past year, we have begun creating a database of remarks from the collection. We are transcribing everything the observers thought worth observing (with the important exception that we are ignoring the hundreds of thousands of entries such as “Clear,” “Fair,” or “Rain”). There are many entries on crop conditions and the status of harvests, on smoke from forest fires, on Northern lights, on matters of local political or social interest. There are also many entries that offer insights into the history of the meteorological service itself.
“Hard maple in blossom. Oriole return. Swallows return. English Cherry blossom. Canaries return. Ice 3/16 inches ground. Orchards in blossom. Forest well leaved out. Fire flies seen. Crops all look well except corn it is yellow with the cold and wet.” Malcolm McDonald, Lucknow, Ontario, May 1902, EC172, Environment Canada collection.
But of special interest – both to the observers and to us – is phenological information. Phenology is the study of cyclical natural phenomena, and weather observers documented, often over the course of decades, the dates of ice break-up and freeze-up on rivers and lakes, when the first of various bird species appeared, when wildflowers bloomed, when spring peepers emerged. The observers were especially vigilant during what might be called the “phenological moment” of the late 19th and early 20th century, when Canadian individuals and learned societies became intent on gathering such information as a means of gaining biological and meteorological knowledge about their nation. With historians and climate scientists today seeking to verify older meteorological observations and to understand other ways of knowing climate, these observations assume new significance.
The database that Western History students and I are creating already has tens of thousands of tagged entries. In the near future, we will shift to the creation of a website that allows for geographical, temporal, and thematic searching of these observations, at micro to macro scales. Interested in Ajax, Ontario or in all of Canada? In your birthday or in a fifty-year timespan? In reports on earthquakes, orioles, or lilacs, or on all extreme weather, all fauna, all flora? We certainly hope to use this for research purposes, but our project’s ultimate goal is to make these observations available to climate researchers, and to the public, so that they can make findings of their own. More good karma – climate research requires it.
Contact me at email@example.com if you have questions about the Environment Canada collection or research access to it.