Dr. Ruth Morgan, Monash University
Non-tabular iceberg off Elephant Island in the Southern Ocean. Source: Andrew Shiva, Wikipedia.
Ice, or a lack of it, is an “icon” of anthropogenic climate change. Earlier this year, researchers reported that a rift in Antarctica’s fourth-largest ice shelf has accelerated and could soon calve a vast iceberg into the sea. After the collapse of the ice shelf, the glaciers that once sustained it will flow unimpeded into the sea. Glaciers like these, Mark Carey has observed, have become an “endangered species” of the Anthropocene. Yet only a few decades ago, Antarctic ice was the hero of a visionary episode in the planet’s recent “cryo-history”.
In October 1977, scientists met at Iowa State University to discuss the latest findings in the emerging field of “iceberg utilization”. Eager to promote the cause was conference co-sponsor Prince Mohammed al-Faisal of Saudi Arabia, who flew an iceberg weighing over two tonnes from the Portage Glacier Field near Anchorage, Alaska, to Ames, Iowa, for the occasion – producing at least 7 tonnes of carbon dioxide over the 5,000 km journey. One local couple, who brought plastic bags, a bucket, and an ice-pick to the iceberg’s unveiling, told the New York Times, “I don’t know what we’ll do with it – serve it in drinks, I guess. We’ll have a cocktail party”.
A series of US television news features documenting the Iceberg Utilization Conference, October 1977. Source: YouTube / Special Collections and University Archives, Iowa State University.
These stunts amused onlookers, but they were no laughing matter for the researchers studying the possibility of towing Antarctic icebergs to arid and semi-arid climes. Iceberg utilization was a tantalizing prospect for solving one of the world’s pressing problems: global water shortages. In their controversial study The Limits to Growth, the interdisciplinary research group the Club of Rome had earlier warned that the availability of fresh water was a limit to growth that “will be reached long before the land limit becomes apparent”. Bolstering this neo-Malthusian prediction were the widely reported droughts in the Sahel and Ukraine, and the failure of the Indian monsoon, during the early 1970s.
An excerpt from the public affairs program Dimension 5, which aired on WOI-TV in central Iowa, USA, October 1977. Panellists include Prince Mohammed al-Faisal of Saudi Arabia, Henri Bader, Daniel J. Zaffarano, Richard L. Cameron, and Ed Cronick. Source: YouTube / Special Collections and University Archives, Iowa State University.
These anxieties were the focus of the 1977 United Nations Conference on Water in Mar del Plata, Argentina, where fresh water was declared a “scarce asset” that demanded coordinated resource development and management. Among the options discussed to increase water supplies were so-called “complex technologies” and “non-conventional methods”, such as seawater desalination. By the late 1970s desalination was already well established in Kuwait, and Saudi Arabia was eager to replicate its neighbour’s success. Leading this mission (at least until Antarctic icebergs beckoned) was the head of the Saudi Saline Water Conversion Corporation: Prince Mohammed al-Faisal. He shared his vision with the Christian Science Monitor: “Over a period, we would hope to change the vegetation and climate in some coastal areas”.
The Prince’s idea was several decades in the making. The prospect of using icebergs to modify local climates and to provide endless water supplies to the world’s thirstiest regions had emerged in the decade after the Second World War. In a 1949 class at the Scripps Institution of Oceanography in California, oceanographer John Isaacs had speculated on the subject, and later expanded on his thinking in the February 1956 issue of Science Digest. He proposed floating an Antarctic iceberg along the Humboldt Current to the coast of southern California from where it could supply water to Los Angeles.
The feasibility of such a scheme had been confirmed in 1969, when glaciologist Willy Weeks and geophysicist Bill Campbell surprised even themselves by concluding that towing icebergs to arid lands was “within the reach of existing technology”. They based their calculations on a large tabular iceberg twice the size of the Great Pyramid of Giza, a form less likely to roll in transit and more likely to be found near the Antarctic than the Arctic. The optimum routes for towing such an iceberg, they suggested, were from the Amery Ice Shelf to southwestern Australia and from the Ross Ice Shelf to the Atacama Desert.
“Optimum towing paths between the Amery Ice Shelf and Australia and the Ross Ice Shelf and the Atacama Desert.” Fig. 8, Weeks and Campbell, 1973, p. 220.
In 1973, the National Science Foundation and the Rand Corporation sponsored a subsequent report on the feasibility of such a scheme for southern California. Antarctic icebergs could supply water for urban, industrial and agricultural demands, while helping to abate the growing thermal pollution of the industrialized region. According to the report’s estimates, towing an iceberg from the Ross Sea to the Pacific southwest would be significantly cheaper than inter-basin water transfers or desalination. Furthermore, nuclear energy could power the towing, avoiding the need for fossil fuels during a decade of uncertain oil supplies.
The possibility of endless water supplies was too good to ignore, and the Saudi prince assembled experts from around the world to advance the field of “iceberg utilization”. His 1977 conference in Iowa attracted scientists from arid and semi-arid countries such as Egypt, Greece and Libya, as well as nations with polar territories, such as Australia, Chile and Canada. Nearly three quarters of the attendees were from the United States, most of whom were associated with the military-industrial-academic complex. They included researchers from the Jet Propulsion Laboratory, Tetra Tech International, the Lawrence Berkeley Laboratory, the US Army Cold Regions Research and Engineering Laboratory, and the Naval Weapons Center.
The lone woman speaking at the conference was the pioneering meteorologist Joanne Simpson of the University of Virginia, Charlottesville. Simpson had been director of the Experimental Meteorology Laboratory of the National Oceanic and Atmospheric Administration and a member of the Weather Modification Advisory Board. Two decades of studying the intersections of cloud physics and hurricane research informed her comparison of Antarctic iceberg towing to cloud seeding, as well as her study of the atmospheric impacts of iceberg utilization. Although towing an iceberg would cost more than cloud seeding, she estimated that its meltwater would more than make up for the expense. In icebergs, Simpson also saw a means to mitigate the toll of tropical hurricanes: using an iceberg to lower the surface temperature of the ocean ahead of an advancing hurricane, she argued, would help to reduce its destructive winds.
“Illustration of possible new approach to the hurricane mitigation aspect of weather modification. Hurricanes are known to diminish in strength when they move over cooler water, here shown hypothetically to be supplied by a melting iceberg.” Source: Fig. 5, in Simpson, 1978, p. 865. Artist: Tom Henderson.
Simpson was well aware of the credibility gap that such endeavours faced. In 1978 she wrote, “For meteorology as a whole, public overheated controversy on weather modification gives the entire profession an image of ridiculous bumblers or even charlatans”. But the opportunity to “serve humanity” outweighed these concerns and she welcomed alternative modification methods.
Despite the promise of iceberg utilization, its potential impact on local climates became one of the many reasons why the vision did not become a reality. In Australia, for instance, enthusiastic plans for the continent’s southwest were rejected in the mid-1980s on the grounds that an iceberg “parked offshore for several years” might affect the regional climate in unexpected and unwanted ways. Peter Schwerdtfeger, the scheme’s Australian proponent, lamented that its feasibility lay not in science and technology, but in “politically and economically based decisions”. He remained confident, however, that iceberg utilization would occur when “individual nations recognise their obligations to the more thirsty segment of mankind” and choose to exploit the Antarctic icebergs that otherwise “melt pointlessly in the Southern Ocean”. According to this logic, the failure to take advantage of the icebergs was tantamount to wasting precious water resources.
The possibility of iceberg utilization was one of many post-war technological visions. The futurism and science fiction of the atomic age urged the exploration and exploitation of new planetary frontiers such as the deep ocean and outer space. In the Cold War context, measuring, monitoring and manipulating the physical environment on a global scale had the potential to fulfil both military and peaceful ambitions. The iceberg “visioneers” were bit players in a wider debate about the Earth’s future, one that pitted the constraints of ecological limits against the possibilities of technological innovation. Just as the atom offered an inexhaustible source of cheap energy, Antarctica was a cornucopia of renewable fresh water simply awaiting the application of human ingenuity. Four decades later, we are searching for ways to keep that water well and truly locked up.
Al-Nakib, Farah, Kuwait Transformed: A History of Oil and Urban Life (Palo Alto, CA: Stanford University Press, 2016).
Behrman, Daniel with John D. Isaacs, John Isaacs and His Oceans (Washington, DC: ICSU Press, 1992).
Carey, Mark, “The History of Ice: How Glaciers Became an Endangered Species,” Environmental History 12 (2007): 497-527.
Carey, Mark, M. Jackson, Alessandro Antonello and Jaclyn Rushing, “Glaciers, Gender, and Science: A Feminist Glaciology Framework for Global Environmental Change Research,” Progress in Human Geography 40, no. 6 (2016): 770-93.
Fleming, James R., Fixing the Sky: The Checkered History of Weather and Climate Control (New York: Columbia University Press, 2010).
Gosnell, Mariana, Ice: The Nature, the History, and the Uses of an Astonishing Substance (Chicago: University of Chicago Press, 2005).
Hamblin, Jacob Darwin, Arming Mother Nature: The Birth of Catastrophic Environmentalism (New York: Oxford University Press, 2013).
Harper, Kristine C., Make it Rain: State Control of the Atmosphere in Twentieth-Century America (Chicago: University of Chicago Press, 2017).
Hult, J.L. and N.C. Ostrander, Antarctic Icebergs as a Global Fresh Water Resource (Santa Monica, CA: Rand, 1973).
Husseiny, A.A. (ed.), Iceberg Utilization: Proceedings of the First International Conference and Workshops on Iceberg Utilization for Fresh Water Production, Weather Modification, and Other Applications, held at Iowa State University, Ames, Iowa, USA, October 2-6, 1977 (New York: Pergamon Press, 1978).
Jones, Toby Craig, Desert Kingdom: How Oil and Water Forged Modern Saudi Arabia (Cambridge, MA: Harvard University Press, 2010).
Leslie, Stuart W., The Cold War and American Science: The Military-Industrial-Academic Complex at MIT and Stanford (New York: Columbia University Press, 1993).
McCray, W. Patrick, The Visioneers: How a Group of Elite Scientists Pursued Space Colonies, Nanotechnologies and a Limitless Future (Princeton: Princeton University Press, 2013).
Rozwadowski, Helen M., “Arthur C. Clarke and the Limitations of the Ocean as a Frontier,” Environmental History (2012): 1-25.
Sabin, Paul, The Bet: Paul Ehrlich, Julian Simon, and Our Gamble over Earth’s Future (New Haven, CT: Yale University Press, 2013).
Schmidt, Jeremy J., Water: Abundance, Scarcity, and Security in the Age of Humanity (New York: NYU Press, 2017).
Schwerdtfeger, Peter, “The Development of Iceberg Research and Potential Applications,” Polar Geography and Geology 9, no. 3 (1985): 202-209.
Simpson, Joanne, “What Weather Modification Needs – A Scientist’s View,” Journal of Applied Meteorology 17 (1978): 858-66.
Sörlin, Sverker, “Cryo-History,” in The New Arctic, eds. Birgitta Evengård, Joan Nymand Larsen and Øyvind Paasche (New York: Springer, 2015), pp. 327-39.
Weeks, Wilford J. and William J. Campbell, “Icebergs as a Freshwater Source: An Appraisal,” Journal of Glaciology 12, no. 65 (1973): 207-33.
Patrick Gage, Georgetown University
People care about climate change when it affects them. That is why Pacific islanders fear rising sea levels more than the average American, and why many who live in coastal cities fear a projected increase in tropical cyclones more than those further inland. Yet the idea that an environmental change “over there” will not affect communities “here” actually makes little sense. History is rife with examples of human crises brought on by seemingly distant climatic events.
One of the clearest examples unfolded in late nineteenth-century Northeastern Brazil (Nordeste). A powerful El Niño-Southern Oscillation (ENSO) event warmed the waters of the equatorial Pacific Ocean, changing atmospheric circulation in ways that brought extreme rain shortages to Brazil, and ultimately launched the nation’s first rubber boom. The Grande Seca, or “Great Drought,” of 1877-1878 not only killed hundreds of thousands of northeasterners (nordestinos), but also sparked massive internal migration. The latter proved particularly problematic for the state of Ceará, from which thousands emigrated. Cearenses thus provided rubber barons in nearby Amazonas and Pará an invaluable supply of cheap labor, which they needed to meet growing demand. By 1900, the country exported more rubber than any other commodity except coffee. El Niño therefore shaped the history of Brazil.
ENSO events affect the global environment on an irregular basis. Typically, Peru’s cold Humboldt Current flows northward along the South American coast before easterly trade winds push it west along the equator. Warmed by the sun, its waters increase in temperature as they approach Indonesia, making the western Pacific hotter than the east. El Niño reverses these trends: trade winds and the Humboldt’s westward flow subside, westerly winds pick up, Kelvin waves carry warm water from Asia to South America in a process called “advection,” and hot, humid air masses travel toward Peru and Ecuador. Sea temperatures in the eastern equatorial Pacific subsequently rise, causing changes in precipitation across the Americas. While coastal Peru faces torrential rain, Brazil’s Nordeste experiences severe drought. These distinct relationships, or teleconnections, between ENSO and local climates generate different phenomena depending on the region. When Western Canadians enjoy an unusually warm winter, for example, Western Europeans may endure an especially cold one.
El Niño and drought in Northeastern Brazil therefore often coincide, but not always. The Brazilian Northeast has struggled with intermittent drought for centuries. Although its sugar- and cotton-heavy coast generally receives sufficient rain, the region suffered no fewer than forty-four distinct dry spells between 1557 and 1992, or approximately one every ten years. Removing an abnormally wet period from 1615 to 1691 reduces that average to roughly once every eight years. What is more, of the fifteen so-called “major” droughts—those spanning at least two consecutive summers—only six occurred before 1800, implying a quantitative and qualitative increase over the past 200 years. While some of these dry spells occurred in concert with ENSO, many did not. Water shortages plague the Nordeste regardless of ocean temperature.
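As a rough check on those figures (the simple year counts here are ours, using the endpoints given above):

$$\frac{1992 - 1557}{44} \approx 9.9\ \text{years per dry spell}, \qquad \frac{(1992 - 1557) - (1691 - 1615)}{44} = \frac{359}{44} \approx 8.2\ \text{years per dry spell}$$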
Different droughts affected the water-dependent Northeast differently. Though many were forgotten, some left indelible marks, none more than the Grande Seca. From 1877 to 1878, two “very strong” El Niño years dramatically increased water shortages and decimated the Nordeste, killing livestock and people by the tens of thousands. Ceará suffered most. As cattle and crop losses wiped out food supplies, the state’s death toll mounted. By 1878, 175,000 Cearenses had perished. All told, at least 500,000 nordestinos died and three million fled their homes. Newspapers from Ceará described the tragedy in heart-wrenching detail.
On 6 January 1877 (mid-summer), Cearense noted the first signs of hardship: “The lack of rains is already being felt. From Sobral and other … points of the province they tell us … the drought is … causing considerable damage.” Desperate letters painted a dismal picture. On 11 March, one man in Crato wrote: “We are with a terrible drought … and only God knows how painful this scourge will be.” Relayed another from Caixoçó: “The drought is ravaging everything, the mortality of cows is astonishing.”
The situation did not improve as the late rainy season of March gave way to early winter. One correspondent from Assaré feared complete human annihilation in the surrounding countryside, while O Retirante (“The Refugee”) lamented the “emaciated bodies of our little children, wives and fathers.” A letter published several days before Christmas ended 1877 on a depressing note: “Already we are in the middle of December and not any rain! The drought with all its procession of horrors proceeds, threatening to swallow everything.”
The Grande Seca officially ended in 1878, but its effects lasted far longer. The drought crippled Northeastern sugar barons, who had watched their investments wither since the early 1800s. Cotton growers, whose business boomed during and after the American Civil War (1861-1865), likewise faced renewed headwinds, while cattle ranchers counted their losses in the hundreds of thousands of head. The deadliest drought in Brazilian history, exacerbated by two consecutive years of exceptionally strong El Niño, therefore had a significant economic impact on the Nordeste, draining it of much-needed capital and contributing to the region’s lackluster development.
Above all, drought victims needed jobs, especially in Ceará. As an 11 March 1877 letter from Icó indicated, people often died “not because there [was] an absolute lack of foods, but because there [was] nothing with which to buy them.” Millions of desperate Cearenses therefore migrated to major population centers, hoping to find work. Among emigrants’ limited options, Brazil’s burgeoning rubber industry proved particularly appealing, both for its relatively high wages and geographical proximity.
Brazilian rubber production, based in the Amazon Valley – namely the states of Amazonas and Pará – did not begin until the late 1700s, after French explorer Charles Marie de La Condamine first watched natives use a “milky, viscous liquid” from the Hevea brasiliensis tree to make boots, toys, and bottles. Fueled by what amounted to a minor “gold rush,” exports of raw rubber and rubber products grew steadily through the early 1800s. The trade took off when Charles Goodyear discovered vulcanization in 1839, which made rubber resistant to extreme temperatures. Exports jumped from 388,260 kg in 1840 to 2,673,000 kg in 1860. Nevertheless, rubber remained largely irrelevant in Brazil until its first boom in the 1880s, when price increases and an influx of cheap labor pushed the commodity’s export share to 10 percent. That number soared to 39 percent by 1910. Brazil’s natural claim to Hevea made it the world’s largest producer for three decades.
Despite remarkable success, Brazilian rubber barons faced constant labor shortages throughout the late nineteenth and early twentieth centuries. The Grande Seca thus benefitted them immensely. Starving Cearenses, whom the rubber industry “desperately needed,” cared little about working conditions as long as they were paid, and so accepted jobs few others dared to take—among them tapping Hevea trees in a hot, disease-ridden rainforest.
During the Grande Seca, Ceará became a key state for labor recruiters from Amazonas and Pará. In 1916, Joseph Woodroffe, a European eyewitness, claimed immigration to the Amazon Valley consisted exclusively of Cearenses, largely in response to the drought. Weinstein, Barham and Coomes, Caviedes, and Resor also acknowledge the Grande Seca’s role in driving poor Cearenses to the jungle, where they supported plantations as cheap tappers (seringueiros). But despite catastrophic death tolls from 1877 onward, emigration did not find universal support in Ceará. On the contrary, Cearense and its editors openly opposed the state’s depopulation for economic and humanitarian reasons.
Cearense arranged the debate as follows. On 15 April 1877, an “enlightened friend” in Sobral noted: “We continue to think … one of the most useful ways of applying aid, to which the State is obligated, would be … to promote seriously the emigration of our population to more fertile and almost unpopulated regions of other provinces.” Several pages later, however, a disapproving column lamented the fact that thirty refugees had recently arrived in Fortaleza, Ceará’s capital, and hoped to reach the Amazon Valley. “This idea of emigration to other provinces,” the author mused, “is of incalculable disadvantages to Ceará.” Cearense’s publishers agreed, as future editions only “supported” emigration insofar as they acknowledged opposing views and occasionally allowed independent writers to criticize their claims.
The paper solidified its stance on 18 April. Emigration to Amazonas and Pará, it argued, was “harmful … to [Ceará] … because it [ripped out] a large number of strong arms for plowing.” Over the next seven months, such fears came up time and again. In July, for example, one writer professed concern for the state’s future: “…supposing [the drought] is transitory, how will we repopulate our deserted hinterlands if we remove … by means of a broad emigration, their natural inhabitants?” Together, these columns typified a standard economic argument against outmigration, namely that Ceará would need people to rebuild once the Grande Seca passed, and therefore could not absorb any more losses than necessary. But this only explains some of Cearense’s hostility toward open borders.
Though principally worried about Ceará’s financial prospects, educated nordestinos also expressed sympathy for destitute workers. Cearense printed articles throughout 1877 noting that rubber jobs in Amazonas and Pará were difficult and exploitative. On 18 April, the paper published several letters from Father José Thomaz, “who painted with blackest colors the luck of the poor emigrant, who is there [in the Amazon] reduced to the hardest and cruelest captivity by the rubber tappers to whom he hires his services.” Another pundit claimed Cearenses who left for Amazonas would likely “perish in the swamps.”
As more reports of emigration made their way into Cearense, so too did overt warnings. “Our wretched brothers who have gone to [Amazonas] have suffered horrible trials,” wrote one author on 18 October. Yet faced with certain death by disease or starvation, Cearenses continued to flee. By 23 September, at least 1,552 had crossed into the Amazon Valley, followed by hundreds more before the end of the year. Most left for rubber plantations.
Cearenses migrated by the thousands to Amazonas and Pará at the same time Brazil’s first rubber boom began (early 1880s). Those dates are no coincidence. While Amazonian elites owed their success to many different factors, drought-stricken nordestinos provided the foundation. Without adequate labor, there would never have been a rubber industry, let alone a profitable one.
Late nineteenth- and early twentieth-century Brazilian rubber production had far-reaching environmental consequences. When Emperor Pedro II created the province of Amazonas in 1850, Manaus, its capital, comprised little more than “a small collection of mud huts.” That changed rapidly as speculators flooded the region. The Amazonian North’s population quadrupled from 250,000 in 1853 to almost one million in 1910. Manaus and its Paraense counterpart, Belém, benefitted immensely: electricity, streetcars, exquisite theaters, and large ports graced the once-barren cities. Countless new rubber trails cut through the rainforest as well, in addition to increased traffic on the river. That said, the industry’s initial emphasis on wild Hevea trees delayed mass deforestation for several decades, while industrial cattle ranching, which would have required a dramatic physical reorganization of the Amazon Valley, lacked sufficient investment.
Droughts have shaped Northeastern Brazil for centuries, yet the Grande Seca stands out. Not only was it longer and drier than most, but it also came at a time of profound demographic and economic transformation in Brazil. That increased its death toll and its consequences for the human and environmental histories of Brazil.
The past, like the present, proves Earth’s interconnectedness. Environmental shifts “over there” will eventually affect us “here.” More than one hundred years ago, warming water in the Pacific Ocean changed the course of Brazilian history, driving extraordinary investment in the previously untapped Amazon Valley. In the same way, natural disasters, rising sea levels, and other symptoms of global warming will inevitably influence how all of us live our lives, regardless of geography.
There is no running away. We must face this crisis together.
Barham, Bradford L., and Oliver T. Coomes. Prosperity’s Promise: The Amazon Rubber Boom and Distorted Economic Development. Boulder: Westview Press, 1996.
Burns, E. Bradford. A History of Brazil: Third Edition. New York: Columbia University Press, 1993.
Caviedes, César N. El Niño in History: Storming Through the Ages. Gainesville: University Press of Florida, 2001.
Cearense. Fortaleza, Ceará. 1877. Available at: http://bndigital.bn.gov.br/hemeroteca-digital.
Glantz, Michael H. Currents of Change: El Niño’s Impact on Climate and Society. Cambridge: Cambridge University Press, 1996.
Gergis, Joëlle L., and Anthony M. Fowler. “A history of ENSO events since A.D. 1525: implications for future climate change.” Climatic Change 92, nos. 3-4 (2009): 343-387.
O Retirante. Fortaleza, Ceará. 1877. Available at: http://bndigital.bn.gov.br/hemeroteca-digital.
Pedro II. Fortaleza, Ceará. 1877. Available at: http://bndigital.bn.gov.br/hemeroteca-digital.
Quinn, William H. “A study of Southern Oscillation-related climatic activity for A.D. 622-1900 incorporating Nile River flood data.” In El Niño: Historical and Paleoclimatic Aspects of the Southern Oscillation, edited by Henry F. Diaz and Vera Markgraf, 119-150. Cambridge: Cambridge University Press, 1992.
Resor, Randolph R. “Rubber in Brazil: Dominance and Collapse, 1876-1945.” The Business History Review 51, no. 3 (1977): 341-366.
Villa, Marco Antonio. Vida e morte no sertão: História das secas no Nordeste nos séculos XIX e XX. São Paulo: Editora Ática, 2000.
Weinstein, Barbara. The Amazon Rubber Boom: 1850-1920. Stanford: Stanford University Press, 1983.
Woodroffe, Joseph F. The Rubber Industry of the Amazon and How Its Supremacy Can Be Maintained. Edited by Harold Hamel Smith. London: T. Fisher Unwin and Bale, Sons and Danielsson, 1916. Available at: https://archive.org/details/rubberindustryof00woodrich.
Dr. Bathsheba Demuth, Brown University
Most students at Brown University know Professor Kathleen Hess from the two-semester challenge of organic chemistry. But in a class that debuted this fall, “Exploration of the Chemistry of Renewable Energy,” Dr. Hess blended the tools of her discipline with questions about human impacts on the climate, renewable energy technologies, and the social consequences of how energy is generated and used. The result is a socially engaged course that combines social science and bench science. “I thought this would be a perfect way to teach students who were not science majors,” Hess explains. “That was my goal.”
Courses on climate or energy history, renewable energy, and the relationship between climate and society are now taught at universities and colleges across the country. Most are designed by faculty in humanities, earth science, or engineering departments. Hess’s class offers a new model. Inspired by the Chemistry Collaborations, Workshops, and Communities of Scholars (cCWCS) pedagogy seminars, Hess's syllabus combines interdisciplinary readings, guest lectures, writing assignments, and laboratory experiments. “I wanted to give students both background on the topic,” she says, “and then give them the hands-on experiment so they would have practical experience.”
The course began by examining why renewable energy sources are increasingly important. Students read about fossil fuel pollution, climate change, and energy politics. They also did lab experiments to calculate how much energy is required to light a classroom. Then the syllabus moved on to examine batteries, fuel cells and solar panels. Hess framed each topic around a question. “Scientists should always be enquiring rather than saying we’re just going to the lab to make such and such,” Hess says. In one case, the class spent several weeks researching sources and uses of biofuel energy. Then students went to the lab to make fuel out of food waste from the Brown dining halls. “The students were really excited about this,” Hess notes. But when the class compared the energy yield to other fuels, “there was a lot of ‘oh, this is why we don’t do this,’” Hess says. “It was more of an illustration than just looking at another graph, because they saw and understood the processes involved.”
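By way of illustration only (the article does not give the class's actual figures), the lighting calculation might run along these lines, assuming a classroom lit by 30 fluorescent fixtures of 32 W each for 6 hours a day:

$$E = n \times P \times t = 30 \times 32\,\mathrm{W} \times 6\,\mathrm{h} = 5{,}760\,\mathrm{Wh} \approx 5.8\,\mathrm{kWh}\ \text{per day}$$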
In another case, students produced acid rain in a petri dish. Unlike history or policy classes, where acid rain is a topic, or most chemistry classes, where experiments are done in solution, Hess’s students saw “how concrete and bridges erode, and saw how materials travel through the air.” Students also designed experiments to measure individual carbon emissions. In another experiment, the class made their own hydrogen fuel cells, which required working with hydrochloric acid. Hess says hands-on exercises like this generated a great deal of student enthusiasm (and not just energy between fuel cells), but they were also complex and delicate. This was sometimes a challenge for students not used to the lab sciences. “Sometimes just getting ready to do the labs,” Hess says, “took some time and explaining. Sometimes they didn’t know how to start. So there could be a bit of inertia there.” Overall, however, Hess found “the level of student interest was really high. At the end of the course the students told me that none of the lab assignments felt like homework, because they were so enjoyable.”
Across case studies, Hess linked the experiments back to social, political, and economic questions. Hess says her class arrived with “quite a few preconceived notions about why people believe in global warming or not, why they’re interested in renewable energy or not.” Through readings and lectures that covered climate change, the development of the current energy grid, the history of the electric car, the use of solar panel systems, and how humans have used different energy sources in the past, students started thinking about “how none of them have ever lived without power – without a light switch to turn on.” Students read about everything from global energy transitions to oil company correspondence about fossil fuel development. “I wanted them to see that we can always judge why people use the resources they do,” Hess explains, “but there are multiple sides to the story.”
Seeing these multiple sides helped students understand that mastering physical principles and technologies in the lab “was one thing, but how to incorporate it into society is another,” Hess says. She had each student choose a renewable technology – from algal biofuels to concentrated solar – and design a brochure to convince consumers to use a new source of energy. Students also presented the results of their alternative energy research to the class. For Hess, this was the most inspiring part of the course. As each student learned to combine their technical knowledge from labs with their research on specific fuels, she says “they felt that was encouraging because they had to come up with an alternative energy to talk about, and knew collectively about all these different options.”
While thinking about climate change and the future is often discouraging and leaves individuals unsure how to respond, Hess found this course affirmed her sense that “education is the first step away from not knowing what to do. Especially mindful education where we don’t just judge things, but examine the combination of physical processes and assumptions that make them happen.” The best approaches to teaching climate change often combine perspectives from many disciplines, from the sciences to the humanities.
Dr. Dagomar Degroot, Georgetown University
The world is warming, and it is warming fast. According to satellites and weather stations, Earth's average annual temperature will smash the instrumental record this year, likely by around 0.1°C. Last year, global temperatures broke the record by around the same amount. That may not seem impressive, but consider this: temperatures have climbed by about 0.1°C per decade since the 1980s. In just two years, therefore, our planet catapulted two decades into a hotter future.
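The arithmetic behind that claim is a one-liner:

$$\frac{2 \times 0.1\,^{\circ}\mathrm{C}\ \text{(two record-breaking years)}}{0.1\,^{\circ}\mathrm{C}\ \text{per decade}} \approx 2\ \text{decades}$$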
Global climate change on this scale, with this speed, is unprecedented in the history of human civilization. Yet that history has still coincided with other, smaller but still impressive changes in Earth's climate. Humans may have played a minor role in some of these changes. The key culprits, however, were often violent volcanic eruptions that coincided with periods of unusual solar activity. The most dramatic climate changes usually involved global cooling, not warming. The consequences for communities and societies around the world could be profound, in ways that offer lessons for our fate in a changing climate.
One of the coldest periods in the history of human civilization started in the early sixth century CE. Growth rings in trees suddenly narrow around 536 CE, and again around 541 CE. This narrowing reveals that trees practically stopped growing as Northern Hemisphere temperatures plunged by as much as 3°C, relative to long-term averages.
Other scientific "proxy" sources that responded to past climate changes reveal the same trend. A large team of interdisciplinary scholars, led by Ulf Büntgen, recently concluded that 536 CE was the first year of a "Late Antique Little Ice Age" (not to be confused with the better-known Little Ice Age of the early modern period) that chilled the Northern Hemisphere and perhaps the globe until 660 CE.
What could have caused this cooling? Cosmogenic isotopes tell us that solar activity had been falling for more than a century, as the sun gradually entered a "grand solar minimum." But that does not explain why Earth's climate changed so profoundly, and so abruptly, in the early sixth century CE.
Scientists now believe that ice cores containing traces of volcanic ash provide compelling evidence for a remarkable series of major eruptions, in 536, 540, and 547 CE. Big volcanic eruptions in the tropics can cool the Earth by releasing sunlight-scattering sulphur into the atmosphere. Trade winds swirling up from the equator bring this sulphur into both hemispheres, which ultimately creates a global volcanic dust veil. When eruptions happen in quick succession, Arctic sea ice can expand dramatically. Since bright sea ice reflects more sunlight than water, the Earth cools in response, which of course leads to more sea ice, more cooling, and so on.
Catastrophic volcanic eruptions, coinciding as they did with a prolonged decline in solar activity, may well have released enough aerosols into the atmosphere to usher in a much cooler climate. Yet sixth-century layers in Greenlandic ice cores may also suggest a very different, and even more exotic, culprit for climatic cooling.
Somehow, microscopic marine organisms of a kind normally found near tropical coasts ended up in ice layers that correspond to 536 and 538 CE. Layers dating from 533 CE also hold nickel and tin, substances that rarely appear in Greenlandic ice. Both metals are common in comets, however.
A team of scientists led by Dallas Abbott recently concluded that dust from the tail of Halley's Comet may have started cooling the Earth as early as 533 CE. By reconstructing the past orbits of the comet, scientists discovered that it made a particularly close pass around the Sun in 530 CE. At around that time, Chinese astronomers recorded a remarkably bright comet in the night sky.
Earth regularly passes through debris left in the wake of Halley's Comet, and that debris might have been especially dense in the 530s and 540s. Meteor showers, therefore, may well have left cooling dust in the atmosphere, and metals in the ices of Greenland.
Tidal forces created by the gravity of a massive object - such as the Sun - can easily fragment cometary nuclei, most of which are collections of rubble left over from the primordial solar system. Dust released by such a breakup can dramatically brighten a comet. Perhaps that is what Chinese astronomers witnessed in 530 CE, as Halley's Comet swung around the Sun.
According to Abbott and her coauthors, a piece of the comet may then have collided with Earth, launching sea creatures high into the atmosphere. Melted metal and gravity anomalies in the Gulf of Carpentaria off Australia suggest that an impact happened there sometime in the first millennium CE. At around the same time, Aboriginal Australians etched symbols into caves that may well have represented comets.
It may well be that an extraordinary confluence of extraterrestrial impacts and volcanic eruptions, coinciding with a gradual fall in solar activity, chilled the Earth in the 530s and 540s CE. These dramatic environmental changes naturally astonished contemporary writers. In 536 CE, Procopius of Caesarea, a major scholar of the Eastern Roman Empire, wrote that the “sun gave forth its light without brightness, like the moon.” According to John of Ephesos, “there was a sign in the sun the like of which had never been seen and reported before in the world . . . The sun became dark and its darkness lasted for one and a half years."
A Syrian chronicler recorded that "The earth and all that is upon it quaked; and the sun began to be darkened by day and the moon by night." Chinese astronomers lost sight of Canopus, one of the brightest stars in the night sky. If there was a dust veil, it may well have been thick enough to obscure the heavens, whatever its origins.
Cassiodorus, a Roman statesman in the service of the Ostrogoths, wrote perhaps the most striking descriptions of the changes in Earth's atmosphere. "Something coming at us from the stars," he explained, had led to a "blue colored sun," a dim full moon, and a "summer without heat." Amid "perpetual frosts" and "unnatural drought," plants refused to grow and "the rays of the stars have been darkened." The cause, to Cassiodorus, must be high in the atmosphere, for "things in mid-space dominate our sight," and the "heat of the heavenly bodies" could not penetrate what seemed like mist.
Of course, we must guard against the assumption that observers such as Cassiodorus or Procopius simply recorded what they saw in the natural world. Descriptions of environmental calamities in ancient, medieval, and even early modern texts can be allegorical, representing social, not environmental, developments. Still, many authors wrote eerily similar accounts of the real environmental upheavals of the 530s CE. To the modern eye, the account of Cassiodorus in particular may seem to add evidence for a cometary cause of contemporary cooling.
As temperatures plummeted and plants withered, communities around the world suffered. Scientists have examined pollen deposits that reveal sharp drops in the extent of cultivated land across Europe. Shorter growing seasons probably led to food shortages and famines that emptied once-thriving villages. Archaeological evidence suggests, for example, that Swedes abandoned most of their population centers in the sixth century, which were then swallowed by forests. Swedish survivors apparently created new towns in far smaller numbers, in upland areas removed from their former dwelling places.
Famines may have had particularly severe consequences across the densely populated Mediterranean. In 533 CE, just as cometary dust may have started entering Earth's atmosphere, the emperor of the Eastern Roman Empire, Justinian I, embarked on a costly campaign to restore the Western Empire. His subsequent wars in the Mediterranean, combined with a war against the Sassanid Empire that erupted in 540 CE, drew precious resources from the imperial countryside. As growing seasons shortened, the demands of war compounded food shortages for millions of imperial citizens. Starvation spread through the empire, but worse was to come.
Malnutrition reduces fat-storing cells that produce the hormone leptin, which plays a key role in controlling the strength of the human immune system. In the sixth century, food shortages therefore weakened immune systems on a grand scale, leaving millions of people more vulnerable to disease. Those who survived famines also migrated to new towns or cities, increasing the likelihood that those infected with diseases would spread them.
Unfortunately for the inhabitants of what was left of the Roman Empire, Yersinia pestis, the pathogen behind the bubonic plague, was about to make its first appearance in Europe. From 541 to 542 CE, the “Plague of Justinian” swept through both the Western and Eastern halves of the Roman Empire, killing as many as fifty million people. In a warmer, more stable climate, the death toll may well have been far lower.
Not surprisingly, Justinian's campaign to retake the Western Empire stalled after the early 530s CE, although the reunified Roman Empire did reach its maximum extent in the 550s CE. Imperial resources were stretched thin, however, and European kingdoms reversed most of the new conquests soon after Justinian's death.
Climatic cooling probably had cultural consequences, too. There are signs, for example, that religious activity surged across Scandinavia as temperatures plunged. In times of crisis, devout Scandinavians offered gold to their gods in a way we might find counterintuitive: by burying it. Dating these underground hoards is tricky, but it seems that Scandinavians buried most of them in the sixth century CE. These burials contributed to a gold shortage in Scandinavia that would endure for centuries.
The great oral traditions of Norse mythological poetry also date from the sixth century. Most people have heard of Ragnarök: the "twilight of the gods" that ends with the Earth incinerated and reborn. Fewer have come across the concept of Fimbulvetr, the "mighty winter" that heralds the final battle of the gods.
The Prose Edda, a thirteenth-century transcription of Norse mythology, describes Fimbulvetr in vivid detail. “Then snow will drift from all directions," the Edda predicts. "There will then be great frosts and keen winds. The sun will do no good. There will be three of these winters together and no summer between.” According to the Poetic Edda, a collection of poems also committed to writing in the thirteenth century, “The sun turns black . . . The bright stars vanish from the sky.”
These precise descriptions of an apocalyptic winter have no parallel in other religious texts or mythical traditions. Instead, they echo the sixth-century reports of Cassiodorus, Procopius, and other astonished observers of real environmental transformations. Scandinavians fleeing their homes amid catastrophic cooling may well have felt like they were living through a preview of the apocalypse.
The trauma caused by sixth-century environmental changes may therefore be imprinted on Norse mythology. Ideas of a new world in the wake of Ragnarök may also reflect the consequences of real events, such as the new settlements and cultures that emerged amid climatic cooling.
Can these ancient calamities offer any lessons for our warmer future? Perhaps. They suggest, for example, that complex, densely populated societies, far from being insulated from the effects of climate change, may actually be most at risk. When populations brush up against the carrying capacity of agricultural land, sudden environmental shifts can be catastrophic. In these situations, societies already embroiled in resource-draining wars could be particularly vulnerable. The consequences of sixth-century cooling hint, also, that responses to even short-lived climatic upheavals can profoundly alter cultures in ways that endure for centuries, or even millennia.
Ancient societies, of course, have little similarity to our own. Yet their struggles in periods of dramatic climate change may still shed some light on our prospects in a warming world. To understand the future, we would be well served to look back at the distant past.
Abbott, Dallas H., Dee Breger, Pierre E. Biscaye, John A. Barron, Robert A. Juhl, and Patrick McCafferty. "What caused terrestrial dust loading and climate downturns between AD 533 and 540?." Geological Society of America Special Papers 505 (2014): 421-438.
Arjava, Antti. "The mystery cloud of 536 CE in the Mediterranean sources." Dumbarton Oaks Papers 59 (2005): 73-94.
Axboe, Martin. "The year 536 and the Scandinavian gold hoards." Medieval Archaeology 43 (1999).
Gräslund, Bo, and Neil Price. "Twilight of the gods? The ‘dust veil event’ of AD 536 in critical perspective." Antiquity 86:332 (2012): 428-443.
Hamacher, Duane W. "Comet and meteorite traditions of Aboriginal Australians." Encyclopaedia of the History of Science, Technology, and Medicine in Non-Western Cultures (2014): 1-4.
Widgren, Mats. "Climate and causation in the Swedish Iron Age: learning from the present to understand the past." Geografisk Tidsskrift-Danish Journal of Geography 112:2 (2012): 126-134.
Dr. Tim Newfield, Princeton University, and Dr. Inga Labuhn, Lund University.
Carolingian mass grave, Entrains-sur-Nohain, INRAP.
Will climate change trigger widespread food shortages and result in huge excess mortality in our future? Many historians have argued that it has before. Anomalous weather, abrupt climate change, and extreme dearth are often linked in articles and books on early medieval demography, economy and environment. Few historians of early medieval Europe would now doubt that severe winters, droughts and other weather extremes led to harvest failures and, through those failures, food shortages and mortality events.
Most remaining doubters adhere to the idea that food shortages had causes internal to medieval societies. Instead of extreme weather or abrupt climate change, they blame accidents of (population) growth, deficient agrarian technology, unequal socioeconomic relations and weak institutions. Yet only rarely have such explanations stolen the show or dominated the scholarship. For example, Amartya Sen’s “entitlement approach” to subsistence crises, which assigns primary importance to internal processes, has made few inroads in the literature on early medieval dearth, although in later periods it has many adherents.
Of course, the idea that big events have a single cause – monocausality, in other words – rarely convinces historians for long. Famine theorists and historians of other eras and world regions now argue that neither external forces such as weather, nor internal forces such as entitlements, alone capture the complexity of food shortages. They propose that these two explanatory mechanisms, often labeled “exogenous” and “endogenous,” respectively, should not be considered independent of one another or mutually exclusive. To them, periods of dearth can be explained by environmental anomalies, like unusual and severe plant-damaging weather, that coincide with socioeconomic vulnerability and declining (for most people) entitlement to food.
These explanations are more convincing. It seems that diverse factors acted in concert to cause, prolong and worsen food shortages. But proof for complex explanations for dearth in the distant past is hard to come by. Though they can be misleading, simpler, linear explanations are much easier to pull out of the extant evidence. This is true even when the sources are plentiful, as they are, at least by early medieval standards, for some regions and decades of Carolingian Europe. Food shortages in the Carolingian period, especially those that occurred during the reign of Charlemagne, have attracted the attention of scholars since the 1960s.
Left: Bronze equestrian statuette of Charlemagne or possibly his grandson Charles the Bald (823-877). Discovered in Saint-Étienne de Metz and now in the Louvre. The figure is ninth century in date. The horse might be earlier and Byzantine. Charles the Bald ruled the western portion of the post-Verdun empire, although whether he was actually bald is still debated.
Right: A Carolingian denarius (812-814) depicting Charlemagne. The Charlemagne of the Charlemagne reliquary mask (Center) is handsomer. The coin, though, is contemporary, while the bust dates from the mid-fourteenth century. Housed in the Aachener Dom’s treasury, the reliquary contains a skullcap thought to be that of the emperor.
For the Carolingian period, ordinances from the royal court, capitularies, reveal hoarding and speculation, and document official attempts to control the prices and movements of grain, while annalists and hagiographers recount severe winters and droughts. All of this evidence sheds light on dearth. Yet the legislative acts point to internal pressures on food supply, while the narrative sources highlight external ones. As we have seen, neither pressure adequately explains subsistence crises alone.
Unfortunately, however, we rarely have evidence for endogenous and exogenous factors at the same time. Around the year 800, when Leo III crowned Charlemagne imperator, most evidence for dearth comes from the capitularies. Before and after, narrative evidence dominates. So Charlemagne’s food shortages appear to have had internal drivers, and Charles the Bald’s external ones. Or so the written sources lead us to believe.
Carolingian Europe as of August 843 following the Treaty of Verdun. Under rex and imperator Charlemagne (742-814), Carolingian territory stretched to include the area of Europe outlined here.
Fortunately, evidence from other disciplines allows historians to fill in some of the gaps. External pressures are easier to establish by turning to the palaeoclimatic sciences. Using them, we are beginning to rewrite the history of continental European dearth, weather and climate from 750 to 950 CE. We are working on a new study that combines a near-exhaustive assessment of Carolingian written evidence for subsistence crises and weather with scientific evidence for changes in average temperature, precipitation, and volcanic activity (which can influence climate).
We are trying to answer some big questions, such as: What role did droughts, hard winters and extended periods of heavy rainfall have in sparking, prolonging or worsening Carolingian food shortages? Were these external forces the classic triggers of dearth that many early medievalists think they were?
Indicators of past climate embedded in trees and ice can test and corroborate observations of anomalous temperature and precipitation. For instance, the droughts of 794 and 874 CE, documented respectively in the Annales Mosellani and Annales Bertiniani, show up in the tree-ring-based Old World Drought Atlas (OWDA, see below). Additionally, as McCormick, Dutton and Mayewski demonstrated, multiple severe Carolingian winters also align fairly neatly with atmosphere-clouding Northern Hemisphere volcanism reconstructed using the GISP2 Greenlandic ice core.
The Old World Drought Atlas (OWDA) for 794 and 874. Negative values indicate dry conditions, positive values indicate wet conditions (from Cook et al. 2015).
By marrying written and natural archives, we can refine our understanding of the scale and extent of the weather extremes that coincided with Carolingian periods of dearth. Yet instead of simply providing answers, our integrated data are raising questions, and pushing us towards a messier history of early medieval food shortage. This is because the independent lines of evidence often do not agree. For example, only two of the 15 driest years between 750 and 950 CE in the OWDA coincide with drought in Carolingian sources.
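The comparison behind that figure is conceptually simple. Below is a minimal sketch in Python, with invented placeholder values rather than real OWDA data or the authors' actual code: rank the reconstructed years from driest to wettest, keep the driest N, and count how many coincide with droughts reported in the written sources.

```python
# Illustrative sketch only: cross-checking a tree-ring drought reconstruction
# against drought years reported in written sources. The index values below
# are invented placeholders; a real analysis would draw on the Old World
# Drought Atlas (Cook et al. 2015) and a full catalogue of reported droughts.

documented_droughts = {794, 874}  # drought years noted in the annals (examples above)

# Hypothetical reconstructed summer drought index by year (more negative = drier).
owda_index = {
    750: -1.2, 772: -2.9, 794: -2.8, 805: -3.1,
    874: -2.5, 900: 0.4, 921: -3.6, 943: -1.8,
}

# Rank years from driest to wettest and keep the driest N.
N = 5
driest_years = sorted(owda_index, key=owda_index.get)[:N]

# How many of the reconstructed extremes were also reported as droughts?
overlap = documented_droughts.intersection(driest_years)
print(f"Driest reconstructed years: {sorted(driest_years)}")
print(f"Also reported as droughts:  {sorted(overlap)}")
```

In practice, of course, the hard part is not the intersection but the spatial and seasonal matching discussed below.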
Admittedly, some of this dissonance may be artificial. The written record for weather and dearth is incomplete: some places and times during the Carolingian era, broadly defined as it is here, are poorly documented. Reported drought years can also appear relatively wet in the tree-based OWDA in some Carolingian regions (parts of northern Italy and Provence in 794 and 874, for instance).
Moreover, the detailed or “high-resolution” palaeoclimatology available now for early medieval Europe is much better for some regions than others. Tree-ring series extending back to 750 presently exist for few European regions. It is simply not possible to precisely pair some reported weather extremes or dearths to palaeoclimate reconstructions. Indeed, spatially the two lines of evidence can be mismatched. They can also be seasonally inconsistent, as the trees tell us far less about temperature and precipitation in the winter than they do for the summer.
Matches between historical and scientific evidence are therefore generally limited to the growing season, in places where written sources and palaeoclimate data overlap. That is enough to yield some surprising results. When the written record is densest, there is natural evidence for severe weather and rapid climate change, but no written evidence for food shortages.
Take the dramatic drop in average temperatures registered in European trees at the opening of the ninth century. According to the 2013 PAGES 2K Network European temperature reconstruction, temperatures were cooler around the time of Charlemagne’s coronation than they had been at any time between the mid sixth and early eleventh centuries. This dramatic cooling aligns well with a relatively small Northern Hemisphere volcanic eruption, detected in the recent ice-core record of volcanism compiled by a team led by Sigl. The eruption would have ejected sunlight-scattering sulfur aerosols into the atmosphere. Notably, larger events in the Carolingian era, like those of 750, 817 and 822, clearly had less of an influence on European temperature. The cold of 800 is equally pronounced but less unusual in a tree-based temperature reconstruction from the Alps. In this series, the late 820s are markedly cooler.
Documentary sources register the falling temperatures. The Carolingian Annales regni francorum report severe growing-season frosts (aspera pruina) in 800. The Irish Annals of Ulster document a harsh and deadly winter in an entry quite possibly misdated in the Hennessy edition to 798 (799 or the 799/800 winter is more likely). Yet surprisingly, there is no contemporary record of food shortages in Europe.
Top: European temperature reconstruction, 0-2000 CE (data from PAGES 2K Consortium, 2013).
Bottom: a composite figure. Top panel: Sigl et al 2015 ice-core record of global volcanic forcing (GVF); middle panels: PAGES 2K Consortium 2013 European temperatures (red) and Büntgen et al 2011 Alpine temperature reconstruction (burgundy); bottom panel: written evidence for food shortages, both famines (F) and lesser shortages (LS). ‘W’ indicates no evidence for dearth but evidence for extreme weather. Between 750 and 950 we have identified 23 food shortages: 12 spatially and temporally circumscribed lesser shortages and 11 large multi-year famines.
Scholars tend to focus on instances when the written evidence for dearth and the natural evidence for anomalous weather align tidily. It seems that just as often, however, the two lines of evidence do not match so neatly. Severe weather may not always have triggered dearth in the early Middle Ages. Contemporary peoples could apparently cope with weather extremes in ways that allowed them to escape food shortages.
Early medieval vulnerability to external forces of dearth seems to have varied over space and time. We need to investigate how peoples in different early medieval regions and subperiods, participating in distinct agricultural economies with their own agrarian technologies, differed in their capacity to withstand plant-damaging environmental extremes.
Several studies already suggest early medievals were capable of responding to gradual climate change. But to argue that they were not rigid or helpless when faced with marked seasonal temperature or precipitation anomalies, we must first identify, from sparse sources, potential moments of resilience. In this we run the risk of reading too much into absences of evidence. Yet the conclusion seems inescapable: when written sources are relatively abundant and there is no record of dearth during notable deviations in temperature and precipitation, early medievals must have adapted successfully.
Going forward, we must identify both moments and mechanisms of early medieval resilience in the face of climate change. Teasing these out from diverse sources might be tough going, but these elements are missing from the history of early medieval dearth and climate. Their omission has allowed for misleadingly neat histories of climate change and disaster in the period. Similar problems might well plague other histories that too clearly link climate changes to food shortages and mortality crises. Research that complicates these links could offer compelling new insights about our warmer future.
Authors' note: this is a short sampling of a much longer and more detailed multidisciplinary examination of Carolingian dearth, weather and climate, currently in preparation.
P. Bonnassie, “Consommation d’aliments immondes et cannibalisme de survie dans l’Occident du Haut Moyen Âge” Annales: Économies, Sociétés, Civilisations 44 (1989), pp. 1035-1056.
U. Büntgen et al., “2,500 Years of European Climate Variability and Human Susceptibility” Science 331 (2011), pp. 578-582.
U. Büntgen and W. Tegel, “European Tree-Ring Data and the Medieval Climate Anomaly” PAGES News 19 (2011), pp. 14-15.
F. Cheyette, “The Disappearance of the Ancient Landscape and the Climatic Anomaly of the Early Middle Ages: A Question to be Pursued” Early Medieval Europe 16 (2008), pp. 127-165.
E. Cook et al., “Old World Megadroughts and Pluvials during the Common Era” Science Advances 1 (2015), e1500561.
S. Devereux, Theories of Famine (Harvester Wheatsheaf, 1993).
R. Doehaerd, Le Haut Moyen Âge occidental: Economies et sociétés (Nouvelle Clio, 1971).
P.E. Dutton, “Charlemagne’s Mustache” and “Thunder and Hail over the Carolingian Countryside” in his Charlemagne’s Mustache and Other Cultural Clusters of a Dark Age (Palgrave, 2004), pp. 3-42, 169-188.
M. McCormick, P.E. Dutton and P. Mayewski, “Volcanoes and the Climate Forcing of Carolingian Europe, A.D. 750-950” Speculum 82 (2007), pp. 865-895.
T. Newfield, “The Contours, Frequency and Causation of Subsistence Crises in Carolingian Europe (750-950)” in P. Benito i Monclús ed., Crisis alimentarias en la edad media: Modelos, explicaciones y representaciones (Editorial Milenio, 2013), pp. 117-172.
PAGES 2k Network, “Continental-Scale Temperature Variability during the Past Two Millennia” Nature Geoscience 6 (2013), pp. 339-346.
K. Pearson, “Nutrition and the Early Medieval Diet” Speculum 72 (1997), pp. 1-32.
A. Sen, Poverty and Famines: An Essay on Entitlement and Deprivation (Oxford University Press, 1981).
M. Sigl et al., “Timing and Climate Forcing of Volcanic Eruptions for the Past 2,500 Years” Nature 523 (2015), pp. 543-549.
P. Slavin, “Climate and Famines: A Historical Reassessment” WIREs Climate Change 7 (2016), pp. 433-447.
A. Verhulst, “Karolingische Agrarpolitik: Das Capitulare de Villis und die Hungersnöte von 792/793 und 805/806” Zeitschrift für Agrargeschichte und Agrarsoziologie 13 (1965), pp. 175-189.
Dr. Bathsheba Demuth, Brown University.
The Greenlandic coast. Source: TheBrockenInaGlory, Wikimedia Commons, 2005, commons.wikimedia.org/wiki/File:Greenland_coast.JPG
In the year 1001 CE, Leif Erikson made landfall in Greenland, and traded with people who “in their purchases preferred red cloth; in exchange they had furs to give.” The Vikings called these people Skraelings. Present-day archeologists and historians call them the Thule. At its height, Thule civilization spread from its origins along the Bering Strait across the Canadian Arctic and into Greenland. The ancestors of today’s Inuit and Inupiat, the Thule accomplished what Erikson and subsequent generations of Europeans never managed: living in the high Arctic without supplies of food, technology, and fuel from more temperate climates.
The Thule left archeological evidence of a technologically sophisticated, vigorous people. They invented the umiak, an open walrus-hide boat so large that it was sometimes equipped with a sail. These boats, when used alongside small, nimble kayaks, made the Thule formidable marine-mammal hunters. On land, they harnessed dogs to sleds and built homes half-underground, insulated by earth and beamed with whale bones.
People did inhabit the high North American Arctic before the Thule. Their immediate predecessors, called the Dorset by archeologists, were expert carvers, and there are signs of other cultures that date back at least five thousand years. But the Thule appear to have been a particularly robust society, one that inhabited thousands of challenging Arctic miles. Eventually, they even traded with Europeans for metal tools, sending walrus ivory as far abroad as Venice.
Thule migration routes from the Bering Strait east. Map credit: anthropology.uwaterloo.ca/ArcticArchStuff
In the twentieth century, many archeologists linked the success of the Thule to the climate. In this view, rapid Thule expansion coincided with the Medieval Warm Period in the years between 1000 and 1300. The Thule were expert whalers, especially of bowhead whales. This slow species makes for good prey: a bowhead’s 100-ton body can be fifty percent fat by volume, giving people ample calories to eat and burn through long winters. With the slight increase in temperature during the Medieval Warm Period, the theory went, the range of the bowhead whale expanded across newly ice-free waters. Atlantic and Pacific bowhead populations eventually met in the Arctic Ocean north of Canada, offering an uninterrupted banquet of blubber to hunters.
The Thule, in this view, were simply whale hunters who followed the migration of their prey in a warming climate. Environmental conditions, not a sophisticated culture, were the key explanation for their success. Emphasizing climate as the cause of migration and social success reduced the achievements of the Thule, essentially, to those of their prey.
However, twenty-first century evidence is changing this account of Thule migration. In 2000, Robert McGhee questioned the validity of the radiocarbon dates that helped establish Thule expansion as an eleventh-century phenomenon. He proposed the 1200s as the earliest date of migration. Then, genetic tests by marine biologists showed that Atlantic and Pacific bowhead whales did not mix their populations during the Medieval Warm Period, meaning that there was a substantial gap in whaling possibilities on the Arctic coast.
Something more complicated than just following the blubber drove the Thule eastward. McGhee speculated that communities moved for iron, which is in short supply in the Arctic. Thule hunters learned from the Dorset people of a deposit left by the Cape York meteorite. They colonized huge territories to secure their access to this precious resource from outer space. Other specialists theorized that population pressure, overhunting, or warfare led the Thule to migrate east.
Thule archeological site, with whalebone beams among flooring stones. Photo credit: anthropology.uwaterloo.ca/ArcticArchStuff
The ongoing work of Canadian archeologists T. Max Friesen and Charles D. Arnold seems to confirm that we must look beyond simple climatic explanations for the Thule expansion. Working on Beaufort Sea and Amundsen Gulf sites, the pair established that there was no definitive Thule occupation in this part of the western Arctic prior to the thirteenth century. Because any Thule migrants would have had to pass through these points as they moved east, their research indicates that the Thule civilization was only beginning its continental spread around the year 1200, well into the period of warming. The climate may have helped the Thule quickly spread toward Greenland, but the onset of the Medieval Warm Period did not automatically draw people eastward.
Moreover, the work of other archeologists on the Melville Peninsula, along Baffin Bay, indicates that the Medieval Warm Period was not always so warm. Some areas of the Arctic saw slight temperature increases, but in general the millennium was cooler than those that preceded it. In places, the effects of the so-called Little Ice Age began a century or two before they were evident across the globe, meaning the Thule adapted not to a warmer Arctic, but to a colder one. This cooling was more apparent in the west, where the team found fewer Thule sites but also more stability, both in the climate and in the record of human occupation. To the east of the Melville Peninsula, where temperatures did warm, the climate was also more variable – adding a new set of complexities to social and economic life. The move into the central Arctic, therefore, reflected forces other than climate.
Beginning in the fifteenth century, Thule culture fragmented, specialized, and eventually emerged as distinct contemporary Inuit and Inupiat groups. The Little Ice Age is the reason often given for this disintegration. Yet the work of Finkelstein, Ross, and Adams indicates that, while the Thule abandoned some sites due to cooling trends, this did not hold in all cases. Other causes, including increased contact with Europeans and their infectious diseases, might have had more to do with the disintegration in some locations.
Overall, the new vision of Thule prominence in the Arctic makes their rise shorter, but even more impressive. And if the Thule began their migration only in 1200, it seems unlikely they spread east simply to find iron: that would have required only small-scale movements to precise locations. Instead, the Thule developed a thriving, intricate network of settlements across the Arctic. For Friesen and Arnold, this is evidence that the Thule expanded in order to recreate the ideological and economic lives that they had enjoyed in their origins along the Bering Strait. And in just a century they did just that, not only inhabiting land from the Bering Strait to Greenland, but also exploring the northern edges of the continent.
All of this also helps us reinterpret a well-known tale from the Viking exploration of the Arctic. When Leif Erikson’s sister Freydis frightened off a band of Skraelingar in the early eleventh century by striking “her breast with the naked sword” of a fallen Viking, she was likely not fighting the Thule, as scholars have assumed. Perhaps it was the Dorset people that “were frightened, and rushed off in their boats.” The Thule, at least, were likely still a century away from the eastern Canadian coastline. They were not easily daunted either by a shifting climate or by Viking weapons.
Quotes from the Saga of Erik the Red, English translation by J. Sephton, can be found here: http://www.sagadb.org/eiriks_saga_rauda.en
Friesen, T. Max and Charles D. Arnold. “The Timing of the Thule Migration: New Dates from the Western Canadian Arctic,” American Antiquity 73 (2008): 527-538.
Finkelstein, S.A., J.M. Ross, and J.K. Adams. “Spatiotemporal Variability in Arctic Climates of the Past Millennium: Implications for the Study of Thule Culture on Melville Peninsula, Nunavut,” Arctic, Antarctic, and Alpine Research 41 (2009): 442-454.
McGhee, Robert. “Radiocarbon Dating and the Timing of the Thule Migration,” in Appelt, M., Berglund, J., and Gulløv, H.C., eds. Identities and Cultural Contacts in the Arctic: Proceedings from a Conference at the Danish National Museum. Copenhagen (2000): 181-191.
Morrison, David. “The Earliest Thule Migration.” Canadian Journal of Archaeology 22 (1999): 139-156.
Betts, Matthew, and T. Max Friesen. “Quantifying Hunter-Gatherer Intensification: A Zooarchaeological Case Study from Arctic Canada,” Journal of Anthropological Archaeology 23 (2004): 357-384.
Dyke, Arthur S., James Hooper, and James M. Savelle. “A History of Sea Ice in the Canadian Arctic Archipelago based on Postglacial Remains of the Bowhead Whale (Balaena mysticetus)”, Arctic 49 (1996): 235-255.
Park, Robert W. “The Dorset-Thule Succession Revisited,” in Appelt, M., Berglund, J., and Gulløv, H.C., eds. Identities and Cultural Contacts in the Arctic: Proceedings from a Conference at the Danish National Museum. Copenhagen (2000): 192-205.
Dr. Gabriel Henderson, Aarhus University
Counting in everyday life is a relatively straightforward affair: one, two, three, and on and on. Less simple is the process of reliably counting the number of sunspots on the surface of the sun. Sunspots are darkened areas on the solar surface. In Europe, people had known of their existence since at least the early 17th century, and some of the larger sunspots were probably noted long before Galileo. Elsewhere, sunspot counts were maintained for much longer. Counting these darkened areas is one of the most effective ways to establish a record of the evolution of solar behavior. Not only do sunspot observations provide crucial information about changes in the sun’s magnetic field, they also strongly correlate with long-term fluctuations in the amount of energy released by the sun – the so-called solar cycle.
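The article does not spell out how such counts are standardized, but the index astronomers have used since Rudolf Wolf in the nineteenth century – the relative (or Wolf) sunspot number – gives a sense of the procedure: spots cluster in groups, and the index weights groups ten times as heavily as individual spots. A minimal sketch, in which the scaling factor k (which corrects for observer and instrument differences) is illustrative:

```python
def wolf_number(groups: int, spots: int, k: float = 1.0) -> float:
    """Relative (Wolf) sunspot number: R = k * (10g + s).

    groups -- number of distinct sunspot groups on the visible disk
    spots  -- total count of individual spots
    k      -- observer/instrument scaling factor (1.0 for the reference observer)
    """
    return k * (10 * groups + spots)

# Three groups containing eleven spots in total:
print(wolf_number(groups=3, spots=11))  # -> 41.0
```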
Yet, in the 1970s, counting sunspots came to signify something much more dramatic and nefarious about the history of science itself. In these years, John “Jack” Eddy, an astrophysicist with the National Center for Atmospheric Research, began to scour old, dusty books in library basements to resuscitate a long-forgotten episode of solar behavior, one that seemed completely at odds with the prevailing orthodox understanding of the sun. Despite the seemingly predictable vacillation in the number of sunspots every eleven years – a regularity known to exist since the mid-19th century – Eddy noticed in his records the virtual absence of sunspots between 1645 and 1715.
This curious blemish in the solar record was no small discovery. “If it really happened,” Eddy noted in one of his earliest talks on the matter, “we should recognize it as perhaps the most drastic thing that has ever happened to the sun since we began observing it and start including it in our work on the solar cycle.” (Eddy, 1974) The implication was obvious: if the sun acted regularly and predictably every eleven years or so, how was one to explain the disappearance of sunspots for almost a century?
The lack of sunspots was not Eddy’s discovery, at least not in the purest sense. What he called the Maunder Minimum had been observed almost a century earlier by the British astronomer Edward Walter Maunder, who began to publish his findings during the 1890s. To Eddy’s consternation, however, Maunder’s discovery appeared to have been forgotten by the astrophysics community. How could this be? Scientific observations and facts don’t just disappear, do they? To Eddy, the answer was that they could. A cursory glance at the matter yielded at least one possible reason why: Maunder was not vocal enough about his discovery. But further research yielded a much richer narrative, one that compelled Eddy to examine the deeply held assumptions of his own profession.
Eddy’s investigation, as it turned out, showed that Maunder was not forgotten merely because of his inability to properly disseminate his finding about sunspots, but rather because the astrophysical community had – for almost a century – allowed their preexisting assumptions to blind them to new ideas. A conspiracy had taken place, Eddy argued, one based in what appeared to be a universal belief that the sun acted regularly and predictably according to the solar cycle – what he called the principle of solar uniformitarianism.
The strength of Maunder’s observations was insufficient to break the universally-accepted canon of solar regularity. Instead of acknowledging and understanding an anomaly in solar behavior, “solar physicists have largely continued to ignore or forget the anomaly, if real,” Eddy insisted in the spring of 1976. “Some have institutionalized the solar cycle and made a profession of extending it into the past and predicting in the future; ignoring, doubting, or intentionally diluting the claims of Maunder of this skeleton in the closet of solar physics." (Eddy, 1976)
This was a dramatic claim, but one that became inextricably interwoven with Eddy’s public admonishment – if not condemnation – of professional orthodoxy within science itself. Eddy wrote about the topic, gave interviews, and addressed scientific and popular audiences – all in the hope that his tempest of activity would lead to Maunder’s long-overdue recognition. But perhaps more poignantly, Eddy portrayed himself as the detective who pulled back the curtain to reveal the biases and prejudices that prevented what he considered to be genuine scientific progress. For him, contemporary astrophysics was a stale and unstable artifice, and only through the work of pioneers like himself – and the forgotten Maunder – could one dispel the fashionable tropes that dictated popular understanding of scientific progress. As he told an audience at the Boston Museum of Science in May 1978, “In fact, much of what we know, or think we know is not that way at all. And if we have the heart and stomach to look down at it closely, is based upon a shaky and often overextended framework of assumptions – cantilevered scaffolds of bamboo poles and weathered twine.” (Eddy, 1978)
This is an important story in part because it helps to explain why Eddy spoke about sunspots with what historian Karl Hufbauer referred to as “a missionary’s zeal.” (Hufbauer, 1991) But what else does the story show? It certainly does not mean that Eddy’s pioneering work led to a wholesale abandonment of the idea that the sun (for the most part) behaves in a regular, cyclical fashion. That interpretation would be too extreme. However, it would not be too extreme to argue that he used what he considered a crime against Maunder to justify his own predilections as a scientist. Throughout his professional life, he harbored a deep skepticism toward what he saw as scientists’ proclivity for unoriginality, and he challenged others’ apparent unwillingness to probe the very depths of their own professional, and sometimes erroneous, assumptions. Eddy was comfortable opening the closet.
Eddy, John, “The Long Solar Winter,” 5 December 1974, Box 2, John Eddy Papers, National Center for Atmospheric Research (hereafter JEP).
Eddy, John, “Maunder Minimum,” 15 April 1976, Box 3, JEP.
Eddy, John, “The Changing Sun,” 28 May 1978, Box 3, JEP.
Hufbauer, Karl. Exploring the Sun: Solar Science Since Galileo. Baltimore: Johns Hopkins University Press, 1991.
It's Maunder Minimum Month at HistoricalClimatology.com. This is our first of two feature articles on the Maunder Minimum. The second, by Gabriel Henderson of Aarhus University, will examine how astronomer John Eddy developed and defended the concept.
Although it may seem like the sun is one of the few constants in Earth’s climate system, it is not. Our star undergoes both an 11-year cycle of waning and waxing activity, and a much longer seesaw in which “grand solar minima” give way to “grand solar maxima.” During the minima, which set in approximately once per century, solar radiation declines, sunspots vanish, and solar flares are rare. During the maxima, by contrast, the sun crackles with energy, and sunspots riddle its surface.
The most famous grand solar minimum of all is undoubtedly the Maunder Minimum, which endured from approximately 1645 until 1720. It was named after Edward Maunder, a nineteenth-century astronomer who painstakingly reconstructed European sunspot observations. The Maunder Minimum has become synonymous with the Little Ice Age, a period of climatic cooling that, according to some definitions, endured from around 1300 to 1850, but reached its chilliest point in the seventeenth century.
During the Maunder Minimum, temperatures across the Northern Hemisphere declined, relative to twentieth-century averages, by about one degree Celsius. That may not sound like much – especially in a year that is, globally, still more than one degree Celsius hotter than those same averages – but consider: seventeenth-century cooling was sufficient to contribute to a global crisis that destabilized one society after another. As growing seasons shortened, food shortages spread, economies unraveled, and rebellions and revolutions were quick to follow. Cooling was not always the primary cause of contemporary disasters, but it often played an important role in exacerbating them.
Many people – scholars and journalists included – have therefore assumed that any fall in solar activity must lead to chillier temperatures. When solar modelling recently predicted that a grand solar minimum would set in soon, some took it as evidence of an impending reversal of global warming. I even received an email from a heating appliance company that encouraged me to hawk their products on this website, so our readers could prepare for the cooler climate to come! Of course, the warming influence of anthropogenic greenhouse gases will overwhelm any cooling brought about by declining solar activity.
In fact, scientists still dispute the extent to which grand solar minima or maxima actually triggered past climate changes. What seems certain is that especially warm and cool periods in the past overlapped with more than just variations in solar activity. Granted, many of the coldest decades of the Little Ice Age coincided with periods of reduced solar activity: the Spörer Minimum, from around 1450 to 1530; the Maunder Minimum, from 1645 to 1720; and the Dalton Minimum, from 1790 to 1820. However, one of the chilliest periods of all – the Grindelwald Fluctuation, from 1560 to 1630 – actually unfolded during a modest rise in solar activity. Volcanic eruptions, it seems, also played an important role in bringing about cooler decades, as did the natural internal variability of the climate system. Both the absence of eruptions and a grand solar maximum likely set the stage for the Medieval Warm Period, which is now more commonly called the Medieval Climate Anomaly.
This gets to the heart of what we actually mean when we use a term like “Maunder Minimum” to refer to a period in Earth’s climate history. Are we talking about a period of low solar activity? Or are we referring to an especially cold climatic regime? Or are we talking about chilly temperatures and the changes in atmospheric circulation that cooling set in motion? In other words: what do we really mean when we say that the Maunder Minimum endured from 1645 to 1720? How does our choice of dates affect our understanding of relationships between climate change and human history in this period?
To find an answer to these questions, we can start by considering the North Sea region. This area has yielded some of the best documentary sources for climate reconstructions. They allow environmental historians like me to dig into exactly the kinds of weather that grew more common with the onset of the Maunder Minimum. In Dutch documentary evidence, for example, we see a noticeable cooling trend in average seasonal temperatures that begins around 1645. On the surface of things, it seems like declining solar activity and climate change are very strongly correlated.
And yet, other weather patterns seem to change later, one or two decades after the onset of regional cooling. Weather variability from year to year, for example, becomes much more pronounced after around 1660, and that erraticism is often associated with the Maunder Minimum. Severe storms grew more frequent only by the 1650s or perhaps the 1660s, and such storms, too, are linked to the Maunder Minimum climate. In the autumn, winter, and spring, easterly winds – a consequence, perhaps, of a shift in the phase of the North Atlantic Oscillation – increased at the expense of westerly winds in the 1660s, not twenty years earlier.
A depiction of William III boarding his flagship prior to the Glorious Revolution of 1688. Persistent easterly, "Protestant" winds brought William's fleet quickly across the Channel, and thereby made possible the Dutch invasion of England. For more, read my forthcoming book, "The Frigid Golden Age." Source: Ludolf Bakhuizen, "Het oorlogsschip 'Brielle' op de Maas voor Rotterdam," 1688.
All of these weather conditions mattered profoundly for the inhabitants of England and the Dutch Republic: maritime societies that depended on waterborne transportation. Rising weather variability made it harder for farmers to adapt to changing climates, but often made it more profitable for Dutch merchants to trade grain. More frequent storms sank all manner of vessels but sometimes quickened journeys, too. Easterly winds gave advantages to Dutch fleets sailing into battle from the Dutch coast, but westerly winds benefitted English armadas. If we define the Maunder Minimum as a climatic regime, not (just) a period of reduced sunspots, and if we care about its human consequences, what should we conclude? Did the Maunder Minimum reach the North Sea region in 1645, or 1660?
These problems grow deeper when we turn to the rest of the world. Across much of North America, temperature fluctuations in the seventeenth century did not closely mirror those in Europe. There was considerable diversity from one North American region to another. Tree ring data suggest that northern Canada experienced the cooling of the Maunder Minimum. Western North America also seems to have been relatively chilly in the seventeenth century, although chillier temperatures there probably did not set in during the 1640s.
By contrast, cooling was moderate or even non-existent across the northeastern United States. Chesapeake Bay, for instance, was warm for most of the seventeenth century, and only cooled in the eighteenth century. Glaciers advanced in the Canadian Rockies not in the seventeenth century, but rather during the early eighteenth century. Their expansion was likely caused by an increase in regional precipitation, not a decrease in average temperatures.
Still, the seventeenth century was overall chillier in North America than the preceding or subsequent centuries, and landmark cold seasons affected both shores of the Atlantic. The consequences of such frigid weather could be devastating. The first settlers to Jamestown, Virginia had the misfortune of arriving during some of the chilliest and driest weather of the Little Ice Age in that region. Crop failures contributed to the dreadful mortality rates endured by the colonists, and to the brief abandonment of their settlement in 1610.
Moreover, many parts of North America do seem to have warmed in the wake of the Maunder Minimum, in the eighteenth century. This too could have profound consequences. In the seventeenth century, settlers to New France had been surprised to discover that their new colony was far colder than Europe at similar latitudes. They concluded that its heavy forest cover was to blame, and with good reason: forests do create cooler, cloudier microclimates. Just as the deforestation of New France started transforming, on a huge scale, the landscape of present-day Quebec, the Maunder Minimum ended. Settlers in New France concluded that they had civilized the climate of their colony, and they used this as part of their attempts to justify their dispossession of indigenous communities.
Despite eighteenth-century warming in parts of North America, the dates we assign to the Maunder Minimum do look increasingly problematic when we look beyond Europe. If we turn to China, we encounter a similar story. Much of China was actually bitterly cold in the 1630s and early 1640s, before the onset of the Maunder Minimum elsewhere. This, too, had important consequences for Chinese history. Cold weather and precipitation extremes ruined crops on a vast scale, contributing to crushing famines that caused particular distress in overpopulated regions. The ruling Ming Dynasty seemed to have lost the “mandate of heaven,” the divine sanction that, according to Confucian doctrine, kept the weather in check. Deeply corrupt, riven by factional politics, undermined by an obsolete examination system for aspiring bureaucrats, and scornful of martial culture, the regime could adequately address neither widespread starvation, nor the banditry it encouraged.
Climatic cooling caused even more severe deprivations in neighboring, militaristic Manchuria. There, the solution was clear: to invade China and plunder its wealth. The first Manchurian raid broke through the Great Wall in 1629, a warm year in other parts of the Northern Hemisphere. Ultimately, the Manchus capitalized on the struggle between Ming and bandit armies by seizing China and founding the Qing (or "Pure") Dynasty in 1644.
China under the Ming Dynasty was arguably the most powerful empire of its time. Even as it unravelled in the early seventeenth century, its cultural achievements were impressive, as this painting of fog makes clear. Source: Anonymous, "Peach Festival of the Queen Mother of the West," early 17th century.
This entire history of cooling and crisis predates the accepted starting date of the Maunder Minimum. Yet, the fall of the Ming Dynasty unfolded in one relatively small part of present-day China. Average temperatures in that region reached their lowest point in the 1640s. By contrast, average temperatures in the Northeast warmed by the middle of the seventeenth century. Average temperatures in the Northwest also warmed slightly during the mid-seventeenth century, and then cooled during the late Maunder Minimum.
Smoothed graphs that show fluctuations in average temperature across centuries or millennia give the impression that dating decade-scale warm or cold climatic regimes is an easy matter. Actually, attempts to precisely date the beginning and end of just about any recent climatic regime are sure to set off controversy. This is not only because global climate changes have different manifestations from region to region, but also because climate changes, as we have seen, involve much more than shifts in average annual temperature. Did the Maunder Minimum reach northern Europe, for instance, when average annual temperatures declined, when storminess increased, when annual precipitation rose or fell, or when weather became less predictable?
Historians such as Wolfgang Behringer have argued that, when dating climatic regimes, we should also consider the “subjective factor” of human reactions to weather. For historians, it makes little sense to date historical periods according to wholly natural developments that had little impact on human beings. Maybe historians of the Maunder Minimum should consider not when temperatures started declining, but rather when that decline was, for the first time, deep enough to trigger weather that profoundly altered human lives. When we consider climate changes in this way, we may be more inclined to subjectively date climatic regimes using extreme events, such as especially cold years, or particularly catastrophic storms. Dating climate changes with an eye to human consequences does take historians away from the statistical methods and conclusions pioneered by scientists, but it also draws them closer to the subjects of historical research.
In my work, I do my best to combine all of these definitions, and incorporate many of these complexities. I date climatic regimes by considering their cause – solar, volcanic, or perhaps human – and by working with statisticians who can tell me when a trend becomes significant. However, I also try to consider the many different kinds of weather associated with a climatic shift, and the consequences that extremes in such weather could have for human beings.
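As an illustration of what “when a trend becomes significant” can involve, here is one simple approach: a least-squares change-point search, which asks which single break year best divides a temperature series into two segments with different means. This is only a sketch with a synthetic series – it is not necessarily the method such statisticians would actually apply:

```python
import random

random.seed(1)
years = list(range(1600, 1701))
# Synthetic anomalies: near zero before 1660, roughly 0.5 C colder after.
temps = [(-0.5 if y >= 1660 else 0.0) + random.gauss(0, 0.25) for y in years]

def sse(xs):
    """Sum of squared deviations from the segment mean."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs)

# Try every candidate break year and keep the one that lets a two-mean
# model fit the series best (lowest total squared error).
best_year, best_cost = None, float("inf")
for i in range(5, len(years) - 5):  # enforce a minimum segment length
    cost = sse(temps[:i]) + sse(temps[i:])
    if cost < best_cost:
        best_year, best_cost = years[i], cost

print(f"Best single break year: {best_year}")  # should land near 1660
```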
As you might expect, this is not always easy. I have long held that the Maunder Minimum, in the North Sea region, began around 1660. Increasingly, I find it easier to begin with the broadly accepted date of 1645, but distinguish between different phases of the Maunder Minimum. An earlier phase marked by cooling might have started in 1645, but a later phase marked by much more than cooling took hold around 1660.
These are messy issues that yield messy answers. Yet we must think deeply about these problems. Not only can such thinking affect how we make sense of the deep past, but it can also provide new perspectives on modern climate change. When did our current climate of anthropogenic warming really start? At what point did it start influencing human history, and where? What can that tell us about our future? These questions can yield insights on everything from the contribution of climate change to present-day conflicts, to the timing of our transition to a thoroughly unprecedented global climate, to the urgency of mitigating greenhouse gas emissions.
Behringer, Wolfgang. A Cultural History of Climate. Cambridge: Polity Press, 2010.
Brooke, John. Climate Change and the Course of Global History: A Rough Journey. Cambridge: Cambridge University Press, 2014.
Coates, Colin, and Dagomar Degroot. “‘Les bois engendrent les frimas et les gelées’: comprendre le climat en Nouvelle-France.” Revue d'histoire de l'Amérique française 68:3-4 (2015): 197-219.
Degroot, Dagomar. “‘Never such weather known in these seas’: Climatic Fluctuations and the Anglo-Dutch Wars of the Seventeenth Century, 1652–1674.” Environment and History 20:2 (May 2014): 239-273.
Eddy, John A. “The Maunder Minimum.” Science 192:4245 (1976): 1189-1202.
Parker, Geoffrey. Global Crisis: War, Climate Change and Catastrophe in the Seventeenth Century. London: Yale University Press, 2013.
White, Sam. “Unpuzzling American Climate: New World Experience and the Foundations of a New Science.” Isis 106:3 (2015): 544-566.
Dr. Alan MacEachern, University of Western Ontario.
Western's Archives and Research Collections Centre (ARCC) storage room. Photo by Gabrielle Bossy.
In 2008, I had a meeting at the Environment Canada headquarters in Downsview, Ontario, and afterward staff gave me a tour. Since I’m a historian, they showed me the old stuff. Down in the basement – not quite the warehouse scene at the end of Raiders of the Lost Ark, but close enough – they led me along row after row of weather observations: all of the original paper forms and registers that since 1840 had been filled out by what would eventually be thousands of observers at thousands of weather stations across Canada. Environment Canada had long ago squeezed the quantitative data they wanted from the observations, and from it created an online National Climate Data and Information Archive. That may have actually put the physical collection more at risk; a teary librarian told me she worried she would return from vacation someday and find it had been thrown out. Staff were maintaining the collection as best they could, but they knew the facility was not up to archival standards – a massive steam pipe loomed menacingly nearby – and they were concerned about the lack of a long-term plan for it. The collection should rightly have gone to Library and Archives Canada (LAC), but in earlier decades the archives had expressed no interest in it and more recently had experienced an acquisitions freeze.
So without any real plan, let alone authorization, I offered to take the collection off Environment Canada’s hands.
Environment Canada weather stations, 1840-1960. Visualization by Josh MacFadyen, Arizona State University.
At the time, I was a dyed-in-the-wool environmental historian increasingly feeling that I had somehow neglected the most pressing environmental issue of our time, climate change. Helping protect a nationally-significant climate history collection seemed like good karma.
I went straight from Environment Canada to my university archives. Thankfully, a few years earlier the archives had moved into a new building containing a high-density storage module capable of holding one million volumes. Thankfully, too, University Archivist Robin Keirstead was excited by the idea of having the collection come to Western University, where it could be better preserved, made more accessible to researchers, and made available for teaching purposes. Robin and I formally contacted Environment Canada and LAC, expressing Western’s interest in receiving the collection.
It took years of negotiation, but what ultimately made the transfer happen was that some folks at Environment Canada thought these old records were priceless and others thought they were worthless, so both concluded it would be great if they were at Western.
In 2014, the collection arrived at Western on long-term loan – here is a full listing of it. There are several hundred volumes of correspondence, letterbooks, and journals related to Canadian meteorological and climatological history between 1828 and 1967. But the real jewels of the collection are the almost 900 archival boxes (an estimated 1.6 million pages) containing all of Environment Canada’s extant daily weather observations between 1840 and 1960. From what we could determine, this was the largest archival arrangement ever made between a Canadian university and the federal government.
Mission accomplished. …But now what?
“Super salubrious.” Howard D. Sloat, Jarvis, Ontario, August 1954, EC151, Environment Canada collection.
This was already a good news story as far as I was concerned, because the Environment Canada collection would now be protected at archival standards indefinitely (presumably, until LAC is in a position to take it). But now that it was at my university, I wanted to see it used. I advertised its availability to researchers across Canada. I developed a climate history course that utilized it. And I considered what contributions neophyte climate history researchers – like my students, like me – could make with it.
To begin, we are focusing on the qualitative remarks that observers included alongside their quantitative data. Although Environment Canada long encouraged (or, in some eras, tolerated) observers’ remarks on such matters as extreme weather, farming conditions, and changing seasons, it had never figured out a way to utilize these remarks, including in its climate archive. These qualitative data remained untapped.
Students and I are working to change that. In the past year, we have begun creating a database of remarks from the collection. We are transcribing everything the observers thought worth observing (with the important exception that we are ignoring the hundreds of thousands of entries such as “Clear,” “Fair,” or “Rain”). There are many entries on crop conditions and the status of harvests, on smoke from forest fires, on the northern lights, and on matters of local political or social interest. There are also many entries that offer insights into the history of the meteorological service itself.
“Hard maple in blossom. Oriole return. Swallows return. English Cherry blossom. Canaries return. Ice 3/16 inches ground. Orchards in blossom. Forest well leaved out. Fire flies seen. Crops all look well except corn it is yellow with the cold and wet.” Malcolm McDonald, Lucknow, Ontario, May 1902, EC172, Environment Canada collection.
But of special interest – both to the observers and to us – is phenological information. Phenology is the study of cyclical natural phenomena, and weather observers documented, often over the course of decades, the dates of ice break-up and freeze-up on rivers and lakes, when the first of various bird species appeared, when wildflowers bloomed, when spring peepers emerged. The observers were especially vigilant during what might be called the “phenological moment” of the late 19th and early 20th century, when Canadian individuals and learned societies became intent on gathering such information as a means of gaining biological and meteorological knowledge about their nation. With historians and climate scientists today seeking to verify older meteorological observations and to understand other ways of knowing climate, these observations assume new significance.
The database that Western History students and I are creating already has tens of thousands of tagged entries. In the near future, we will shift to the creation of a website that allows for geographical, temporal, and thematic searching of these observations, at micro- to macro-scales. Interested in Ajax, Ontario, or in all of Canada? In your birthday or in a fifty-year timespan? In reports on earthquakes, orioles, or lilacs, or on all extreme weather, all fauna, all flora? We certainly hope to use this for research purposes, but our project’s ultimate goal is to make these observations available to climate researchers, and to the public, so that they can make findings of their own. More good karma – climate research requires it.
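To make those search facets concrete, here is a sketch of what one tagged record might look like, using the Lucknow remark quoted above. The field names and tagging vocabulary are invented for illustration; they are not the project’s actual schema:

```python
from dataclasses import dataclass, field

@dataclass
class Remark:
    """One transcribed observer remark, carrying the station, date, text,
    and thematic tags that make geographic, temporal, and thematic search
    possible. All field names here are hypothetical."""
    station: str
    province: str
    year: int
    month: int
    text: str
    tags: list = field(default_factory=list)

db = [
    Remark("Lucknow", "Ontario", 1902, 5,
           "Hard maple in blossom. Oriole return. Swallows return.",
           ["phenology", "flora", "fauna"]),
]

# A thematic + geographic query: all Ontario remarks tagged as phenology.
hits = [r for r in db if r.province == "Ontario" and "phenology" in r.tags]
print(len(hits))  # -> 1
```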
Contact me at firstname.lastname@example.org if you have questions about the Environment Canada collection or research access to it.
Dr. Tim Newfield, Princeton University.
The sun, dimmed by fog. Source: 0xefbeadde.wordpress.com
The June 1991 Pinatubo eruption in the Philippines was one of the largest volcanic eruptions of the twentieth century. It is well documented. There are living witnesses, newspaper articles, detailed surveys of the mountain before and after it blew its top, and satellite maps of the ejecta. The eruption was photographed from the ground and the air, and today you can even YouTube it.
Pinatubo released up to 20 megatons of sulphur dioxide as high as 35 kilometers into the sky. The gas turned into fine sulphuric acid aerosols that, within weeks, enveloped much of the Earth. The aerosols were suspended in the atmosphere for around two years. While there, they "veiled" the sun by absorbing or "backscattering" solar radiation. That heated the stratosphere but cooled Earth's surface. The volcano caused a sudden (but non-uniform) fall in average global temperatures of at least .5 degrees Celsius that persisted into late 1992. In the Northern Hemisphere, temperatures in summer 1992 fell by about 2 degrees Celsius.
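For a rough sense of how such an eruption translates into half a degree of cooling, consider a back-of-envelope estimate. Neither figure comes from this article: a peak global-mean radiative forcing of roughly -3 W/m² is a commonly cited estimate for Pinatubo, and the short-term (transient) sensitivity used here is only an order-of-magnitude assumption:

$$\Delta T \;\approx\; \lambda_{\text{transient}} \,\Delta F \;\approx\; 0.17\ \mathrm{K\,(W/m^{2})^{-1}} \times \bigl(-3\ \mathrm{W/m^{2}}\bigr) \;\approx\; -0.5\ \mathrm{K}$$

The climate system’s equilibrium sensitivity is considerably larger, but a roughly two-year aerosol pulse never approaches equilibrium, which is why the realized cooling is modest relative to the forcing.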
Pinatubo on 12 June 1991, a few days before the big eruption. The mountain shrank 300 meters in the 15 June explosion. Lonely Planet’s Philippines describes a hike up the mountain as ‘most accessible’, but warns not to attempt it in ‘dodgy weather’. Source: USGS.
Earlier and much larger volcanic eruptions in the late Holocene are more obscure. Take Tambora, which erupted in April 1815 and pumped 60 to 110 megatons of sulphur dioxide into the air, leading to the infamous ‘Year Without a Summer’ of 1816. No volcanologist or climate scientist doubts it dwarfed Pinatubo, but far less is known about the earlier eruption. There are fewer firsthand accounts, and no films or photos (though some argue Joseph Turner and other artists captured its far-reaching atmospheric effects). While the available instrumental data are useful, they are limited and local. Nevertheless, scientists have determined that it was one of the biggest volcanic episodes of the last several thousand years. A recent study estimated that it resulted, in some regions, in 2 to 4 degrees Celsius of cooling from June to August 1816.
There were other large events, deeper in the historical past. Yet these episodes are far more mysterious. Often the culpable volcano (or volcanoes) is not known, and firsthand accounts (if any) are more than vague: they are cryptic.
For example, something traumatic appears to have affected the world in around the year 536 CE. The five reports that survive for this "536 event" say nothing of an eruption. They merely describe in vague terms a sort of unusual sun dimming or atmospheric veiling. The Roman statesman Cassiodorus, for example, describes a dim moon, and a sun that lost its "wonted light" and appeared "bluish," as if in "transitory eclipse throughout the whole year."
These reports leave room to doubt that the phenomenon they describe was really volcanic in origin. Their mysteriousness, however, has spurred intense interest from scholars and enthusiasts since the phenomenon first appeared in the pages of the Journal of Geophysical Research in 1983. NASA geoscientists Richard Stothers and Michael Rampino discovered a stratosphere-clouding volcanic episode tucked away in four (but, by 1988, five) late antique texts. They also found it in sulphate in Greenlandic ice, and they discovered pumice-lodged wood on Rabaul, a volcano in Papua New Guinea, which they dated to 540 ±90 CE (meaning give or take 90 years).
Since 1983, much has changed. Rabaul has long been ruled out. Even before it came to seem that the dust veil witnessed (inconsistently) over the Mediterranean was not a volcanic dust veil, but instead some sort of "damp fog," the mountain was considered an unlikely source. In the 1980s, assessments of Antarctic ice did not turn up major mid-sixth-century volcanism, but rather a signal from about 505 CE. That exonerated all Southern Hemispheric volcanoes from causing the 536 event. Rabaul’s eruption chronology was re-dated with greater precision at least twice within eleven years, and it was determined that the 540 ±90 date was, in fact, an uncalibrated mix-up of the ages originally returned for the pumiceous wood. Rabaul actually exploded sometime in the interval of 633-670 CE, or (as of 2015) 667-699 CE.
Other volcanoes got their share of attention too. Before Rabaul, the Greenlandic sulphates were associated with the great ‘White River Ash’ eruption of Alaska’s Mount Churchill, which in 1975 was roughly dated to 700 ±100 CE, but in 2014 to 833-850 CE. They were also loosely associated with Iceland’s Eldgjá, which is well known for erupting in the 930s. Later, they were tied to the Chiapanecan El Chichón, Indonesia’s infamous Krakatoa, the now-dormant stratovolcano Haruna, and the Salvadoran Ilopango. The latter received considerable press in 2010, when palaeoecologist Robert Dull asserted that its ‘paroxysmal’ Tierra Blanca Joven event, considered the largest Central American eruption of the last 84,000 years and previously given third- and fifth-century dates, actually caused global cooling in 536 CE.
El Salvador’s largest and deepest (crater) lake, Lago de Ilopango. A survey of archaeological excavations suggested a 100-kilometre radius around the site was little- or un-inhabitable for a century after the eruption (whether that happened in 535/536 CE or not). A recent edition of Frommer’s Central America describes a visit to the (always) warm-water caldera as "overrated." Lago de Coatepeque, another volcanic lake 50 kilometres to the west, is preferable. Source: NASA Earth Observatory
Yet for a while after 1983, scientists could find no eruptions in 536 CE. The original ice dates of 540 ±10 and c. 535 CE that Stothers and Rampino used to explain the abnormal Byzantine veiling were adjusted in 1984, at around the same time that Stothers’s second, more influential article on a volcanic 536 event appeared in Science. This does not now seem surprising. The dates that scientists have given for most first-millennium eruptions have shifted back or forward in time at some point or another. Analyses of the remnants of eruptions in eruption-site sediments often produce ages that disagree by a half century or more. Studies of sulphate layers in ice cores also vary: by a couple of years in some cases, by a decade or five in others.
For more than a decade after 1983, it seemed that the 536 event had other causes. Explanations were diverse. Some held that the clouding Procopius and his peers had witnessed was tropospheric and regional, not a stratospheric phenomenon of hemispheric or global proportions. Volcanism that was local and remarkable, but globally inconsequential, was the cause of some kind of low-hanging ‘damp fog’.
Others held firm: volcano or no volcano, the event was global. Oceanic outgassing, an interstellar cloud, and an asteroid or comet impact event were proposed. The latter, advanced in the early ‘90s, was not immediately popular. Some scholars considered an impactor a "much less likely" explanation for the 536 event than a major volcanic eruption, despite the then-complete lack of evidence for such an eruption. Yet the impact theory eventually gained some credibility. Different types of rocks and impacts were envisioned. A comet might have "air-burst" in the upper atmosphere and ignited one or more vast forest fires, or alternatively a "medium-sized asteroid" might have struck an ocean and thrown marine aerosols into the stratosphere. The impact of a comet less than one kilometer in diameter could have loaded the sky with enough debris to generate multiple successive years of cooling. Even after volcanic eruptions could again be convincingly tied to 536 CE cooling, some scientists argued that an asteroid 640 metres in diameter crashed into Australia, compounding the chilling effect of volcanic eruptions and carving out the Gulf of Carpentaria.
The impactor theory failed to convince many for long. Michael Baillie, a tree ring expert (or dendrochronologist) who first advocated the theory in a 1994 article, sided with volcanic explanations after glaciologist Lars Larsen and his team found evidence for a major eruption in multiple ice cores at both poles. This big, low-latitude, tropical event was assigned a date of 533/534 ±2 CE. It seemed to explain why the "sun’s rays," according to John of Ephesus, "were visible for only two or three hours a day" in 536/37 CE. Larsen also drew attention to "an even larger" Northern Hemisphere deposit, which he dated to 529 ±2 CE. This may not have seemed important at the time, since no written sources suggest anything strange about 529 CE. Yet, only months later, Baillie drew on a growing quantity of tree ring data to suggest that both newly discovered eruptions be moved forward by six or seven years. This adjustment offered an explanation for the unusual tree-ring signals he had highlighted in the early 1990s.
Tree ring data significantly altered scientific understandings of what happened in the sixth century. Independently of texts and ice, tree rings suggest a major disturbance in 536 CE. Unknown to Stothers and Rampino in the 1980s, they give perhaps the best record of the sixth-century event: annual information with an objectivity that sixth-century historians cannot match, and, taken together, a temporal and spatial "awareness" no written source can rival.
Mediterranean texts describe the 536 event as 12 or perhaps 18 months long, but Baillie surveyed trees from Ireland, Germany, Scandinavia and the U.S.A. that clearly show the event lasted for roughly a decade. Tree rings also demonstrate that the 536 event was not a Byzantine oddity. Rather, it was vast: hemispheric or even global. Trees also reveal not one steady stretch of poor growth but a marked departure from normal growing conditions, with acute troughs and peaks. Some scholars therefore believed that a cluster of stratosphere-clouding phenomena was to blame, not a single cataclysm. The first nadir was in 536-537 CE, while the second, and more pronounced, was in 540-541 CE. More recent tree ring studies have highlighted a third low in 546-547 CE. This one, and another in the early 550s, were already visible in Baillie’s original work, but they were not much discussed.
Over the last twenty years, tree ring studies have confirmed that the 536 event was hemispheric, and at points global, and that it lasted for more than a decade. Multiple tree ring temperature reconstructions have found that several of the coldest growing seasons (typically June-August) of the last two (or, in some cases, seven-and-a-half) thousand years fall within the sixth-century downturn.
A few examples: a 1993 paper identified 536, 535, and 541 CE as the second, third, and fourth-coldest growing seasons in a 2,000-year-long chronology from the Sierra Nevada. A 2001 paper used a Mongolian tree ring series that was nearly as long, and found unusually chilly temperatures from 536 to 545 CE, with low points in 536 and 543 CE. A 2015 study used a composite Northern Hemisphere chronology stretching back to 500 BCE, and established the successive decades of 536-545 and 546-555 as the coldest and tenth-coldest decades in the series. According to the same series, six of the thirteen coldest years between 500 BCE and 1250 CE happened during the sixth-century climatic downturn.
The "Baillie bump," the forward-pushing of Larsen's eruptions (and now most first millennium eruptions detected in ice), placed major volcanism at each of the cooling episodes identified in tree ring data. Michael Sigl and a team of scientists recently included these results within an important synthesis of glacial volcanic eruption chronologies. It is still not clear which volcanoes erupted in 535/536 and 539/540 CE, but a cluster of volcanoes seem to have caused the downturn.
Still, there may be room to doubt whether Cassiodorus and company took in a hemispheric event in 536 CE. They may well have witnessed a local disturbance. Procopius has Vesuvius bubbling, but not erupting, in 536 CE. Whether this ‘extinguisher of all things green’ erupted around then – or perhaps another nearby mountain did – we do not know. Minor, nearby volcanism may have coincided with a much larger, distant eruption. One would have veiled Mediterranean skies, while the other marked the world’s trees. Tree rings from Constantinople’s hinterland may support this theory, since they fail to reflect a major change in growth from 536 to 550 CE.
Of course, it may still be that an impactor near-simultaneously fell to Earth from space. Dallas Abbott and her team have recently found iron oxide, silicate spherules, and other ejecta indicators in the melt-water of a portion of a sixth-century Greenlandic ice core. They interpreted a high concentration of calcium as calcium carbonate, a main component of seashells, and detected tropical aquatic microfossils: a first for Greenlandic ice. This, they argue, is evidence for an impact at sea, which then sent marine aerosols into the stratosphere.
For years, the 536 event or 536-550 CE downturn figured as a particularly cold stretch (in fact the coldest) in a long cool phase that set in more than a century before 536 CE and has many names: the "Vandal Minimum," the "Early Medieval Cold Period," or the "Migration Period Pessimum." Very recently, a multidisciplinary study concluded that the 536-550 event triggered a longer cold period within this minimum. They call it the "Late Antique Little Ice Age," and argue that it was possibly even chillier and more unstable than the better-known early modern Little Ice Age.
Did this cooling have profound consequences for sixth-century societies? Maybe, yet historians came to the 536 event rather late. In 2005, historian Antti Arjava wrote an interdisciplinary appraisal of the evidence for a sixth-century cooling event. Aside from Arjava, the few historians who have wrestled with the clouding have not attempted a complete or current synthesis of the written and scientific evidence. Arjava's paper has therefore served as the main conduit through which historians and archaeologists encounter the science surrounding the 536 event. However, Arjava wrote his paper in the years when scientists could not match the event with a volcanic eruption. The paper plays up the cloud’s mysteriousness, and diminishes its extent and impact. A reading of John the Lydian’s account, one fuller and closer than that offered by Stothers, led to the conclusion that the event was specific to the Mediterranean, more a fog than a veil, and damp, not dry. That, and the lack of consistent evidence for poor harvests and food shortage in the 530s, suggested the cloud had little effect on contemporary societies.
Much has changed since 2005. It is more difficult now to diminish the downturn, or to doubt that it triggered a marked, though temporary, demographic contraction in many regions of the world through its effects on plants. However, minimalist readings remain popular. They are still, if mostly through Arjava, a reaction to a pair of catastrophist books on 536 published in 1999 by Keys (Catastrophe) and Baillie (Exodus to Arthur: Catastrophic Encounters with Comets). The books argued for far-reaching and at times unfathomable historical consequences from the mystery clouding, from Teotihuacan’s fall to China’s reunification, from Islam’s emergence and Charlemagne’s birth to England’s colonization of North America and Japan’s modern nation state. A reluctance to engage with the palaeoclimate sciences and a willingness to write nature out of history have allowed historians to dismiss the significance of the 536 event for contemporary peoples.
Recently, more scientifically-minded historians, such as Michael McCormick, have offered more appropriate (if maximalist-leaning) narratives, in which cooling had moderate implications for sixth-century peoples. A vast, near-unparalleled environmental event need not have cataclysmic consequences to warrant study. Histories of resilience and adaptation to sudden and dramatic climate change should be as important and intriguing as histories of failure and collapse. This is clear in new work on the effects of the downturn, from the Yucatán to Fennoscandia, which emphasizes coping strategies and a certain hardiness in those who lived beneath the veils.
Mayan Calakmul's largest structure, Structure II. With roughly 6,200 constructions spread over about 30 square kilometres in the late Classic period, the city of Calakmul (now in Mexico's Campeche State) experienced rapid growth during the sixth-century "hiatus." This interval of debated tumultuousness between the early and late Classic phases saw a leveling off (or decline) in stelae and monumental building at several Mayan locales, as well as (perhaps dramatic) population contraction. Richardson Gill (in his The Great Maya Droughts) argued that the downturn caused this break in activity and, drawing on palaeoclimatology from other world regions, assigned the hiatus a firm start date of 536 CE. Built atop a preclassical stucco-decorated plaza, Structure II was an important building throughout the Classic period. Fodor's Cancún, Cozumel, Yucatán Peninsula recommends a stop at the "vast" and "lovely" but little-visited Calakmul, which in its "heyday" (between 542 and 695 CE) numbered at least 50,000 people. A climb up the pictured pyramid allows for a "soaring vista."
Although not everyone would have come out from under the dust worse off, it is important not to let the pendulum swing back too far. After all, there are indications from across Eurasia of subsistence crises. Read together, these reports suggest a rather uneven occurrence of downturn-triggered crop failure and genuine famine. That clouding density and duration undoubtedly varied, and that people were not everywhere equally vulnerable, might account for this patchiness. So too might the concurrence of other natural and cultural pressures in some areas.
It should be emphasized that large eruptions do not simply chill the world. Their effects on weather and climate are non-uniform: they are regional and can differ markedly, as Pinatubo and Tambora have shown. Tropical eruptions, such as the 539/540 event, also exert a different force on climate than high-latitude Northern Hemispheric ones, like the 535/536 event. For instance, major near-equatorial volcanism is known to cause winter warming in North America, Europe, and Russia, but winter cooling in Western and Eastern Asia, whereas extratropical Northern Hemispheric volcanism cools hot and cold seasons alike. Seasonality matters too. That high-latitude eruptions seem to be more impactful when they occur in summer, and that the 535/536 event was certainly impactful, could indicate that the eruption happened in that season.
A few contemporary reports of despair and devastation seem hyperbolic. Did Italian mothers really eat their daughters? Did three quarters of the population north of the Yellow River really die off? Yet neither these accounts, nor less sensational descriptions, should be written off as lacking any grounding in the immediate post-eruption reality. Most sixth-century societies were able to absorb one bad year; very few were able to absorb two or three. Back-to-back(-to-back) years of poor growing conditions, caused by a sharp cooling of average temperatures, were certain to take a toll.
An eruption cluster - multiple Tambora-like events within a few years of each other - caused the mid-sixth-century downturn. Whether an impactor roughly coincided with it is uncertain, as are the variability of the two eruptions' effects on climate and the extent and regionality of the loss of life. Sixth-century cooling may well have helped cause the outbreak of the "Plague of Justinian" - the so-called "First Bubonic Plague Pandemic" - with profound demographic consequences. This link, and other enduring mysteries of the sixth-century downturn, will be the subject of a future article on this site.
Dr. Newfield will write a synthesis of the scholarship on sixth-century cooling, like this one but more complete, for The Palgrave Handbook of Climate History, edited by Franz Mauelshagen, Christian Pfister, and Sam White.
D. Abbott et al., ‘What Caused Terrestrial Dust Loading and Climate Downturns between A.D. 533 and 540?’ Geological Society of America Special Papers 505 (2014).
A. Arjava, ‘The Mystery Cloud of 536 CE in the Mediterranean Sources’ Dumbarton Oaks Papers 59 (2005).
M. Baillie, ‘Dendrochronology Raises Questions About the Nature of the AD 536 Dust Veil Event’ The Holocene 4 (1994).
M. Baillie, ‘Proposed Re-Dating of the European Ice Core Chronology by Seven Years Prior to the 7th Century AD’ Geophysical Research Letters 35 (2008).
U. Büntgen et al., ‘2500 Years of European Climate Variability and Human Susceptibility’ Science 331 (2011).
U. Büntgen et al., ‘Cooling and Societal Change during the Late Antique Little Ice Age from 536 to around 660 AD’ Nature Geoscience 9 (2016).
B. Dahlin and A. Chase, ‘A Tale of Three Cities: Effects of the AD 536 Event in the Lowland Maya Heartland’ in G. Iannone ed., The Great Maya Droughts in Cultural Context: Case Studies in Resilience and Vulnerability (University Press of Colorado, 2014).
R. Dull et al., ‘Did the Ilopango TBJ Eruption Cause the 536 Event?’ American Geophysical Union Fall Meeting 2010 Abstract V13C-2370.
C. Hammer et al., ‘Greenland Ice Sheet Evidence of Post-Glacial Volcanism and its Climatic Impact’ Nature 288 (1980).
L. Larsen et al., ‘New Ice Core Evidence for a Volcanic Cause of the A.D. 536 Dust Veil Event’ Geophysical Research Letters 35 (2008).
J. Luterbacher and C. Pfister, ‘The Year Without a Summer’ Nature Geoscience 8 (2015).
M. McCormick et al., ‘Climate Change During and After the Roman Empire: Reconstructing the Past from Scientific and Historical Evidence’ Journal of Interdisciplinary History 43 (2012).
C. McKee et al., ‘A Revised Age of AD 667-699 for the Latest Major Eruption at Rabaul’ Bulletin of Volcanology 77 (2015).
E. Rigby et al., ‘A Comet Impact in AD 536?’ Astronomy and Geophysics 45 (2004).
A. Robock, ‘Pinatubo Eruption: The Climatic Aftermath’ Science 295 (2002).
A. Robock, ‘Volcanic Eruptions and Climate’ Reviews of Geophysics 38 (2000).
M. Sigl et al., ‘Timing and Climate Forcing of Volcanic Eruptions for the Past 2,500 Years’ Nature 523 (2015).
R.B. Stothers and M.R. Rampino, ‘Volcanic Eruptions in the Mediterranean before A.D. 630 from Written and Archaeological Sources’ Journal of Geophysical Research 88 (1983).
R.B. Stothers, ‘Mystery Cloud of AD 536’ Nature 307 (1984).