Christopher S. Kelly, Brown University, The Dwight-Englewood School.
The Lukonzo word [spoken by the Bakonzo people] for their place, Rwenzururu, first misheard and mis-transcribed by H.M. Stanley in 1889, means the Place of Snow; and whether as the reality or the symbol, snow, Nzururu, is and remains the presiding deity.
Mount Emin. Mount Baker. Mount Stanley. It is rare for a location to excite so many disparate sensibilities, but the post-colonial scholar, glaciologist, botanist, and climate scientist find themselves welcome bedfellows in the Rwenzori Mountains in tropical central Africa, straddling the border between Uganda and the Democratic Republic of Congo (DRC). Even as far afield in time and space as ancient Greece, philosophers trafficked in rumors that the Nile Headwaters hosted Ptolemy’s snow-capped “Mountains of the Moon.” Equally famous today is the gigantism reached by floral species of heathers, senecios, helichrysums, and lobelias — some reaching heights of 12 meters.
Though far-flung from Eurasian loci of global power and empire, the mountains were named in ways that emblemize the crossroads of colony and metropole, from the brutal journalist-colonist Sir Henry Morton Stanley to Emin Pasha himself. More recently, the Rwenzururu separatist movement has been arguably the most persistent such conflict in post-colonial Africa. In this piece, I will illuminate the cultural significance of the mountains and glaciers within African and European history to contextualize the deep impact accompanying today’s rapid climatic changes. Admittedly, my motivation arises from close proximity to scientists presently working on Rwenzori glacial and climate history—specifically, from a recent study suggesting that the scientific community may have grossly underestimated potential temperature change over these high mountains under current global warming (Loomis et al., 2017).
The Roads to Rwenzururu
Histories in brief are precarious enough to begin on most continents, let alone our ancestral birthplace. But here, it suffices to relay the most recent peopling of the Rwenzori region, its foothills, and the Semliki Valley in the modern eastern Democratic Republic of the Congo (DRC). Beginning in the 7th century and continuing to just prior to the colonial period, waves of emigration out of the Sudan populated the Semliki Valley. Subsequent migration to the mountains seems to have coincided with environmental pressures, for example in the 1880s and 1890s when disease, drought, and state-level violence plagued the Semliki. Indeed, contrary to early colonial musings, the isolation of the modern Bakonzo people—the predominant tribe on the eastern flanks of the Rwenzori—is not ancient, but was instead prompted by an ascendant Toro state propped up by British colonial rule (located east of the mountains) and the massive loss of life due to warfare and disease in the Busongora plains and the Semliki Valley (adjacent lowlands) (Syahuka-Muhindo, 2007).
Today, the Bakonzo people primarily farm the eastern slopes. This practice signifies a split between the Bakonzo and other denizens of the Lakes region. Indeed, the economic consequences constitute one lens through which to understand the eventual unrest of the 20th century. One can trace the divergence back to the emergence between 800 and 1300 CE of specialized herding and banana cultivation in the Great Lakes region. These revolutions may have had an environmental origin, because dry periods would have forced pastoral productivity out of otherwise marginal land (Pennacini, 2007). Subsequently, new modes of production translated into farmer-herder client relationships for much of the broader Lakes region save the Bakonzo, Banande, and other nearby groups for whom the high altitude was not conducive to pastoralism.
When the British Protectorate of Uganda resurrected the Kingdom of Toro, the Bakonzo people were reluctant to enter the de facto monetary economy in a subservient position, and tensions flared. In combination with rampant disease in the early 20th century, a crippling British tax system, and exclusion from advancement within scholastic colonial education, conditions deteriorated to the point where many Bakonzo took flight. This exodus led many across the border to the Belgian Congo—a chilling punctuation mark given King Leopold II’s rightly earned infamy as purveyor of mass terror and a slave-based economy in the Congo Free State (Hochschild, 1999). When the situation in the Rwenzori became untenable, Bakonzo rebellions broke out against the Toro polity in the 1920s and again in the 1960s, culminating when the Rwenzori peoples unilaterally declared independence from Uganda in 1962 (Pennacini, 2007).
More than Mountains
For imperialists setting their eyes on them for the first time, the Rwenzori mountains slotted into an already circumscribed map of geographical, cultural, and racial mythos. In the voracious Zeitgeist of late 19th century colonial ambition, the search for the mythical “single origin” of the Nile River became a quest not only to solve a long-standing geographical mystery, but to identify the origins of Western civilization (Wittenberg, 2007). Simon Schama clarifies that rivers “took on metaphor” and coursed “as lines of power and time carrying empires from source to expansive breadth” (Schama, 1995).
When Henry Morton Stanley “discovered” the Rwenzori mountains in 1888, he took great scientific pains to show that they were indeed the primeval source of the Nile, and couched even their cultural surroundings within ancient Egypt. This would be an inexplicable falsehood without the context of European conceptions of Africa as a world apart, dark and barren and uniquely outside of civilization, and the mythological backstory in which the Rwenzori were already, in the minds of learned Europeans, connected to their own cultural development in antiquity. In other documents from the time period, “white snow” in “Darkest Africa” (the title of Stanley’s reports documenting his traverse) was irreconcilable without invoking separate geographical and cultural provenance (Stanley, 1890; Wittenberg, 2007). The first European woman to reach the alpine zone, Ruth Fisher, described Rwenzori as “the one unsullied and impregnable witness of holiness and purity to God, in a land where darkness has reigned, and the storms of passion, vice, and barbarity have laid desolate” (Fisher, 1919). To Fisher, and many in the colonial project, the mountains represented superior benevolence and nobility, concordant with Europe and European ideals (Wittenberg, 2007).
Others related the cooler temperatures and grassy fields of the alpine environment more explicitly to good health—for Europeans, that is. A prominent work of colonial fiction from 1906 by the British politician, colonialist, and author John Buchan imagines that “[mountains] will be what Simla is to India, the workshop of government…they are in another climate, and give a tired man the moral and physical tonic he needs” (Buchan, 1906). Another passage from the same work expounds that:
If only each hot country had been given a habitable mountain, they would be the only places in the world to live in. On the ordinary upland you dominate the flat country because you are higher up, but here we also look down on the plain because we are wholesome and cool and sane and they are fevered. We are a lighthouse to the whole of Equatoria, and if there were fifty other lighthouses in the Empire there would be no tropical problem. (Buchan, 1906)
Conversely, for the inhabitants of the mountains, the symbolism is equally poignant, but unsurprisingly occupies a wholly different cosmology, focused on spirituality and fertility. The word Rwenzururu itself means roughly “the place of snow,” and some Bakonzo interpret the ice as the frozen sperm of the mountain-dwelling god Kitasamba (Pennacini, 2007). Central to this belief system is the fertilization of Earth and Konzo society by the yearly snowmelt (Pennacini, 2007). As such, the icy mountains themselves are inseparable from Bakonzo belief systems, especially their embitha —"that unspoken sense of unity and uniqueness” (Stacey, 2007). For those who have worked in the mountains with the Bakonzo people, it is easy to testify to the sanctity conferred on the snowscape.
Brown University paleoclimate scientist Jim Russell explained to me that:
Each time we visit the mountains our Bakonzo guides explain to us the rules of the mountains which are imbued with a respect for the space. No pointing, no whistling, no singing. Rules are especially strict when it comes to water: no bathing, and many times when we have gone out on a lake in a boat our guides will offer food to the waters. (Russell, 2017)
Two historical European depictions of Africa showing the Rwenzori before the range was actually spotted by Europeans. On the left, from 1513, Martin Waldseemuller shows the Greek-fabled Mountains of the Moon giving rise to the more northerly African rivers as the only geographic feature in “Dark Africa” (interpretation following Wittenberg, 2007). On the right is a reproduction (of unknown year to this author) of Ptolemy’s figure included in "Geographia." In this work, much of North Africa is complete, if incorrect, including the then-hypothesized Mountains of the Moon. It is intriguing to compare these maps in their detail and degree of conjecture.
Ice in Retreat
But the natural state of the region is in dramatic flux, primarily a result of human activity. These mountains, which loom over 5,000 meters (well over 16,000 feet), host equatorial glaciers that are in rapid decline; since 1900, East African glaciers have lost over 80 percent of their surface area (Hastenrath and Kruss, 1992; Thompson, 2002). The full social and natural ramifications of this loss are still not clear, but they could well be disastrous. Firstly, modern foreign tourism revolves around ice climbing and observing the glaciers. In 1991, the Rwenzori Mountains National Park was established, and the Bakonzo were banned from hunting in the mountains by the Ugandan state. In “exchange” for these restrictions, guides and porters for tourists entering the mountains were to be hired solely from the local communities on the mountain slopes, chief among them the Bakonzo (Russell, 2017). The disappearance of the glaciers may impact tourism in ways that call this agreement into question.
Secondly, glacier retreat is known to strongly impact water resources in some mountain glacier regions, such as the Andes (Baraer et al., 2012; Mark, 2008; Mark et al., 2010). Glaciers act like a dam, accumulating snow in the wet season and releasing it as meltwater in the dry season. This buffers against strongly seasonal flows. Glacier retreat in the Andes is associated with an increase in the seasonality of river flows and a slight increase in the mean flow rate (due to the melting of “ancient” ice). Recent research suggests that snowmelt may not impact local river flows that provide water to indigenous communities as much as it has in the Andean highlands (Taylor et al., 2009); nevertheless, changes to the alpine lakes and surrounding ecosystems are likely to occur given the combination of melting ice and warming (Panizzo et al., 2008).
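The dam-like buffering described above can be made concrete with a toy calculation. This is a minimal sketch, not a hydrological model from the cited studies: the precipitation figures, the 50 mm wet-season threshold, and the 50% storage fraction are all invented for illustration.

```python
# Toy illustration of how a glacier buffers strongly seasonal river flow:
# part of wet-season precipitation is stored as snow/ice, then released
# as meltwater during the dry months. All numbers are hypothetical.

def monthly_flow(precip, glaciated, melt_fraction=0.5, wet_threshold=50):
    """Return monthly downstream flow (mm) given monthly precipitation."""
    if not glaciated:
        return list(precip)  # no storage: flow tracks rainfall directly
    # Wet months bank a fraction of their precipitation as snow/ice
    stored = sum(p * melt_fraction for p in precip if p > wet_threshold)
    n_dry = sum(1 for p in precip if p <= wet_threshold)
    flow = []
    for p in precip:
        if p > wet_threshold:
            flow.append(p * (1 - melt_fraction))  # some rain locked up as snow
        else:
            flow.append(p + stored / n_dry)       # meltwater tops up dry months
    return flow

# Hypothetical strongly seasonal regime: six wet months, six dry months (mm)
precip = [100] * 6 + [10] * 6

no_glacier = monthly_flow(precip, glaciated=False)
with_glacier = monthly_flow(precip, glaciated=True)

# Annual totals match, but the glacier damps the wet/dry contrast
print(sum(no_glacier), sum(with_glacier))        # same total water
print(max(no_glacier) - min(no_glacier))         # 90 mm swing without ice
print(max(with_glacier) - min(with_glacier))     # 10 mm swing with ice
```

The point of the sketch is only the qualitative behavior: total annual water is unchanged, while the month-to-month range collapses, which is the "dam" effect the Andean studies describe.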
Finally, local environmental risks, such as slash-and-burn techniques, exacerbate threats to an ecosystem already fragile to climate change, and pose a barrier to ecotourism endeavors. Recent disasters drive this point home, such as the fire outbreak on Mt. Rwenzori in 2012 and the subsequent Kilembe Flood of 2013 (IFRC, 2014; Misairi and Ninsima, 2012; UNESCO, 2012). In this case, the wildfires in the alpine zone weakened the water holding capacity of the upper mountain valleys, which then flooded into the lowlands during the next rainy season. In sum, this environmental “moment” and the future of the Rwenzori motivates scientific efforts to probe geological, glaciological, and historical records for patterns that have governed the history of ice and environment in the Rwenzori.
Learning from Past Climate Change
The last instance in Earth history of appreciable temperature change alongside rising CO2 was the global thaw following Earth’s most recent glacial period (~20,000 years ago, continuing into the earliest Holocene ~11,000 years ago) (Clark et al., 2012), making that time slice a potentially helpful scientific analog. The Rwenzori Mountains are no exception. Moreover, scientists are interested specifically in how the past local climates and glacier extent of tropical high-elevation belts differed from the present under different background climatic conditions, and what lessons we might glean from those dynamics under the future regime of persistent global warming. Accordingly, scientific teams are actively working to understand both Rwenzori environmental change across the “deglaciation” (the warming following the glacial state) and the last glacial climate itself, probing differences that may inform understanding of the future (albeit under a different sign of temperature change).
To study such past climates, scientists build “proxy” climate archives to reconstruct not only the extent of ice, but also the factors behind glacial expansion, namely temperature and precipitation. Similarly, glacial retreat can result from warming, changing precipitation, or some combination thereof. Studies of Rwenzori glacial moraines—accumulations of glacial debris—as well as lake sediment records suggest that ice expanded during cooler and drier conditions at the same time as Earth’s last glacial maximum (Kelly et al., 2014). A new high-profile publication by Loomis et al. (2017), based on geochemical reconstructions of past lake temperatures at multiple elevations, has revealed that this cool time was enhanced in the Rwenzori via amplified cooling with elevation during the global glacial maximum.
Uncovering recent changes in glacial extent has been perhaps even more fraught, since investigation requires deconstructing the colonial archive and the assumptions embedded in it—which, even when approximately accurate, lack a long-term perspective. For example, the present episode of glacial retreat had been thought to commence in ~1880 due to the waning wet period in the latter half of the 19th century in the Great Lakes region, as well as the legacy of colonial observation, which intensified in the late 1880s (Hastenrath and Kruss, 1992; Mölg et al., 2003). But in 2008, a research group found that siliciclastic material in lakebed sediments across the Rwenzori region correlated with the glaciation of those lakes (Russell et al., 2009). Because siliciclastic content was more or less stable from 1200 until 1870, the scientists concluded that for multiple centuries, fluctuations in ice have been relatively small in comparison to those experienced today (Russell et al., 2009). In this way, the European records of shrinking ice are not at all indicative of “normal” conditions over the last 800 years. Finally, the timing of initial retreat is crucial; 1870 falls in a regionally wet time in the Rwenzori, suggesting that assumptions about the timing of glacial retreat in the region made on the basis of late nineteenth-century observations are inaccurate.
Replotted %siliciclastic data from Lakes Upper Kitandara and Lac du Speke, with permission. As a proxy for the extent to which a lake was glaciated, these records depict relative glacial stability in the centuries leading up to the present decline beginning ~1870 (fluctuations in the early part of the last millennium should be interpreted with caution; see original scientific work). (Russell et al., 2009)
Taken together, some consensus is emerging that Rwenzori glaciers are melting today mostly as a result of rising air temperature. A recent study has revealed that temperatures in the tropics increased more dramatically at high elevations compared to low elevations during the last glacial maximum (Loomis et al., 2017). If we can expect the same today, in our warming world, it would spell amplified warming in the high-altitude Rwenzori mountains—warming that the glaciers likely cannot withstand. Indeed, a prominent study of ice core records from Kilimanjaro gives tropical African glaciers only another ten years (Thompson, 2002).
Conclusion: Snow-Capped No More
An essay on the Rwenzori in the Western imagination concludes:
Ironically, the hopes of local Rwenzori communities are not only linked to the return of peace [following conflict in the Great Lakes region], but also a continuation of colonial myths about the Mountains of the Moon. In order to draw tourists and attract development, the Rwenzori will in all likelihood continue to be inscribed with a Western history that obscures local cultural knowledge, traditions, and histories. (Wittenberg, 2007)
Climate change may call this obfuscating history and the tourism it promotes into question. As time passes, the disappearance of the glaciers will render Stanley’s “lofty mountain king, clad in its pure white raiment of snow” a more distant memory (Stanley, 1890). Just what this portends for Bakonzo cosmology, mountaineering, and tourism is today unclear. Yet it is no small irony that modern Rwenzori could face grave climate challenges from the conquest, dominion, and industrialization of the world by the Global North—the same processes that gave rise to a dependence on foreign capital in the first place in this unique mountain kingdom.
In article after article, academics, policy analysts, and journalists have told a similar story: climate change, by melting Arctic ice, is unlocking resources that could soon trigger war in the far north. They argue that the race to extract the vast reservoirs of oil and natural gas that lie under the vanishing ice – up to a quarter of the world’s undiscovered fossil fuel reserves, by some estimates – will likely provoke hostilities between Russia, the United States, and other nations with claims to the bonanza. The overall failure of early drilling efforts in the Arctic, it seems, is of little consequence.
These claims add a new twist to a vast and growing body of scholarship that links climate change to conflict. Academics working in this area often begin their work by showing that past climate changes reduced – rather than increased – the regional availability of some crucial resource, such as water, or grain, or fish spawning grounds. They then use diverse methods to trace the destabilizing social and political consequences of these resource shortages. Environmental historians, for example, have argued that falling temperatures and changing precipitation patterns in the seventeenth century led to poor grain harvests and famines that provoked rebellions in diverse societies the world over. More controversially, scholars in many disciplines have linked human-caused global warming to droughts that encouraged migration and ultimately conflict in twentieth-century sub-Saharan Africa. Far less attention has been directed at the ways in which more abundant resources might incite violence either within or between states.
In fact, those who make claims about the inevitably more violent nature of the future Arctic have rarely thought to consider the history of climate change and conflict in the far north. Yet violence in the Arctic has long coincided with volcanic eruptions and fluctuations in solar activity that altered regional temperatures and in turn the availability of crucial resources. In the early seventeenth century, for example, the Arctic cooled sharply and then warmed slightly just as Europeans discovered, hunted, and fought over bowhead whales off Spitsbergen, the largest island of the Svalbard archipelago. Oil, bones, and baleen from bowheads became crucial resources for the economies of England and the Dutch Republic.
Diverse manifestations of climate change in the Arctic and Europe influenced how easy bowhead whales were to hunt, the profits that could be fetched by their oil, the proximity of whalers to one another, and the ability of whalers to reach the far north. Skirmishes within and between whaling companies operating from rival European nations reveal that climate change can affect both the causes and the conduct of conflict in diverse ways, even in environments it transforms on a vast scale. There is nothing inevitable or simple about the ways in which climate change influences human decisions and actions.
This history would be hard to investigate without new climate reconstructions compiled by scholars in many different disciplines, using many different sources. In 2014, researchers drew from natural and textual sources to create a sweeping new reconstruction of average Arctic air surface temperatures over the past 2,000 years. It confirms that the Arctic was overall very cold in the seventeenth century, but also that it warmed slightly towards the middle and end of the century. Temperatures in the Arctic therefore roughly mirrored those elsewhere in the Northern Hemisphere during the chilliest century of the “Little Ice Age,” a cooler climatic regime that endured for roughly six centuries. The extent and distribution of sea ice in the Arctic – the most important environmental condition that whalers coped with – would have responded to even subtle changes in average annual temperatures.
Yet these very big trends do not tell us exactly how climate change transformed environments around Svalbard. Local temperature trends do not always precisely mirror regional or global developments, and anyway the distribution and extent of Arctic sea ice registers more than just the warmth or chilliness of the lower atmosphere. Ice core and model simulation data both suggest that air surface temperatures around Svalbard were quite cool in the early seventeenth century and somewhat warmer in the middle of the century, at least in summer. Lakebed sediments, by contrast, suggest that glaciers across Svalbard actually retreated beginning in around 1600 owing to changes in precipitation, not temperature, which may have reduced the local frequency of storms that can break up sea ice. Moreover, sea surface temperatures – which also influence sea ice – were quite warm off the west coast of Spitsbergen for much of the seventeenth century, although they were very cold off the northern coast.
Overall, it seems safe to conclude that, in the summer, temperatures around Svalbard roughly mirrored those of the broader Arctic in the seventeenth century. Warmer currents may have brought more nutrients to the region and probably reduced the extent of local sea ice, although a reduction in storm frequency would have preserved the ice that was there. In any case, most Arctic sea ice melts in the summer before reaching its minimum annual extent in the fall, which means that summer weather and currents had the greatest impact on the extent of ice in the Arctic north of Europe. Because sea ice retreated from Svalbard in the summer, it was also the crucial season for whaling.
If the local consequences of global climate changes can be counterintuitive – that warming current off Spitsbergen, for example – so too can human responses. One might assume that climatic cooling would have dissuaded explorers, fishers, and whalers from entering the Arctic. Instead, European sailors found and then started exploiting the environments on and around Svalbard in the late sixteenth and early seventeenth centuries, just as volcanic eruptions led to arguably the coldest point of the Little Ice Age in the Northern Hemisphere. In previous work, I have shown that climate changes in this period interacted with local environments to leave just enough sea ice in the Arctic north of Europe to redirect expeditions in search of an elusive “Northern Passage” to Asia. Dutch and English sailors struggling to find a way through the ice ended up discovering Spitsbergen and the many bowhead whales off its western coast. Bowheads are relatively docile, float on the surface when killed, and have very thick blubber that can be turned into oil. Beginning in 1611, they started attracting Dutch, English, and Basque whalers.
Other scholars have argued that cooling in the early seventeenth century led bowhead whales to congregate along more extensive sea ice near Spitsbergen, which made them easier for whalers to hunt. By contrast, whales dispersed as sea ice retreated in the warmer middle of the seventeenth century, which made them harder to hunt. There does seem to be a statistically significant correlation between ice core reconstructions and model simulations of summer temperatures around Spitsbergen on the one hand, and the annual whale catch on the other. Iñupiat whalers consulted by our own Bathsheba Demuth, however, report that bowheads in the Bering Sea are not social enough to gather in huge groups. Perhaps bowhead culture was different in the Atlantic corner of the Arctic when whale populations were much higher than they are today.
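The kind of correlation test gestured at above can be sketched in a few lines. To be clear, the series below are entirely synthetic stand-ins invented for illustration; the actual analysis drew on ice core and model temperature reconstructions set against archival whale-catch records.

```python
# Sketch of a temperature/whale-catch correlation test on invented data.
from math import sqrt

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical annual summer-temperature anomalies (°C) and whale catches,
# built to embody the claimed mechanism: cooler summers -> more sea ice ->
# whales congregate along the ice edge -> larger catches.
temps   = [-0.6, -0.4, -0.5, -0.1, 0.0, 0.2, 0.3, 0.1, 0.4, 0.5]
catches = [ 210,  190,  205,  150, 140, 120, 110, 145,  90,  80]

r = pearson_r(temps, catches)
print(round(r, 2))  # strongly negative: warm summers coincide with small catches
```

A correlation like this is only suggestive, of course; the historical argument still depends on the qualitative archival evidence the essay goes on to describe.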
The apparent correlation between surface air temperatures and the whale catch around Spitsbergen provides our first point of entry into relationships between climate change and conflict in the far north. From the first years of whaling around Spitsbergen, two companies – the Dutch Northern Company, and the English Muscovy Company – emerged as the leading players in the Arctic whaling industry. The governments of England and the Dutch Republic had granted these companies monopolies on whaling operations, but they were resented by merchants and mariners who preferred to operate independently. After around 1625, as bowhead whales dispersed amid warming temperatures, competition between Dutch whalers devolved into piracy. Many conflicts involved whalers who sailed either for the Northern Company or for themselves, although even some Company whalers hid the best hunting grounds from one another. In these circumstances, the governing body of the Dutch Republic rescinded the monopoly of the Northern Company in 1642.
From the beginning, competition between English whalers assumed an even more brutal character. The Muscovy Company took an uncompromising stance towards English interlopers, who responded in turn. In 1626, for example, whalers aboard independently-owned vessels destroyed the Company’s station at Horn Sound, Spitsbergen, after they had been harassed by Company ships. Not surprisingly, petitions submitted to the English Standing Council for Trade in 1652 reveal that small groups of English merchants also sought to overturn the monopoly of the Muscovy Company. Individual merchants insisted that the Company could not adequately “fish” the territories over which it held a monopoly. The Company responded that whalers in the employ of those merchants had interfered with the activities of its sailors and stolen whales they had killed.
Warming temperatures that reduced the extent of pack ice and encouraged whales to disperse may well have encouraged competition and conflict between whalers belonging to the same nationality. Bizarrely, the whaling industry also responded to fluctuations in the supply of rape, linseed, and hemp oils, which were less smelly substitutes for whale oil in fueling lamps or manufacturing soap, leather, or wax. Temperature and precipitation extremes that reduced the supply of vegetable oils naturally also increased the price of whale oils in the Dutch and English economies, and thereby the profitability of whaling. In the context of the Little Ice Age, the 1630s in particular were relatively warm across the Northern Hemisphere. The trusty Allen-Unger commodity database tells us that the price of linseed oil in Augsburg, for example, dropped sharply as average annual temperatures increased. Even the price of lamp oil – which would have also registered the price of whale oil – fell modestly in the same period. Could whalers in the 1630s and 1640s have vied with monopolistic companies just as climate change both reduced the supply of their resource and increased its profitability?
We can sketch these relationships by mixing and matching different statistics from natural and textual archives. Detailed qualitative accounts written by whalers, however, reveal that climate influenced conflict in more complicated ways during the first decade of the Svalbard whaling industry. In that decade, whalers from several European nations – most importantly England and the Dutch Republic – employed experienced Basque whalers to kill bowhead whales, strip their blubber, and boil the blubber on the coast. Whalers would deploy boats from a mothership to kill small groups of whales. They would then establish temporary settlements on the coast to turn the blubber into oil that could be loaded into barrels and returned to the ship.
These techniques forced whalers from different nations to rove along the coast of Spitsbergen, which made it likely that they would encounter one another. Initially, the Muscovy Company falsely claimed that English explorers had found Spitsbergen, which meant that it alone had the right to hunt for whales off the island. The Dutch – who had actually discovered the island – insisted that whalers from all European nations should be allowed to fish off its coast. In 1613, a Dutch expedition under Willem van Muyden, the legendary “First Whaleman” of the Republic, reached Spitsbergen in late May and found the coast blocked by ice. After only two weeks, the retreating ice let his whalers enter a bay roughly halfway down the island, but a better-armed English fleet quickly spotted them. In subsequent weeks, the English harassed the Dutch whalers and stole much of their equipment and whale commodities. Yet the Dutch returned with naval escorts in 1614. After the English seized a Dutch ship in 1617, the Dutch arrived with overwhelming force in 1618 and killed several English whalers.
The worst skirmishes between Dutch and English whalers raged in years that were relatively warm across the Arctic and probably around Svalbard, despite the generally cooler climate of the early seventeenth century. In cold years, sea ice could have kept whalers working for different companies from lingering on the coast, where tensions simmered and eventually erupted into bloodshed. In any case, the Muscovy Company and the Northern Company eventually agreed to occupy different parts of Spitsbergen. The Dutch would claim the northwestern tip, where they established the major, fortified settlement of Smeerenburg: “blubber town.” The English, meanwhile, took the rest. The Dutch eventually benefited from being closer to the edge of the summer pack ice, where there were more whales to hunt.
Hostilities between the English and the Dutch in the volatile first decades of the Svalbard whaling industry convinced the Northern Company to keep a skeleton crew at Smeerenburg and nearby Jan Mayen island during the winter. If they could survive, they would keep Company infrastructure safe from springtime raids and provide valuable information about the region’s winter weather. In 1633/34, two groups of Dutch whalers overwintered at Smeerenburg and Jan Mayen. Regional summer temperatures may have been warming at the time, but winter temperatures across the Arctic were cooling, and 1633/34 was particularly cold. The Smeerenburg group survived the frigid temperatures and killed enough caribou and Arctic foxes to hold off scurvy. The Jan Mayen whalers endured until the spring, but they could not catch enough game to survive the ravages of scurvy. In 1634/35, the Northern Company tried again. This time, both groups died from scurvy, and the Smeerenburg whalers did not even make it to winter. Violent competition between whaling companies – plausibly influenced by warming summers – exposed whalers to a quirk in the climatic trends of the Little Ice Age in the Arctic: the big difference between summer and winter temperatures, relative to long-term averages.
Climate change also influenced hostilities between whalers by altering how easily they could reach the “battlefield” around Spitsbergen. In 1615, a year of typical chilliness during the Little Ice Age, the author of a Dutch whaling logbook reported that sea ice on June 7th blocked the crew’s progress towards Svalbard. The crew spotted a bowhead whale three days later, but ice kept them from pursuing. That evening, a storm rose just as they found themselves surrounded by sea ice. They tried to anchor themselves to an iceberg, but it shattered and would have destroyed their ship “had God not saved us.” The few surviving logbooks written by Dutch whalers also record trouble with ice in the warmer 1630s, yet it surely would have been harder to reach Svalbard and compete with English whalers in the first decade of the Arctic whaling industry.
Beginning in 1652, the Dutch Republic and England also embarked on hostilities in the North Sea region that would endure, with interruptions, until the Dutch invasion that launched the Glorious Revolution of 1688. During the three Anglo-Dutch Wars that raged in these decades, English and Dutch ordinances kept whalers from sailing to the Arctic or constructing new ships and equipment for the whaling industry. Sailors who might have served aboard whaling ships were urgently needed to crew the warships of the English and Dutch fleets. Many whalers also served as privateers, raiding merchant ships and convoys and then surrendering a share of the profits to their governments. Any whalers who set sail for the Arctic risked losing everything if discovered.
As I have written elsewhere, a cooling climate in the second half of the seventeenth century profoundly influenced naval hostilities between the English and Dutch fleets. By altering the frequency of easterly and westerly winds in the North Sea, it helped the English claim victory in the First Anglo-Dutch War but aided the Dutch in the Second and Third Anglo-Dutch Wars, as well as the Glorious Revolution. It probably shortened the First Anglo-Dutch War (1652-54) but lengthened the third war (1672-74). That, in turn, would mean that the manifestations of global climate change in the North Sea affected the opportunities for whalers to engage in hostilities in the Arctic.
After 1650, the character of hostilities between Arctic whalers changed dramatically. Cooling summer temperatures brought thick ice into the harbors of Spitsbergen, while the depletion of the bowhead whale population may have worsened the prospects of whaling near land. Whalers had to hunt further and further from the shore, and started processing their whales at sea. They abandoned settlements along the coast of Spitsbergen, which soon fell into ruin. Violence between whalers now took place exclusively at sea. The evidence is spotty, but privateers seem to have hunted whalers in the final decades of the seventeenth century. In 1692, Henry Greenhill, commissioner of the English navy at Plymouth, reported that two “Greenland Prizes” – whaling vessels captured off Spitsbergen – had been brought into harbor. Since England had allied with the Dutch Republic against France, these ships were probably French in origin.
The history of climate change, whaling, and violence in and around Svalbard during the seventeenth century is above all complicated, filled with surprising twists and turns. Climate change may have occasionally provoked violence, but it probably did so by reducing, rather than increasing, the accessibility of bowhead whales to whalers. More importantly and more certainly, it altered the character of confrontations between whalers in the far north. Moreover, its manifestations thousands of kilometers from the Arctic ended up having important consequences for hostilities in and around Svalbard.
These intricate relationships in the distant past should give us pause as we contemplate the warmer future in the Arctic. Global warming may indeed set the stage for war in the far north, but we have no way of knowing for sure. It is equally likely that climate change will provoke human responses that are hard to guess at present. In this case, we cannot use the past to predict the future, but we can draw on it to ask more insightful questions in the present.
Selected Works Cited:
Degroot, Dagomar. “Exploring the North in a Changing Climate: The Little Ice Age and the Journals of Henry Hudson, 1607-1611.” Journal of Northern Studies 9:1 (2015): 69-91.
Degroot, Dagomar. “Testing the Limits of Climate History: The Quest for a Northeast Passage During the Little Ice Age, 1594-1597.” Journal of Interdisciplinary History XLV:4 (Spring 2015): 459-484.
Degroot, Dagomar. “‘Never such weather known in these seas:’ Climatic Fluctuations and the Anglo-Dutch Wars of the Seventeenth Century, 1652–1674.” Environment and History 20.2 (May 2014): 239-273.
Hacquebord, Louwrens. De Noordse Compagnie (1614-1642): Opkomst, Bloei en Ondergang. Zutphen: Walburg Pers, 2014.
Hacquebord, Louwrens. “The hunting of the Greenland right whale in Svalbard, its interaction with climate and its impact on the marine ecosystem.” Polar Research 18:2 (1999): 375-382.
Hacquebord, Louwrens and Jurjen R. Leinenga. “The ecology of the Greenland whale in relation to whaling and climate change in the 17th and 18th centuries.” Tijdschrift voor Geschiedenis 107 (1994): 415–438.
Hacquebord, Louwrens, Frits Steenhuisen and Huib Waterbolk. “English and Dutch Whaling Trade and Whaling Stations in Spitsbergen (Svalbard) before 1660.” International Journal of Maritime History 15:2 (2003): 117-134.
Laist, David W. North Atlantic Right Whales: From Hunted Leviathan to Conservation Icon. Washington, DC: Johns Hopkins University Press, 2017.
McKay, Nicholas P. and Darrell S. Kaufman. "An extended Arctic proxy temperature database for the past 2,000 years." Scientific Data (2014). doi: 10.1038/sdata.2014.26.
Dr. Josh MacFadyen, Arizona State University
When Monsanto spent $1 billion in 2013 to purchase Climate Corporation, its climate data, and its algorithms for using machine learning to predict weather, everyone from farmers and insurance companies to technologists and The New Yorker concluded that agri-business believed the climate science consensus: climate change is real, and it introduces real risks to business. One century earlier, another major Western agri-business, Archer-Daniels-Midland (ADM), produced its own cutting-edge weather and crop forecasts, mainly in an effort to reduce the risk of what it called “weather markets.” By investing in environmental knowledge production, these companies revealed how they understood both local environments and international climate sciences.
Historians know a great deal about early amateur and state-organized meteorology and climatology, and a range of new works are emerging on the science of forecasting. However, the climate looks different when we examine it from the perspective of the private sector. Business historians do so using company records and within the context of the firm. My study of the economy of knowledge in agri-business focuses on the Archer-Daniels-Midland Linseed Company (ADM), a notoriously secretive company that started in Minneapolis and has since become one of the big five multinational firms in the agrifood sector. Its primary interest in the early twentieth century was not the highly-processed corn and soybean commodities it is known for today. Rather, ADM specialized in another oilseed altogether – flax.
Flax had changed from a European crop grown for linen to a predominantly American crop grown in the temperate grasslands of the northern Great Plains and the Argentine Pampas. It was produced mainly for its seed (linseed), which was pressed to make linseed oil, the principal ingredient in paint. Flax was very popular with farmers in the northern Plains, because it matured quickly and could be planted on newly broken fields on the frontier. ADM had relocated to Minneapolis precisely because of this growing western supply chain. When they got there, they realized they had a lot to learn about anticipating the weather and forecasting production in this harsh new environment.
The history of ADM’s response to price volatility, supply chain problems, and trade policies tells us about the way businesses understand climatology and develop environmental knowledge. Like the Climate Corporation, ADM was predominantly concerned with risk mitigation. And where there was risk there was profit. The linseed oil business included some of the leading names in the chemical sector, including Lyman Brothers in Montreal, the Rockefellers’ American Linseed Oil, Sherwin-Williams, and Spencer Kellogg and Sons. These companies were major players, and they purchased flax seed for their oil and paint operations from whatever part of the world would supply it. In the late nineteenth century, that was predominantly the northern Plains, but in the early twentieth century they found emerging markets in Argentina, Uruguay, and some older flax producing regions in Eastern Europe and India.
As a relative upstart, ADM found a niche in the globalizing industry by providing crop and other environmental information to the trade. The big players bought flax seed and flax seed futures in a massive grassland frontier (the Northern Great Plains, the Canadian Prairies, and the Argentine Pampas) with limited knowledge of those regions’ agroecosystems and even less about their climates. This article argues that crop knowledge was extensive and growing in the late nineteenth century, but climate knowledge was limited and retreating, because of underfunding and spurious theories about solar radiation. Meteorological forecasts were only good for 48 hours, and although Farmer’s Almanacs were very popular, their forecasting methods were secretive and studies have shown that they were really no more accurate than a coin toss.
In my study, I make two other conclusions based on a content analysis of the firm’s semi-public market reports circulated between 1911 and 1925, with a five-year gap starting in 1918. The first is that ADM created an early version of the Climate Corporation, focusing its attention on the growing conditions and probable outcomes of the flax crop. The records show that the company virtually ignored the European and Indian crops, knowing that those would likely enter UK markets before reaching their North American clients. Most of their weekly synopses were about crop conditions in the northern Plains, drawing from a network of crop agents, elevator companies, and other intermediaries such as state flax scientists. However, their interest in Canadian conditions decreased, and they increasingly focused on Argentina over this period. By the 1920s, ADM was reporting on Argentina almost as frequently as it mentioned the US.
The second finding was that the futures markets were more closely connected to natural systems than some historians of these “incorporeal” commodities have argued. Information systems like ADM’s circulars developed almost real-time networks of crop and climate knowledge, but after a certain point these agri-businesses conceded that futures trading was “purely a weather proposition.” Commodity futures became highly risky as overlapping harvest seasons approached, linseed oil producers depleted their reservoirs, and buyers attempted to determine which crop and weather forecasts were most tenable. This is precisely where we would expect to hear corporations arguing over comparative meteorological systems and the reliability of almanacs, but these topics were almost never mentioned.
ADM realized that in the period between sowing and harvesting, the price of flax was “a weather market.” Their records show that businesses in the grain and oilseed sector created extensive knowledge networks to gather crop and some climate information in almost real time. Unlike the meteorological offices or the almanacs, ADM aimed for the respect of a much smaller business circle, and it therefore maximized data and minimized predictions. The company mentioned US weather in about half of its circulars (less often for other countries), and it ventured weather predictions in very few of those cases. It was more bullish with crop forecasts, but the weather it did report was almost always current conditions, mainly in regard to the Northern Great Plains crop during the critical maturing and harvest months (June–September).
As my longer article on ADM’s response to uncertain climates outlines, the company was deeply invested in place, and its business decisions were shaped in part by its longer commitment to the Northern Great Plains. Its larger role in the knowledge economy was influenced by its position on crop and climate science; the company distrusted government crop forecasts and disregarded meteorological forecasts. ADM’s respectability depended on accuracy, but as the almanacs (and recent politicians) show, you don’t need to be accurate to be popular.
When agri-business ignored early twentieth-century climatologists and created its own knowledge products, it signaled a distrust in the science that proved to be well founded. In recent decades, by contrast, the private sector has sent a far more supportive message. Business signals its knowledge about the environment at many scales, from local family farms to the United Nations Framework Convention on Climate Change. When Reagan-era Secretary of State George P. Shultz and Climate Leadership Council president Ted Halstead recently advanced what they call The Business Case for the Paris Climate Accord, their message was simple. Since top US businesses support the Paris climate agreement, Donald Trump should embrace the broad consensus of climate scientists and remain at the table in Paris. Granted, their argument ignored the ethical and other humanitarian reasons for stopping runaway climate change and mitigating the harmful effects it will have on the biosphere, but when even corporations like Monsanto are spending billions to mitigate the risks associated with climate change, it’s time politicians listened to the deafening message coming from all sectors.
Joshua MacFadyen, “Long-range forecasts: Linseed oil and the hemispheric movement of market and climate data, 1890–1939,” Business History 59:7 (October 2017). Published online, April 2017. DOI: http://dx.doi.org/10.1080/00076791.2017.1304915
Dr. Kent Linthicum, Arizona State University
The recent bicentenary of the Year without a Summer (1816) has brought that unusual intersection of geological forces, changing climate, and human history into focus again. The radical cooling brought on by Tambora’s eruption seems especially significant as modern societies face their own dramatic climate change, albeit in the form of radical warming brought on by industrialization.
Tambora’s eruption in 1815 is the most recent seven on the Volcanic Explosivity Index (VEI), a scale that rates eruptions from zero to eight. VEI sevens eject roughly one hundred cubic kilometers of material and occur infrequently: the next most recent seven before Tambora erupted in 1257. The large amount of material ejected by Tambora’s eruption cooled Europe by 1-2 degrees Celsius on average. The cooling made the subsequent summer of 1816 so cold that it was hardly a summer at all. In an era of increasingly warm summers, a cooler one might sound ideal, but chilly weather led to food shortages and starvation throughout the northern hemisphere.
Between April and May 1816, "Bread or Blood" riots erupted across East Anglia as the price of bread surpassed the wages of agricultural and industrial laborers. While food riots had a long history in Britain, industrialization, enclosure, and globalization increasingly safeguarded the nation's food supply by the early nineteenth century.
The Bread or Blood riots reveal that climatic shocks could still provoke famine and rioting in the nineteenth century, even in the country that should have been least vulnerable to them. They also show that contemporary media depicted the rioters with disdain, in ways that probably worsened official responses to them.
At the close of 1815, the United Kingdom had ended its wars with France, yet it embarked on a long struggle with disastrous weather. After an “extremely changeable” January, February was “unseasonably warm and moist,” lifting hopes that the season's crops might recover. Yet The Observer reported that both industrial and agricultural laborers were in “extreme distress” already.
By early May, a “Monthly Agricultural Report” in The Observer explained that conditions had not improved because “sun and warm weather are the great wants.” Prices were on the rise because of increased demand, speculation, and poor harvests throughout Europe. East Anglia experienced a roughly 33% increase in the price of wheat between March and May. Laborers were incapable of affording the prices of food and became desperate. They needed to eat but had no money, so protest became their only option.
In the frigid spring of 1816, riots broke out around East Anglia. One of the first instances was on April 17th when a crowd assembled in Gedding and smashed some farming equipment. After that Wattisham, Hitcham, and Rattlesden experienced disturbances on April 24th; Needham Market and Swaffham Bulbeck on May 7th; Bury St. Edmunds on May 14th; Brandon on May 16-18th; Norwich on May 16-20th; Hockwold on May 17th; Feltwell on May 18th; Hockham on May 19th; Downham Market on May 20-21st; and finally Littleport and Ely on May 21-24th. On May 23rd soldiers and local militia arrived in Ely, and between then and the 24th, they forcefully suppressed the rioting. Despite the military presence, some rioting continued in East Anglia, but the Littleport and Ely riots were successfully subdued.
While the protestors had many reasons for agitating, their core motivation was survival. They demanded either food, money, a reduction in food prices, or all of the above. In Brandon, the protestors called for “Cheap Bread, a Cheap Loaf and Provisions Cheaper.” A woman at the protest reportedly demanded “Bread or Blood in Brandon this day.” One man admitted that the protestors “did not mean any injury but he could not live with his large family as things were, and they must have flour cheaper.” As many of the protestors were agricultural laborers, they broke agricultural machinery, presumably with the goal of taking back those jobs that the machinery would have eliminated.
The protesters felt they had no choice: they would have food or violence, because either way their deaths were imminent. William Dawson of Outwell, when asked why he was agitating, is reported to have said, “Here I am […] between Earth and Sky—so help me God. I would sooner loose [sic] my life than go home as I am. Bread I want and Bread I will have.” For the protesters, causing a disturbance was the only way to ameliorate their suffering. Yet not everyone perceived the disturbances as the desperate attempts of the poor to find respite from coming starvation. Some saw the riots as evidence of the moral failings of the lower classes.
“Economical humbug of 1816 or, saveing at the spiggot & letting out at the bunghole" (April 1816) by George Cruikshank. Here Cruikshank criticizes the government for what he perceives as an imbalance in spending. The Regent, Princess Charlotte, Lord Castlereagh and others are stealing public money for their own wants and desires, with very little money going towards “Public Service.”
The Times reported on the disturbances on May 21st, noting that the sheriff of Suffolk had arrived in London to request government aid to “restore tranquility.” The first disturbances, according to The Times, had been incited by “malicious [...] agents” who were likely “agricultural labourers.” While the paper acknowledged that the protesters demanded “a reduction in the price of bread and meat,” it still suggested that their protests had been illegitimate.
When the protests broke out again, The Times depicted the protesters as criminals and revolutionaries. They had apparently attacked the “houses of those persons who were obnoxious to them.” Protesters in one group carried a flag inscribed with “Bread or Blood” and spears. They “threatened to march to London.”
The Times reported on the 25th "that the disturbances in Norfolk and Suffolk are by no means at an end.” The paper detailed the movement of troops, and related a short narrative about a few magistrates who realized that the laborers’ wages were too low and raised them. This caused The Times to ardently hope that the changes made by these magistrates in Downham were “proof of considerate attention to the complaints of the lower classes [and] will excite a correspondent gratitude in the minds of the latter, and induce them to return to habits of peaceful industry and order.” The suggestion by the paper was that the onus was on the laborers to stop protesting because a few officials had responded to their concerns. In other words, the laborers should just wait, because the government would come to their aid.
A long article on May 27th dove into the economics of the issue. The Times weighed whether the government should step in to support local agriculture when manufacturers in the country were not interested in the product. The paper concluded that government should not intervene, and suggested that the protestors were merely using the current high prices as a “pretense” for violence. The paper brushed off the concerns of the protestors in East Anglia, again suggesting that they were rioting for malicious reasons rather than out of desperation. The final report, on the 30th, stated that the disturbances had ceased, thanks to the efforts of soldiers and the local militia.
The Times placed the blame for “much of the disorderly conduct” on the poor laws, a system of welfare for impoverished people in the United Kingdom. The paper suggested that the laws had led the poor to expect handouts, and that when they did not get what they wanted they became unruly. The rioters were brought to trial between June 17th and 22nd. In the end, five people were executed, five were exiled to Australia for life, four were exiled for shorter sentences, and ten were imprisoned for twelve months. Food prices remained high in England until 1820.
“The Elgin Marbles! or John Bull buying stones at the time his numerous family want bread!!” (June 1816) by George Cruikshank. Cruikshank criticizes the government again for spending money contrary to the public good. In this case purchasing the controversial Elgin Marbles from Lord Elgin. Screaming children in the image implore John Bull (a national personification of Great Britain like Johnny Canuck or Uncle Sam) saying "Don't buy them Daddy! we don't want Stones. Give us Bread! Give us Bread! Give us Bread!".
Humanity has long endured changes in Earth's climate. Today, many people in the developed world can, for the moment, insulate themselves from the worst consequences of a changing climate. Yet millions in the developing world especially do not have that luxury. The media can either encourage or discourage action to address their suffering.
In 1816, The Times’ reporting of the Bread or Blood riots reinforced the idea that the protesters were criminals and malcontents, and that their demands were inappropriate or untimely. That reporting would only bolster the biases of those in control. So despite a compromise written up by the Ely magistrates on May 23rd to increase wages depending on the price of flour and the size of the laborer’s family, on May 25th Lord Sidmouth placed a one-hundred-pound bounty on those “unlawfully assembled” in the region.
The Bread or Blood riots are a reminder that climate insecurity has been the rule and not the exception in human history. Newspaper accounts of the riots reveal that the media not only described events but also helped shape them in ways that exacerbated the worst effects of climate change for the most vulnerable. Today, media depictions of citizens furious about their lack of clean food or water, protestors enraged by the seizure and pollution of their homes, and refugees displaced by drought and violence can similarly worsen the social consequences of global warming. We must have a media that fairly describes the impacts of climate on people around the world, and we must keep a critical eye on media in order to adapt to and perhaps mitigate climate change.
“Disturbances in Norfolk And Suffolk.” The Times, May 23, 1816, pp. 3. The Times Digital Archive.
“London, Saturday, May 25, 1816.” The Times, May 25, 1816, pp. 3. The Times Digital Archive.
“London, Monday, May 27, 1816.” The Times, May 27, 1816, pp. 3. The Times Digital Archive.
“London, Thursday, May 30, 1816.” The Times, May 30, 1816, pp. 2. The Times Digital Archive.
“Monthly Agricultural Report.” The Observer, Feb 04, 1816, pp. 4, ProQuest Historical Newspapers: The Guardian and The Observer.
“Monthly Agricultural Report.” The Observer, May 05, 1816, pp. 4, ProQuest Historical Newspapers: The Guardian and The Observer.
Oppenheimer, Clive. "Climatic, Environmental and Human Consequences of the Largest Known Historic Eruption: Tambora Volcano (Indonesia) 1815." Progress in Physical Geography, vol. 27, no. 2, 2003, pp. 230-259, doi:10.1191/0309133303pp379ra.
Peacock, Alfred James. Bread or Blood: a Study of the Agrarian Riots in East Anglia in 1816. Victor Gollancz, 1965.
Post, John D. The Last Great Subsistence Crisis in the Western World. Johns Hopkins University Press, 1977.
“Riots in Suffolk” The Times, May 21, 1816, pp. 3. The Times Digital Archive.
“Tambora.” Global Volcanism Program, Smithsonian Institution, 2013. volcano.si.edu/volcano.cfm?vn=264040
Ward, Peter L. “Sulfur Dioxide Initiates Global Climate Change in Four Ways.” Thin Solid Films, vol. 517, no. 11, 2009, pp. 3188-3203, doi:10.1016/j.tsf.2009.01.005.
“Yesterday the Princess Charlotte and her husband received congratulatory addresses from Salisbury and.” The Times, May 23, 1816, pp. 3. The Times Digital Archive.
Dr. Ruth Morgan, Monash University
Non-tabular iceberg off Elephant Island in the Southern Ocean. Source: Andrew Shiva, Wikipedia.
Ice, or a lack of it, is an “icon” of anthropogenic climate change. Earlier this year, researchers reported that a rift in Antarctica’s fourth-largest ice shelf has accelerated and could soon cause a vast iceberg to break off into the sea. After the collapse of the ice shelf, the glaciers it once held back will flow unimpeded into the sea. Glaciers like these, Mark Carey has observed, have become an “endangered species” of the Anthropocene. Yet only a few decades ago, Antarctic ice was the hero in a visionary episode of the planet’s recent “cryo-history”.
In October 1977, scientists met at Iowa State University to discuss the latest findings in the emerging field of “iceberg utilization”. Eager to promote the cause was conference co-sponsor Prince Mohammed al-Faisal of Saudi Arabia, who flew an iceberg weighing over two tonnes from the Portage Glacier Field near Anchorage, Alaska to Ames, Iowa for the occasion – producing at least 7 tonnes of carbon dioxide over the 5,000 km journey. One local couple, who brought plastic bags, a bucket, and an ice-pick to the iceberg’s unveiling, told the New York Times, “I don’t know what we’ll do with it – serve it in drinks, I guess. We’ll have a cocktail party”.
A series of US television news features documenting the Iceberg Utilization Conference, October 1977. Source: YouTube / Special Collections and University Archives, Iowa State University.
These stunts amused onlookers, but they were no laughing matter for the researchers studying the possibility of towing Antarctic icebergs to arid and semi-arid climes. Iceberg utilization was a tantalizing prospect for solving one of the world’s pressing problems: global water shortages. In their controversial study The Limits to Growth, the interdisciplinary research group the Club of Rome had earlier warned that the availability of fresh water was a limit to growth that “will be reached long before the land limit becomes apparent”. Bolstering this neo-Malthusian prediction were the widely reported droughts in the Sahel and Ukraine, and the failure of the Indian monsoon, during the early 1970s.
An excerpt from the public affairs program, Dimension 5, which aired on WOI-TV in central Iowa, USA, October 1977. Panellists include Prince Mohamed Al Faisal of Saudi Arabia, Henri Bader, Daniel J. Zaffarano, Richard L. Cameron, and Ed Cronick. Source: YouTube / Special Collections and University Archives, Iowa State University.
These anxieties were the focus of the 1977 United Nations Conference on Water in Mar del Plata, Argentina, where fresh water was declared a “scarce asset” that demanded coordinated resource development and management. Among the options discussed to increase water supplies were so-called “complex technologies” and “non-conventional methods”, such as seawater desalination. By the late 1970s desalination was already well established in Kuwait, and Saudi Arabia was eager to replicate its neighbour’s success. Leading this mission (at least until Antarctic icebergs beckoned) was the head of the Saudi Saline Water Conversion Corporation: Prince Mohamed al-Faisal. He shared his vision with the Christian Science Monitor, “Over a period, we would hope to change the vegetation and climate in some coastal areas”.
The Prince’s idea was several decades in the making. The prospect of using icebergs to modify local climates and to provide endless water supplies to the world’s thirstiest regions had emerged in the decade after the Second World War. In a 1949 class at the Scripps Institution of Oceanography in California, oceanographer John Isaacs had speculated on the subject, and later expanded on his thinking in the February 1956 issue of Science Digest. He proposed floating an Antarctic iceberg along the Humboldt Current to the coast of southern California from where it could supply water to Los Angeles.
The feasibility of such a scheme had been confirmed in 1969, when glaciologist Willy Weeks and geophysicist Bill Campbell surprised even themselves when they concluded that towing icebergs to arid lands was “within the reach of existing technology”. They based their calculations on a large tabular iceberg that was twice the size of the Great Pyramid of Giza, which was less likely to roll in transit and more likely to be found near the Antarctic than the Arctic. The optimum routes for towing such an iceberg, they suggested, were from the Amery Ice Shelf to southwestern Australia and from the Ross Ice Shelf to the Atacama Desert.
“Optimum towing paths between the Amery Ice Shelf and Australia and the Ross Ice Shelf and the Atacama Desert.” Fig. 8, Weeks and Campbell, 1973, p. 220.
In 1973, the National Science Foundation and the Rand Corporation sponsored a subsequent report on the feasibility of such a scheme for southern California. Antarctic icebergs could supply water for urban, industrial, and agricultural demands, while helping to abate the growing thermal pollution of the industrialized region. According to the report’s estimates, towing an iceberg from the Ross Sea to the Pacific southwest would be significantly cheaper than inter-basin water transfers and desalination. Furthermore, nuclear energy could be used for the towing, which would alleviate the need for fossil fuels during a decade of uncertain oil supplies.
The possibility of endless water supplies was too good to ignore and the Saudi prince assembled experts from around the world to advance the field of “iceberg utilization”. His 1977 conference in Iowa attracted scientists from arid and semi-arid countries such as Egypt, Greece and Libya, as well as nations with polar territories, such as Australia, Chile and Canada. Nearly three quarters of the attendees were from the United States, most of whom were associated with the military-industrial-academic complex. They included researchers from the Jet Propulsion Laboratory, Tetra Tech International, the Lawrence Berkeley Laboratory, the US Army Cold Regions Research and Engineering Laboratory, and the Naval Weapon Centre.
The lone woman speaking at the conference was the pioneering meteorologist Joanne Simpson from the University of Virginia, Charlottesville. Simpson had been director of the experimental meteorology laboratory of the National Oceanic and Atmospheric Administration and a member of the Weather Modification Advisory Board. Two decades of studying the intersections of cloud physics with hurricane research informed her comparison of Antarctic icebergs to cloud seeding, as well as her study of the atmospheric impacts of iceberg utilization. Although towing an iceberg would cost more than cloud seeding, she estimated that its meltwater would more than make up for the expense. In icebergs, Simpson also saw a means to mitigate the toll of tropical hurricanes: an iceberg could lower the surface temperature of the ocean ahead of an advancing storm, helping to reduce its destructive winds.
“Illustration of possible new approach to the hurricane mitigation aspect of weather modification. Hurricanes are known to diminish in strength when they move over cooler water, here shown hypothetically to be supplied by a melting iceberg.” Source: Fig. 5, in Simpson, 1978, p. 865. Artist: Tom Henderson.
Simpson was well aware of the credibility gap that such endeavours faced. In 1978 she wrote, “For meteorology as a whole, public overheated controversy on weather modification gives the entire profession an image of ridiculous bumblers or even charlatans”. But the opportunity to “serve humanity” outweighed these concerns, and she welcomed alternative modification methods.
Despite the promise of iceberg utilization, its potential impact on local climates became one of the many reasons why the vision did not become a reality. In Australia, for instance, enthusiastic plans for the continent’s southwest were rejected in the mid-1980s on the grounds that an iceberg “parked offshore for several years” might affect the regional climate in unexpected and unwanted ways. Peter Schwerdtfeger, the scheme’s Australian proponent, lamented that its feasibility lay not in science and technology, but in “politically and economically based decisions”. He remained confident, however, that iceberg utilization would occur when “individual nations recognise their obligations to the more thirsty segment of mankind” and choose to exploit the Antarctic icebergs that otherwise “melt pointlessly in the Southern Ocean”. According to this logic, the failure to take advantage of the icebergs was tantamount to wasting precious water resources.
The possibility of iceberg utilization was one of many post-war technological visions. The futurism and science fiction of the atomic age urged the exploration and exploitation of new planetary frontiers such as the deep ocean and outer space. In the Cold War context, measuring, monitoring and manipulating the physical environment on a global scale had the potential to fulfil both military and peaceful ambitions. The iceberg “visioneers” were bit players in a wider debate about the Earth’s future, one that pitted the constraints of ecological limits against the possibilities of technological innovation. Just as the atom offered an inexhaustible source of cheap energy, Antarctica was a cornucopia of renewable fresh water simply awaiting the application of human ingenuity. Four decades later, we are searching for ways to keep that water well and truly locked up.
Al-Nakib, Farah, Kuwait Transformed: A History of Oil and Urban Life (Palo Alto, CA: Stanford University Press, 2016).
Behrman, Daniel with John D. Isaacs, John Isaacs and His Oceans (Washington, DC: ICSU Press, 1992).
Carey, Mark, “The History of Ice: How Glaciers Became an Endangered Species,” Environmental History 12 (2007): 497-527.
Carey, Mark, M. Jackson, Alessandro Antonello and Jaclyn Rushing, “Glaciers, Gender, and Science: A Feminist Glaciology Framework for Global Environmental Change Research,” Progress in Human Geography 40, no. 6 (2016): 770-93.
Fleming, James R., Fixing the Sky: The Checkered History of Weather and Climate Control (New York: Columbia University Press, 2010).
Gosnell, Mariana, Ice: The Nature, the History, and the Uses of an Astonishing Substance (Chicago: University of Chicago Press, 2005).
Hamblin, Jacob Darwin, Arming Mother Nature: The Birth of Catastrophic Environmentalism (New York: Oxford University Press, 2013).
Harper, Kristine C., Make it Rain: State Control of the Atmosphere in Twentieth-Century America (Chicago: University of Chicago Press, 2017).
Hult, J.L. and N.C. Ostrander, Antarctic Icebergs as a Global Fresh Water Resource (Santa Monica, CA: Rand, 1973).
Husseiny, A.A. (ed.), Iceberg Utilization: Proceedings of the First International Conference and Workshops on Iceberg Utilization for Fresh Water Production, Weather Modification, and Other Applications, held at Iowa State University, Ames, Iowa, USA, October 2-6, 1977 (New York: Pergamon Press, 1978).
Jones, Toby Craig, Desert Kingdom: How Oil and Water Forged Modern Saudi Arabia (Cambridge, MA: Harvard University Press, 2010).
Leslie, Stuart W., The Cold War and American Science: The Military-Industrial-Academic Complex at MIT and Stanford (New York: Columbia University Press, 1993).
McCray, W. Patrick, The Visioneers: How a Group of Elite Scientists Pursued Space Colonies, Nanotechnologies and a Limitless Future (Princeton: Princeton University Press, 2013).
Rozwadowski, Helen M., “Arthur C. Clarke and the Limitations of the Ocean as a Frontier,” Environmental History (2012): 1-25.
Sabin, Paul, The Bet: Paul Ehrlich, Julian Simon, and Our Gamble over Earth’s Future (New Haven, CT: Yale University Press, 2013).
Schmidt, Jeremy J., Water: Abundance, Scarcity, and Security in the Age of Humanity (New York: NYU Press, 2017).
Schwerdtfeger, Peter, “The Development of Iceberg Research and Potential Applications,” Polar Geography and Geology 9, no. 3 (1985): 202-209.
Simpson, Joanne, “What Weather Modification Needs – A Scientist’s View,” Journal of Applied Meteorology 17 (1978): 858-66.
Sörlin, Sverker, “Cryo-History,” in The New Arctic, (eds.) Birgitta Evengård, Joan Nymand Larsen and Øyvind Paasche (New York: Springer, 2015), pp. 327-39.
Weeks, Wilford J. and William J. Campbell, “Icebergs as a Freshwater Source: An Appraisal,” Journal of Glaciology 12, no. 65 (1973): 207-33.
Patrick Gage, Georgetown University
People care about climate change when it affects them. That is why Pacific islanders fear rising sea levels more than the average American, and why many who live in coastal cities fear a projected increase in tropical cyclones more than those further inland. Yet the idea that an environmental change “over there” will not affect communities “here” actually makes little sense. History is rife with examples of human crises brought on by seemingly distant climatic events.
One of the clearest examples unfolded in late nineteenth century Northeastern Brazil (Nordeste). A powerful El Niño-Southern Oscillation (ENSO) event warmed the waters of the equatorial Pacific Ocean, changing atmospheric circulation in ways that brought extreme rain shortages to Brazil, and ultimately launched the nation’s first rubber boom. The Grande Seca, or “Great Drought,” of 1877-1878 not only killed hundreds of thousands of northeasterners (nordestinos), but also sparked massive internal migration. The latter proved particularly problematic for the state of Ceará, from which thousands emigrated. Cearenses thus provided rubber barons in nearby Amazonas and Pará an invaluable supply of cheap labor, which they needed to meet growing demand. By 1900, the country exported more rubber than any other commodity except coffee. El Niño therefore shaped the history of Brazil.
ENSO events affect the global environment on an irregular basis. Typically, Peru’s cold Humboldt Current flows northward along the South American coast before easterly trade winds push it west along the equator. Warmed by the sun, its waters increase in temperature as they approach Indonesia, making the western Pacific hotter than the east. El Niño reverses these trends: trade winds and the Humboldt’s westward flow subside, westerly winds pick up, Kelvin waves carry warm water from Asia to South America in a process called “advection,” and hot, humid air masses travel toward Peru and Ecuador. Sea temperature in the eastern equatorial Pacific subsequently rises, causing changes in precipitation across the Americas. While coastal Peru faces torrential rain, Brazil’s Nordeste experiences severe drought. The distinct relationships, or teleconnections, between ENSO and local climates generate different phenomena depending on the region. When Western Canadians enjoy an unusually warm winter, for example, Western Europeans may endure an especially cold one.
El Niño and drought in Northeastern Brazil therefore often coincide, but not always. The Brazilian Northeast has struggled with intermittent drought for centuries. Although its sugar- and cotton-heavy coast generally receives sufficient rain, the region suffered no fewer than forty-four distinct dry spells between 1557 and 1992, or approximately one every ten years. Removing an abnormally wet period from 1615 to 1691 reduces that average to roughly one every eight years. What is more, of the fifteen so-called “major” droughts—those spanning at least two consecutive summers—only six occurred before 1800, implying a quantitative and qualitative increase over the past 200 years. While some of these dry spells occurred in concert with ENSO, many did not. Water shortages plague the Nordeste regardless of ocean temperature.
Different droughts affected the water-dependent Northeast differently. Though many were forgotten, some left indelible marks, none more than the Grande Seca. From 1877 to 1878, two “very strong” El Niño years dramatically increased water shortages and decimated the Nordeste, killing livestock and people by the tens of thousands. Ceará suffered most. As cattle and crop losses wiped out food supplies, the state’s death toll mounted. By 1878, 175,000 Cearenses had perished. All told, at least 500,000 nordestinos died and three million fled their homes. Newspapers from Ceará described the tragedy in heart-wrenching detail.
On 6 January 1877 (mid-summer), Cearense noted the first signs of hardship: “The lack of rains is already being felt. From Sobral and other … points of the province they tell us … the drought is … causing considerable damage.” Desperate letters painted a dismal picture. On 11 March, one man in Crato wrote: “We are with a terrible drought … and only God knows how painful this scourge will be.” Relayed another from Caixoçó: “The drought is ravaging everything, the mortality of cows is astonishing.”
The situation did not improve as March and the late rainy season became early winter. One correspondent from Assaré feared complete human annihilation in the surrounding countryside, while O Retirante (“The Refugee”) lamented the “emaciated bodies of our little children, wives and fathers.” A letter published several days before Christmas ended 1877 on a depressing note: “Already we are in the middle of December and not any rain! The drought with all its procession of horrors proceeds, threatening to swallow everything.”
The Grande Seca officially ended in 1878, but its effects lasted far longer. The drought crippled Northeastern sugar barons, who had watched their investments wither since the early 1800s. Cotton growers, whose business boomed during and after the American Civil War (1861-1865), likewise faced renewed headwinds, while cattle ranchers counted their losses in the hundreds of thousands of heads. The deadliest drought in Brazilian history, exacerbated by two consecutive years of exceptionally strong El Niño, therefore had a significant economic impact on the Nordeste, draining it of much-needed capital and contributing to the region’s lackluster development.
Above all, drought victims needed jobs, especially in Ceará. As an 11 March 1877 letter from Icó indicated, people often died “not because there [was] an absolute lack of foods, but because there [was] nothing with which to buy them.” Millions of desperate Cearenses therefore migrated to major population centers, hoping to find work. Among emigrants’ limited options, Brazil’s burgeoning rubber industry proved particularly appealing, both for its relatively high wages and geographical proximity.
Based in the Amazon Valley, namely the states of Amazonas and Pará, Brazilian rubber production did not begin until the late 1700s, after French explorer Charles Marie de La Condamine first watched natives use a “milky, viscous liquid” from the Hevea brasiliensis tree to make boots, toys, and bottles. Fueled by what amounted to a minor “gold rush,” exports of raw rubber and rubber products grew steadily through the early 1800s. The trade took off when Charles Goodyear discovered vulcanization in 1839, which made rubber resistant to extreme temperatures. Exports jumped from 388,260 kg in 1840 to 2,673,000 kg in 1860. Nevertheless, rubber remained largely irrelevant in Brazil until its first boom in the 1880s, when price increases and an influx of cheap labor pushed the commodity’s export share to 10 percent. That number soared to 39 percent by 1910. Brazil’s natural claim to Hevea made it the world’s largest producer for three decades.
Despite remarkable success, Brazilian rubber barons faced constant labor shortages throughout the late nineteenth and early twentieth centuries. The Grande Seca thus benefitted them immensely. Starving Cearenses, whom the rubber industry “desperately needed,” cared little about working conditions as long as they were paid, and so accepted jobs few others dared to take—among them tapping Hevea trees in a hot, disease-ridden rainforest.
During the Grande Seca, Ceará became a key state for labor recruiters from Amazonas and Pará. In 1916, Joseph Woodroffe, a European eyewitness, claimed immigration to the Amazon Valley consisted exclusively of Cearenses, largely in response to the drought. Weinstein, Barham and Coomes, Caviedes, and Resor also acknowledge the Grande Seca’s role in driving poor Cearenses to the jungle, where they supported plantations as cheap tappers (seringueiros). But despite catastrophic death tolls from 1877 onward, emigration did not find universal support in Ceará. On the contrary, Cearense and its editors openly opposed the state’s depopulation for economic and humanitarian reasons.
Cearense arranged the debate as follows. On 15 April 1877, an “enlightened friend” in Sobral noted: “We continue to think … one of the most useful ways of applying aid, to which the State is obligated, would be … to promote seriously the emigration of our population to more fertile and almost unpopulated regions of other provinces.” Several pages later, however, a sordid column lamented the fact that thirty refugees had recently arrived in Fortaleza, Ceará’s capital, and hoped to reach the Amazon Valley. “This idea of emigration to other provinces,” the author mused, “is of incalculable disadvantages to Ceará.” Cearense’s publishers agreed, as future editions only “supported” emigration insofar as they acknowledged opposing views and occasionally allowed independent writers to criticize their claims.
The paper solidified its stance on 18 April. Emigration to Amazonas and Pará, it argued, was “harmful … to [Ceará] … because it [ripped out] a large number of strong arms for plowing.” Over the next seven months, such fears came up time and again. In July, for example, one writer professed concern for the state’s future: “…supposing [the drought] is transitory, how will we repopulate our deserted hinterlands if we remove … by means of a broad emigration, their natural inhabitants?” Together, these columns typified a standard economic argument against outmigration, namely that Ceará would need people to rebuild once the Grande Seca passed, and therefore could not absorb any more losses than necessary. But this only explains some of Cearense’s hostility toward open borders.
Though principally worried about Ceará’s financial prospects, educated nordestinos also expressed sympathy for destitute workers. Cearense printed articles throughout 1877 noting that rubber jobs in Amazonas and Pará were difficult and exploitative. On 18 April, the paper published several letters from Father José Thomaz, “who painted with blackest colors the luck of the poor emigrant, who is there [in the Amazon] reduced to the hardest and cruelest captivity by the rubber tappers to whom he hires his services.” Another pundit claimed Cearenses who left for Amazonas would likely “perish in the swamps.”
As more reports of emigration made their way into Cearense, so too did overt warnings. “Our wretched brothers who have gone to [Amazonas] have suffered horrible trials,” wrote one author on 18 October. Yet faced with certain death by disease or starvation, Cearenses continued to flee. By 23 September, at least 1,552 had crossed into the Amazon Valley, followed by hundreds more before the end of the year. Most left for rubber plantations.
Cearenses migrated by the thousands to Amazonas and Pará at the same time Brazil’s first rubber boom began (early 1880s). Those dates are no coincidence. While Amazonian elites owed their success to many different factors, drought-stricken nordestinos provided the foundation. Without adequate labor, there would never have been a rubber industry, let alone a profitable one.
Late nineteenth and early twentieth century Brazilian rubber production had far-reaching environmental consequences. When Emperor Pedro II created the province of Amazonas in 1850, Manaus, its capital, comprised little more than “a small collection of mud huts.” That changed rapidly as speculators flooded the region. The Amazonian North’s population quadrupled from 250,000 in 1853 to almost one million in 1910. Manaus and its Paraense counterpart, Belém, benefitted immensely: electricity, streetcars, exquisite theaters, and large ports graced the once-barren cities. Countless new rubber trails cut through the rainforest as well, in addition to increased traffic on the river. That said, the industry’s initial emphasis on wild Hevea trees delayed mass deforestation for several decades, while industrial cattle ranching, which would have required a dramatic physical reorganization of the Amazon Valley, lacked sufficient investment.
Droughts have shaped Northeastern Brazil for centuries, yet the Grande Seca stands out. Not only was it longer and drier than most, but it also came at a time of profound demographic and economic transformation in Brazil. That magnified both its death toll and its consequences for the human and environmental histories of the nation.
The past, like the present, proves Earth’s interconnectedness. Environmental shifts “over there” will eventually affect us “here.” More than one hundred years ago, warming water in the Pacific Ocean changed the course of Brazilian history, driving extraordinary investment in the previously untapped Amazon Valley. In the same way, natural disasters, rising sea levels, and other symptoms of global warming will inevitably influence how all of us live our lives, regardless of geography.
There is no running away. We must face this crisis together.
Barham, Bradford L., and Oliver T. Coomes. Prosperity’s Promise: The Amazon Rubber Boom and Distorted Economic Development. Boulder: Westview Press, 1996.
Burns, E. Bradford. A History of Brazil: Third Edition. New York: Columbia University Press, 1993.
Caviedes, César N. El Niño in History: Storming Through the Ages. Gainesville: University Press of Florida, 2001.
Cearense. Fortaleza, Ceará. 1877. Available at: http://bndigital.bn.gov.br/hemeroteca-digital.
Glantz, Michael H. Currents of Change: El Niño’s Impact on Climate and Society. Cambridge: Cambridge University Press, 1996.
Gergis, Joëlle L., and Anthony M. Fowler. “A history of ENSO events since A.D. 1525: implications for future climate change.” Climatic Change 92, nos. 3-4 (2009): 343-387.
O Retirante. Fortaleza, Ceará. 1877. Available at: http://bndigital.bn.gov.br/hemeroteca-digital.
Pedro II. Fortaleza, Ceará. 1877. Available at: http://bndigital.bn.gov.br/hemeroteca-digital.
Quinn, William H. “A study of Southern Oscillation-related climatic activity for A.D. 622-1900 incorporating Nile River flood data.” In El Niño: Historical and Paleoclimatic Aspects of the Southern Oscillation, edited by Henry F. Diaz and Vera Markgraf, 119-150. Cambridge: Cambridge University Press, 1992.
Resor, Randolph R. “Rubber in Brazil: Dominance and Collapse, 1876-1945.” The Business History Review 51, no. 3 (1977): 341-366.
Villa, Marco Antonio. Vida e morte no sertão: História das secas no Nordeste nos séculos XIX e XX. São Paulo: Editora Ática, 2000.
Weinstein, Barbara. The Amazon Rubber Boom: 1850-1920. Stanford: Stanford University Press, 1983.
Woodroffe, Joseph F. The Rubber Industry of the Amazon and How Its Supremacy Can Be Maintained. Edited by Harold Hamel Smith. London: T. Fisher Unwin and Bale, Sons and Danielsson, 1916. Available at: https://archive.org/details/rubberindustryof00woodrich.
Dr. Bathsheba Demuth, Brown University
Most students at Brown University know Professor Kathleen Hess from the two-semester challenge of organic chemistry. But in a class that debuted this fall, “Exploration of the Chemistry of Renewable Energy,” Dr. Hess blended the tools of her discipline with questions of human impacts on the climate, renewable energy technologies, and the social impact of how energy is generated and used. The result is a socially engaged course that joins social science with bench science. “I thought this would be a perfect way to teach students who were not science majors,” Hess explains. “That was my goal.”
Courses on climate or energy history, renewable energy, and the relationship between climate and society are now taught at universities and colleges across the country. Most are designed by faculty in humanities, earth science, or engineering departments. Hess’s class offers a new model. Inspired by the Chemistry Collaborations, Workshops, and Communities of Scholars (cCWCS) pedagogy seminars, Hess’s syllabus combines interdisciplinary readings, guest lectures, writing assignments, and laboratory experiments. “I wanted to give students both background on the topic,” she says, “and then give them the hands-on experiment so they would have practical experience.”
The course began by examining why renewable energy sources are increasingly important. Students read about fossil fuel pollution, climate change, and energy politics. They also did lab experiments to calculate how much energy is required to light a classroom. Then the syllabus moved on to examine batteries, fuel cells and solar panels. Hess framed each topic around a question. “Scientists should always be enquiring rather than saying we’re just going to the lab to make such and such,” Hess says. In one case, the class spent several weeks researching sources and uses of biofuel energy. Then students went to the lab to make fuel out of food waste from the Brown dining halls. “The students were really excited about this,” Hess notes. But when the class compared the energy yield to other fuels, “there was a lot of ‘oh, this is why we don’t do this,’” Hess says. “It was more of an illustration than just looking at another graph, because they saw and understood the processes involved.”
In another case, students produced acid rain in a petri dish. Unlike history or policy classes, where acid rain is a topic, or most chemistry classes, where experiments are done in solution, Hess’s students saw “how concrete and bridges erode, and saw how materials travel through the air.” Students designed experiments to measure individual carbon emissions. In another experiment the class made their own hydrogen fuel cells, which required working with hydrochloric acid. Hess says hands-on exercises like this generated a great deal of student enthusiasm, not just energy in the fuel cells, but were also complex and delicate. This was sometimes a challenge for students not used to the lab sciences. “Sometimes just getting ready to do the labs,” Hess says, “took some time and explaining. Sometimes they didn’t know how to start. So there could be a bit of inertia there.” Overall, however, Hess found “the level of student interest was really high. At the end of the course the students told me that none of the lab assignments felt like homework, because they were so enjoyable.”
Across case studies, Hess linked the experiments back to social, political, and economic questions. Hess says her class arrived with “quite a few preconceived notions about why people believe in global warming or not, why they’re interested in renewable energy or not.” Through readings and lectures that covered climate change, the development of the current energy grid, the history of the electric car, the use of solar panel systems, and how humans have used different energy sources in the past, students started thinking about “how none of them have ever lived without power – without a light switch to turn on.” Students read about everything from global energy transitions to oil company correspondence about fossil fuel development. “I wanted them to see that we can always judge why people use the resources they do,” Hess explains, “but there are multiple sides to the story.”
Seeing these multiple sides helped students understand how the physical principles and technologies they were learning about in the lab “was one thing, but how to incorporate it into society is another,” Hess says. She had each student choose a renewable technology – from algal biofuels to concentrated solar – and design a brochure to convince consumers to use a new source of energy. Students also presented the results of their alternative energy research to the class. For Hess, this was the most inspiring part of the course. As each student learned to combine their technical knowledge from labs with their research on specific fuels, she says “they felt that was encouraging because they had to come up with an alternative energy to talk about, and knew collectively about all these different options.”
While thinking about climate change and the future is often discouraging and leaves individuals unsure how to respond, Hess found this course affirmed her sense that “education is the first step away from not knowing what to do. Especially mindful education where we don’t just judge things, but examine the combination of physical processes and assumptions that make them happen.” The best approaches to teaching climate change often combine perspectives from many disciplines, from the sciences to the humanities.
Dr. Dagomar Degroot, Georgetown University
The world is warming, and it is warming fast. According to satellites and weather stations, Earth's average annual temperature will smash the instrumental record this year, likely by around 0.1°C. Last year, global temperatures broke the record by around the same amount. That may not seem impressive, but consider this: temperatures have climbed by about 0.1°C per decade since the 1980s. In just two years, therefore, our planet catapulted two decades into a hotter future.
Global climate change on this scale, with this speed, is unprecedented in the history of human civilization. Yet that history has still coincided with other, smaller but still impressive changes in Earth's climate. Humans may have played a minor role in some of these changes. The key culprits, however, were often violent explosions on Earth that coincided with periods of unusual solar activity. The most dramatic climate changes usually involved global cooling, not warming. The consequences for communities and societies around the world could be profound, in ways that offer lessons for our fate in a changing climate.
One of the coldest periods in the history of human civilization started in the early sixth century CE. Growth rings in the wood of trees suddenly narrow around 536 CE, and again around 541 CE. This narrowing reveals that trees practically stopped growing as Northern Hemisphere temperatures plunged by as much as 3°C, relative to long-term averages.
Other scientific "proxy" sources that responded to past climate changes reveal the same trend. A large team of interdisciplinary scholars, led by Ulf Büntgen, recently concluded that 536 CE was the first year of a "Late Antique Little Ice Age" - not to be confused with the better-known Little Ice Age of the early modern period - that chilled the Northern Hemisphere and perhaps the globe until 660 CE.
What could have caused this cooling? Cosmogenic isotopes tell us that solar activity had been falling for more than a century, as the sun gradually entered a "grand solar minimum." But that does not explain why Earth's climate changed so profoundly, and so abruptly, in the early sixth century CE.
Scientists now believe that ice cores containing traces of volcanic ash provide compelling evidence for a remarkable series of major eruptions, in 536, 540, and 547 CE. Big volcanic eruptions in the tropics can cool the Earth by releasing sunlight-scattering sulphur into the atmosphere. Trade winds swirling up from the equator bring this sulphur into both hemispheres, which ultimately creates a global volcanic dust veil. When eruptions happen in quick succession, Arctic sea ice can expand dramatically. Since bright sea ice reflects more sunlight than water, the Earth cools in response, which of course leads to more sea ice, more cooling, and so on.
Catastrophic volcanic eruptions, coinciding as they did with a prolonged decline in solar activity, may well have released enough aerosols into the atmosphere to usher in a much cooler climate. Yet sixth-century layers in Greenlandic ice cores may also suggest a very different, and even more exotic, culprit for climatic cooling.
Somehow, microscopic marine organisms of a kind normally found near tropical coasts ended up in ice layers that correspond to 536 and 538 CE. Layers dating from 533 CE also hold nickel and tin, substances that rarely appear in Greenlandic ice. Both metals are common in comets, however.
A team of scientists led by Dallas Abbott recently concluded that dust from the tail of Halley's Comet may have started cooling the Earth as early as 533 CE. By reconstructing the past orbits of the comet, scientists discovered that it made a particularly close pass around the Sun in 530 CE. At around that time, Chinese astronomers recorded a remarkably bright comet in the night sky.
Earth regularly passes through debris left in the wake of Halley's Comet, and that debris might have been especially dense in the 530s and 540s. Meteor showers, therefore, may well have left cooling dust in the atmosphere, and metals in the ices of Greenland.
Tidal forces created by the gravity of a massive object - such as the Sun - can easily fragment cometary nuclei, most of which are collections of rubble left over from the primordial solar system. Dust released by such a breakup can dramatically brighten a comet. Perhaps that is what Chinese astronomers witnessed in 530 CE, as Halley's Comet swung around the Sun.
According to Abbott and her coauthors, a piece of the comet may then have collided with Earth, launching sea creatures high into the atmosphere. Melted metal and gravity anomalies in the Gulf of Carpentaria off Australia suggest that an impact happened there sometime in the first millennium CE. At around the same time, Aboriginal Australians etched symbols into caves that may well have represented comets.
It may well be that an extraordinary confluence of extraterrestrial impacts and volcanic eruptions, coinciding with a gradual fall in solar activity, chilled the Earth in the 530s and 540s CE. These dramatic environmental changes naturally astonished contemporary writers. In 536 CE, Procopius of Caesarea, a major scholar of the Eastern Roman Empire, wrote that the “sun gave forth its light without brightness, like the moon.” According to John of Ephesos, “there was a sign in the sun the like of which had never been seen and reported before in the world . . . The sun became dark and its darkness lasted for one and a half years."
A Syrian chronicler recorded that "The earth and all that is upon it quaked; and the sun began to be darkened by day and the moon by night." Chinese astronomers lost sight of Canopus, one of the brightest stars in the night sky. If there was a dust veil, it may well have been thick enough to obscure the heavens, whatever its origins.
Cassiodorus, a Roman statesman in the service of the Ostrogoths, wrote perhaps the most striking descriptions of the changes in Earth's atmosphere. "Something coming at us from the stars," he explained, had led to a "blue colored sun," a dim full moon, and a "summer without heat." Amid "perpetual frosts" and "unnatural drought," plants refused to grow and "the rays of the stars have been darkened." The cause, to Cassiodorus, must be high in the atmosphere, for "things in mid-space dominate our sight," and the "heat of the heavenly bodies" could not penetrate what seemed like mist.
Of course, we must guard against the assumption that observers such as Cassiodorus or Procopius simply recorded what they saw in the natural world. Descriptions of environmental calamities in ancient, medieval, and even early modern texts can be allegorical, representing social, not environmental developments. Still, many authors wrote eerily similar accounts of the real environmental upheavals in the 530s CE. To the modern eye, that of Cassiodorus in particular may seem to add evidence for a cometary cause of contemporary cooling.
As temperatures plummeted and plants withered, communities around the world suffered. Scientists have examined pollen deposits that reveal sharp drops in the extent of cultivated land across Europe. Shorter growing seasons probably led to food shortages and famines that emptied once-thriving villages. Archaeological evidence suggests, for example, that Swedes abandoned most of their population centers in the sixth century; forests then swallowed the deserted settlements. Swedish survivors apparently created new towns in far smaller numbers, in upland areas removed from their former dwelling places.
Famines may have had particularly severe consequences across the densely populated Mediterranean. In 533 CE, just as cometary dust may have started entering Earth's atmosphere, the emperor of the Eastern Roman Empire, Justinian I, embarked on a costly campaign to restore the Western Empire. His subsequent wars in the Mediterranean, combined with a war against the Sassanid Empire that erupted in 540 CE, drew precious resources from the imperial countryside. As growing seasons declined, the demands of war compounded food shortages for millions of imperial citizens. Starvation spread through the empire, but worse was to come.
Malnutrition reduces fat-storing cells that produce the hormone leptin, which plays a key role in controlling the strength of the human immune system. In the sixth century, food shortages therefore weakened immune systems on a grand scale, leaving millions of people more vulnerable to disease. Those who survived famines also migrated to new towns or cities, increasing the likelihood that those infected with diseases would spread them.
Unfortunately for the inhabitants of what was left of the Roman Empire, Yersinia pestis, the pathogen behind the bubonic plague, was about to make its first appearance in Europe. From 541 to 542 CE, the “Plague of Justinian” swept through both the Western and Eastern halves of the Roman Empire, killing as many as fifty million people. In a warmer, more stable climate, the death toll may well have been far lower.
Not surprisingly, Justinian's campaign to retake the Western Empire stalled after the early 530s CE, although the reunified Roman Empire did reach its maximum extent in the 550s CE. Imperial resources were stretched thin, however, and European kingdoms reversed most of the new conquests soon after Justinian's death.
Climatic cooling probably had cultural consequences, too. There are signs, for example, that religious activity surged across Scandinavia as temperatures plunged. In times of crisis, devout Scandinavians offered gold to their gods in a way we might find counterintuitive: by burying it. Dating these underground hoards is tricky, but it seems that Scandinavians buried most of them in the sixth century CE. These burials contributed to a gold shortage in Scandinavia that would endure for centuries.
The great oral traditions of Norse mythological poetry also date from the sixth century. Most people have heard of Ragnarök: the "twilight of the gods" that ends with the Earth incinerated and reborn. Fewer have come across the concept of Fimbulvetr, the "mighty winter" that heralds the final battle of the gods.
The Prose Edda, a thirteenth-century transcription of Norse mythology, describes Fimbulvetr in vivid detail. “Then snow will drift from all directions," the Edda predicts. "There will then be great frosts and keen winds. The sun will do no good. There will be three of these winters together and no summer between.” According to the Poetic Edda, a collection of poems also committed to writing in the thirteenth century, “The sun turns black . . . The bright stars vanish from the sky.”
These precise descriptions of an apocalyptic winter have no parallel in other religious texts or mythical traditions. Instead, they echo the sixth-century reports of Cassiodorus, Procopius, and other astonished observers of real environmental transformations. Scandinavians fleeing their homes amid catastrophic cooling may well have felt like they were living through a preview of the apocalypse.
The trauma caused by sixth-century environmental changes may therefore be imprinted on Norse mythology. Ideas of a new world in the wake of Ragnarök may also reflect the consequences of real events, such as the new settlements and cultures that emerged amid climatic cooling.
Can these ancient calamities offer any lessons for our warmer future? Perhaps. They suggest, for example, that complex, densely populated societies, far from being insulated from the effects of climate change, may actually be most at risk. When populations brush up against the carrying capacity of agricultural land, sudden environmental shifts can be catastrophic. In these situations, societies already embroiled in resource-draining wars could be particularly vulnerable. The consequences of sixth-century cooling hint, also, that responses to even short-lived climatic upheavals can profoundly alter cultures in ways that endure for centuries, or even millennia.
Ancient societies, of course, have little similarity to our own. Yet their struggles in periods of dramatic climate change may still shed some light on our prospects in a warming world. To understand the future, we would be well served to look back at the distant past.
Abbott, Dallas H., Dee Breger, Pierre E. Biscaye, John A. Barron, Robert A. Juhl, and Patrick McCafferty. "What caused terrestrial dust loading and climate downturns between AD 533 and 540?." Geological Society of America Special Papers 505 (2014): 421-438.
Arjava, Antti. "The mystery cloud of 536 CE in the Mediterranean sources." Dumbarton Oaks Papers 59 (2005): 73-94.
Axboe, Martin. "The year 536 and the Scandinavian gold hoards." Medieval Archaeology 43 (1999).
Gräslund, Bo, and Neil Price. "Twilight of the gods? The ‘dust veil event’ of AD 536 in critical perspective." Antiquity 86:332 (2012): 428-443.
Hamacher, Duane W. "Comet and meteorite traditions of Aboriginal Australians." Encyclopaedia of the History of Science, Technology, and Medicine in Non-Western Cultures (2014): 1-4.
Widgren, Mats. "Climate and causation in the Swedish Iron Age: learning from the present to understand the past." Geografisk Tidsskrift-Danish Journal of Geography 112:2 (2012): 126-134.
Dr. Tim Newfield, Princeton University, and Dr. Inga Labuhn, Lund University.
Carolingian mass grave, Entrains-sur-Nohain, INRAP.
Will climate change trigger widespread food shortages and huge excess mortality in our future? Many historians have argued that it has done so before. Anomalous weather, abrupt climate change, and extreme dearth often appear intertwined in articles and books on early medieval demography, economy, and environment. Few historians of early medieval Europe would now doubt that severe winters, droughts, and other weather extremes led to harvest failures and, through those failures, to food shortages and mortality events.
Most remaining doubters adhere to the idea that food shortages had causes internal to medieval societies. Instead of extreme weather or abrupt climate change, they blame accidents of (population) growth, deficient agrarian technology, unequal socioeconomic relations and weak institutions. Yet such internal explanations have only rarely stolen the show or dominated the scholarship. For example, Amartya Sen’s “entitlement approach” to subsistence crises, which assigns primary importance to internal processes, has made few inroads in the literature on early medieval dearth, although in later periods it has many adherents.
Of course, the idea that big events have a single cause – monocausality, in other words – rarely convinces historians for long. Famine theorists and historians of other eras and world regions now argue that neither external forces such as weather, nor internal forces such as entitlements, alone capture the complexity of food shortages. They propose that these two explanatory mechanisms, often labeled “exogenous” and “endogenous,” respectively, should not be considered independent of one another or mutually exclusive. To them, periods of dearth can be explained by environmental anomalies, like unusual and severe plant-damaging weather, that coincide with socioeconomic vulnerability and declining (for most people) entitlement to food.
These explanations are more convincing. It seems that diverse factors acted in concert to cause, prolong and worsen food shortages. But proof for complex explanations for dearth in the distant past is hard to come by. Though they can be misleading, simpler, linear explanations are much easier to pull out of the extant evidence. This is true even when the sources are plentiful, as they are, at least by early medieval standards, for some regions and decades of Carolingian Europe. Food shortages in the Carolingian period, especially those that occurred during the reign of Charlemagne, have attracted the attention of scholars since the 1960s.
Left: Bronze equestrian statuette of Charlemagne or possibly his grandson Charles the Bald (823-877). Discovered in Saint-Étienne de Metz and now in the Louvre. The figure is ninth century in date. The horse might be earlier and Byzantine. Charles the Bald ruled the western portion of the post-Verdun empire, although whether he was actually bald is still debated.
Right: A Carolingian denarius (812-814) depicting Charlemagne. The Charlemagne of the Charlemagne reliquary mask (Center) is handsomer. The coin, though, is contemporary and the bust is from the mid fourteenth century. Housed in the Aachener Dom’s treasury, it contains a skullcap thought to be that of the emperor.
For the Carolingian period, ordinances from the royal court, capitularies, reveal hoarding and speculation, and document official attempts to control the prices and movements of grain, while annalists and hagiographers recount severe winters and droughts. All of this evidence sheds light on dearth. Yet the legislative acts point to internal pressures on food supply, while the narrative sources highlight external ones. As we have seen, neither pressure adequately explains subsistence crises alone.
Unfortunately, however, we rarely have evidence for endogenous and exogenous factors at the same time. Around the year 800, when Leo III crowned Charlemagne imperator, most evidence for dearth comes from the capitularies. Before and after, narrative evidence dominates. So Charlemagne’s food shortages appear to have had internal drivers, and Charles the Bald’s external ones. Or so the written sources lead us to believe.
Carolingian Europe as of August 843 following the Treaty of Verdun. Under rex and imperator Charlemagne (742-814), Carolingian territory stretched to include the area of Europe outlined here.
Fortunately, evidence from other disciplines allows historians to fill in some of the gaps. External pressures are easier to establish by turning to the palaeoclimatic sciences. Using them, we are beginning to rewrite the history of continental European dearth, weather and climate from 750 to 950 CE. We are working on a new study that combines a near-exhaustive assessment of Carolingian written evidence for subsistence crises and weather with scientific evidence for changes in average temperature, precipitation, and volcanic activity (which can influence climate).
We are trying to answer some big questions, such as: What role did droughts, hard winters and extended periods of heavy rainfall have in sparking, prolonging or worsening Carolingian food shortages? Were these external forces the classic triggers of dearth that many early medievalists think they were?
Indicators of past climate embedded in trees and ice can test and corroborate observations of anomalous temperature and precipitation. For instance, the droughts of 794 and 874 CE, documented respectively in the Annales Mosellani and Annales Bertiniani, show up in the tree ring-based Old World Drought Atlas (OWDA, see below). Additionally, as McCormick, Dutton and Mayewski demonstrated, multiple severe Carolingian winters also align fairly neatly with atmosphere-clouding Northern Hemisphere volcanism reconstructed using the GISP2 Greenlandic ice core.
The Old World Drought Atlas (OWDA) for 794 and 874. Negative values indicate dry conditions, positive values indicate wet conditions (from Cook et al. 2015).
By marrying written and natural archives, we can sharpen our understanding of the scale and extent of the weather extremes that coincide with Carolingian periods of dearth. Yet instead of simply providing answers, our integrated data are raising questions, and pushing us towards a messier history of early medieval food shortage. This is because the independent lines of evidence often do not agree. For example, only two of the 15 driest years between 750 and 950 CE in the OWDA coincide with drought in Carolingian sources.
Admittedly, some of this dissonance may be artificial. The written record for weather and dearth is incomplete. To be sure, some places and times during the Carolingian era, broadly defined as it is here, are poorly documented. So reported drought years can appear relatively wet in the tree-based OWDA in some Carolingian regions (parts of northern Italy and Provence in 794 and 874, for instance).
Moreover, the detailed or “high-resolution” palaeoclimatology available now for early medieval Europe is much better for some regions than others. Tree-ring series extending back to 750 presently exist for few European regions. It is simply not possible to precisely pair some reported weather extremes or dearths to palaeoclimate reconstructions. Indeed, spatially the two lines of evidence can be mismatched. They can also be seasonally inconsistent, as the trees tell us far less about temperature and precipitation in the winter than they do for the summer.
Matches between historical and scientific evidence are therefore generally limited to the growing seasons, in places where written sources and palaeoclimate data overlap. That is enough to yield some surprising results. When the written record is densest, there is natural evidence for severe weather and rapid climate change, but not for food shortages.
Take the dramatic drop in average temperatures registered in European trees at the opening of the ninth century. According to the 2013 PAGES 2K Network European temperature reconstruction, temperatures were cooler around the time of Charlemagne’s coronation than they had been at any time between the mid sixth and early eleventh centuries. This dramatic cooling aligns well with a relatively small Northern Hemisphere volcanic eruption, detected in the recent ice-core record of volcanism led by Sigl. The eruption would have ejected sunlight-scattering sulfur aerosols into the atmosphere. Notably, larger events in the Carolingian era, like those of 750, 817 and 822, clearly had less of an influence on European temperature. The cold of 800 is equally pronounced but less unusual in a tree-based temperature reconstruction from the Alps. In this series, the late 820s are remarkably cooler.
Documentary sources register the falling temperatures. The Carolingian Annales regni francorum report severe growing-season frosts (aspera pruina) in 800. The Irish Annals of Ulster document a difficult and mortal winter in an entry quite possibly misdated in the Hennessy edition at 798 (799 or the 799/800 winter is more likely). Yet surprisingly, there is no contemporary record of food shortages in Europe.
Top: European Temperature Reconstruction, 0-2000 CE (data from Pages 2K Consortium, 2013).
Bottom: Top panel: Sigl et al. 2015 ice-core record of Global Volcanic Forcing (GVF); middle panel: European temperatures from the PAGES 2K 2013 Consortium (red) and the Büntgen et al. 2011 Alpine temperature reconstruction (burgundy); bottom panel: written evidence for food shortages, both famines (F) and lesser shortages (LS). ‘W’ indicates no evidence for dearth but evidence for extreme weather. Between 750 and 950 we have identified 23 food shortages: 12 spatially and temporally circumscribed lesser shortages and 11 large multi-year famines.
Scholars tend to focus on instances when the written evidence for dearth and the natural evidence for anomalous weather align tidily. It seems that just as often, however, the two lines of evidence do not match so neatly. Severe weather may not always have triggered dearth in the early Middle Ages. Contemporary peoples could apparently cope with weather extremes in ways that allowed them to escape food shortages.
Early medieval vulnerability to external forces of dearth seems to have varied over space and time. We need to investigate the contrasting abilities of peoples from different early medieval regions and subperiods, participating in distinct agricultural economies with their own agrarian technologies, to withstand plant-damaging environmental extremes.
Several studies already suggest early medievals were capable of responding to gradual climate change. But to argue that they were not rigid or helpless when faced with marked seasonal temperature or precipitation anomalies, we must first identify, from sparse sources, potential moments of resilience. In this we run the risk of reading too much into absences of evidence. Yet the conclusion seems inescapable: when written sources are relatively abundant and there is no record of dearth during notable deviations in temperature and precipitation, early medievals must have adapted successfully.
Going forward, we must identify both moments and mechanisms of early medieval resilience in the face of climate change. Teasing these out from diverse sources might be tough going, but these elements are missing from the history of early medieval dearth and climate. Their omission has allowed for misleadingly neat histories of climate change and disaster in the period. Similar problems might well plague other histories that too clearly link climate changes to food shortages and mortality crises. Research that complicates these links could offer compelling new insights about our warmer future.
Authors' note: this is a short sampling of a much longer and more detailed multidisciplinary examination of Carolingian dearth, weather and climate, currently in preparation.
P. Bonnassie, “Consommation d’aliments immondes et cannibalisme de survie dans l’Occident du Haut Moyen Âge” Annales: Économies, Sociétés, Civilisations 44 (1989), pp. 1035-1056.
U. Büntgen et al, “2,500 Years of European Climate Variability and Human Susceptibility” Science 331 (2011), pp. 578-582.
U. Büntgen and W. Tegel, “European Tree-Ring Data and the Medieval Climate Anomaly” PAGES News 19 (2011), pp. 14-15.
F. Cheyette, “The Disappearance of the Ancient Landscape and the Climatic Anomaly of the Early Middle Ages: A Question to be Pursued” Early Medieval Europe 16 (2008), pp. 127-165.
E. Cook et al, “Old World Megadroughts and Pluvials during the Common Era” Science Advances 1 (2015), e1500561.
S. Devereux, Theories of Famine (Harvester Wheatsheaf, 1993).
R. Doehaerd, Le Haut Moyen Âge occidental: Economies et sociétés (Nouvelle Clio, 1971).
P.E. Dutton, “Charlemagne’s Mustache” and “Thunder and Hail over the Carolingian Countryside” in his Charlemagne’s Mustache and Other Cultural Clusters of a Dark Age (Palgrave, 2004), pp. 3-42, 169-188.
M. McCormick, P.E. Dutton and P. Mayewski, “Volcanoes and the Climate Forcing of Carolingian Europe, A.D. 750-950” Speculum 82 (2007), pp. 865-895.
T. Newfield, “The Contours, Frequency and Causation of Subsistence Crises in Carolingian Europe (750-950)” in P. Benito i Monclús ed., Crisis alimentarias en la edad media: Modelos, explicaciones y representaciones (Editorial Milenio, 2013), pp. 117-172.
PAGES 2k Network, “Continental-Scale Temperature Variability during the Past Two Millennia” Nature Geoscience 6 (2013), pp. 339-346.
K. Pearson, “Nutrition and the Early Medieval Diet” Speculum 72 (1997), pp. 1-32.
A. Sen, Poverty and Entitlements: An Essay on Entitlement and Deprivation (Oxford University Press, 1981).
M. Sigl et al, “Timing and Climate Forcing of Volcanic Eruptions for the Past 2,500 Years” Nature 523 (2015), pp. 543-549.
P. Slavin, “Climate and Famines: A Historical Reassessment” WIREs Climate Change 7 (2016), pp. 433-447.
A. Verhulst, “Karolingische Agrarpolitik: Das Capitulare de Villis und die Hungersnöte von 792/793 und 805/806” Zeitschrift für Agrargeschichte und Agrarsoziologie 13 (1965), pp. 175-189.
Dr. Bathsheba Demuth, Brown University.
The Greenlandic coast. Source: TheBrockenInaGlory, Wikimedia Commons, 2005, commons.wikimedia.org/wiki/File:Greenland_coast.JPG
In the year 1001 CE, Leif Erikson made landfall in Greenland, and traded with people who “in their purchases preferred red cloth; in exchange they had furs to give.” The Vikings called these people Skraelings. Present-day archeologists and historians call them the Thule. At its height, Thule civilization spread from its origins along the Bering Strait across the Canadian Arctic and into Greenland. The ancestors of today’s Inuit and Inupiat, the Thule accomplished what Erikson and subsequent generations of Europeans never managed: living in the high Arctic without supplies of food, technology, and fuel from more temperate climates.
The Thule left archeological evidence of a technologically sophisticated, vigorous people. They invented the umiak, an open walrus-hide boat so large that it was sometimes equipped with a sail. These boats, when used alongside small, nimble kayaks, made the Thule formidable marine-mammal hunters. On land, they harnessed dogs to sleds and built homes half-underground, insulated by earth and beamed with whale bones.
People did inhabit the high North American Arctic before the Thule. Their immediate predecessors, called the Dorset by archeologists, were expert carvers, and there are signs of other cultures that date back at least five thousand years. But the Thule appear to have been a particularly robust society, one that inhabited thousands of challenging Arctic miles. Eventually, they even traded with Europeans for metal tools, sending walrus ivory as far abroad as Venice.
Thule migration routes from the Bering Strait east. Map credit: anthropology.uwaterloo.ca/ArcticArchStuff
In the twentieth century, many archeologists linked the success of the Thule to the climate. In this view, rapid Thule expansion coincided with the Medieval Warm Period in the years between 1000 and 1300. The Thule were expert whalers, especially of bowhead whales. The slow-moving species makes for good prey: its 100-ton body can be fifty percent fat by volume, giving people ample calories to eat and burn through long winters. With the slight increase in temperature during the Medieval Warm Period, the theory went, the range of the bowhead whale expanded across newly ice-free waters. Atlantic and Pacific bowhead populations eventually met in the Arctic Ocean north of Canada, offering an uninterrupted banquet of blubber to hunters.
The Thule, in this view, were simply whale hunters who followed the migration of their prey in a warming climate. Environmental conditions, not a sophisticated culture, were the key explanation for their success. Emphasizing climate as the cause of migration and social success reduced the achievements of the Thule, essentially, to those of their prey.
However, twenty-first century evidence is changing this account of Thule migration. In 2000, Robert McGhee questioned the validity of the radiocarbon dates that helped establish Thule expansion as an eleventh-century phenomenon. He proposed the 1200s as the earliest date of migration. Then, genetic tests by marine biologists showed that Atlantic and Pacific bowhead whales did not mix their populations during the Medieval Warm Period, meaning that there was a substantial gap in whaling possibilities on the Arctic coast.
Something more complicated than just following the blubber drove the Thule eastward. McGhee speculated that communities moved for iron, which is in short supply in the Arctic. Thule hunters learned from the Dorset people of a deposit left by the Cape York meteorite. They colonized huge territories to secure their access to this precious resource from outer space. Other specialists theorized that population pressure, overhunting, or warfare led the Thule to migrate east.
Thule archeological site, with whalebone beams among flooring stones. Photo credit: anthropology.uwaterloo.ca/ArcticArchStuff
The ongoing work of Canadian archeologists T. Max Friesen and Charles D. Arnold seems to confirm that we must look beyond simple climatic explanations for the Thule expansion. Working on Beaufort Sea and Amundsen Gulf sites, the pair established that there was no definitive Thule occupation in this part of the western Arctic prior to the thirteenth century. Because any Thule migrants would have had to pass through these points as they moved east, their research indicates that the Thule civilization was only beginning its continental spread around the year 1200, well into the period of warming. The climate may have helped the Thule quickly spread toward Greenland, but the onset of the Medieval Warm Period did not automatically draw people eastward.
Moreover, the work of other archeologists on the Melville Peninsula, along Baffin Bay, indicates that the Medieval Warm Period was not always so warm. Some areas of the Arctic saw slight temperature increases, but in general the millennium was cooler than those past. In places, the effects of the so-called Little Ice Age began a century or two before they were evident across the globe, meaning the Thule adapted not to a warmer Arctic, but a colder one. This cooling was more apparent in the west, where the team found fewer Thule sites but also more stability, both in the climate and the record of human occupation. To the east of the Melville Peninsula, where temperatures did warm, the climate was also more variable – adding a new set of complexities to social and economic life. The move into the central Arctic, therefore, reflected forces other than climate.
Beginning in the fifteenth century, Thule culture fragmented, specialized, and eventually emerged as the distinct contemporary Inuit and Inupiat groups. The Little Ice Age is often blamed for this disintegration. Yet the work by Finkelstein, Ross, and Adams indicates that, while the Thule abandoned some sites due to cooling trends, this did not hold in all cases. Other causes, including increased contact with Europeans and their infectious diseases, might have had more to do with the disintegration in some locations.
Overall, the new vision of Thule prominence in the Arctic makes their rise shorter, but even more impressive. And if the Thule began their migration only in 1200, it seems unlikely they spread east simply to find iron. This would have required only smaller-scale movements to precise locations. Instead, the Thule developed a thriving, intricate network of settlements across the Arctic. For Friesen and Arnold, this is evidence that the Thule expanded in order to recreate the ideological and economic lives that they had enjoyed in their origins along the Bering Strait. And in just a century they did, not only by inhabiting land from the Bering Strait to Greenland, but through explorations to the northern edges of the continent.
All of this also helps us reinterpret a well-known tale from the Viking exploration of the Arctic. When Leif Erikson’s sister Freydis frightened off a band of Skraelingar in the early eleventh century by striking “her breast with the naked sword” of a fallen Viking, she was likely not fighting the Thule, as scholars have assumed. Perhaps it was the Dorset people that “were frightened, and rushed off in their boats.” The Thule, at least, were likely still a century away from the eastern Canadian coastline. They were not easily daunted either by a shifting climate or by Viking weapons.
Quotes from the Saga of Erik the Red, English translation by J. Sephton, can be found here: http://www.sagadb.org/eiriks_saga_rauda.en
Friesen, T. Max and Charles D. Arnold. “The Timing of the Thule Migration: New Dates from the Western Canadian Arctic,” American Antiquity 73 (2008): 527-538.
Finkelstein, S.A., J.M. Ross, and J.K. Adams. “Spatiotemporal Variability in Arctic Climates of the Past Millennium: Implications for the Study of Thule Culture on Melville Peninsula, Nunavut,” Arctic, Antarctic, and Alpine Research 41 (2009): 442-454.
McGhee, Robert. “Radiocarbon Dating and the Timing of the Thule Migration,” in Appelt, M., Berglund, J., and Gulløv, H.C., eds. Identities and Cultural Contacts in the Arctic: Proceedings from a Conference at the Danish National Museum. Copenhagen (2000): 181-191.
Morrison, David. “The Earliest Thule Migration.” Canadian Journal of Archaeology 22 (1999): 139-156.
Betts, Matthew, and T. Max Friesen, “Quantifying Hunter-Gatherer Intensification: A Zooarchaeological Case Study from Arctic Canada,” Journal of Anthropological Archaeology 23 (2004): 357-384.
Dyke, Arthur S., James Hooper, and James M. Savelle. “A History of Sea Ice in the Canadian Arctic Archipelago based on Postglacial Remains of the Bowhead Whale (Balaena mysticetus)”, Arctic 49 (1996): 235-255.
Park, Robert W. “The Dorset-Thule Succession Revisited,” in Appelt, M., Berglund, J., and Gulløv, H.C., eds. Identities and Cultural Contacts in the Arctic: Proceedings from a Conference at the Danish National Museum. Copenhagen (2000): 192-205.