Global catastrophic risk | HearLore
— Ch. 1 · Defining Catastrophic Scope —
Global catastrophic risk.
~8 min read · Ch. 1 of 7
An asteroid caused the extinction of the non-avian dinosaurs. This event serves as a historical anchor for understanding global catastrophic risks: hypothetical events capable of damaging human well-being on a global scale. The term lacks a sharp definition, but it generally refers to risks that could inflict serious damage to human well-being worldwide. Humanity has suffered large catastrophes before, such as the Black Death, which killed a third of Europe's population and roughly 10% of the global population at the time. Some disasters have been global but less severe, like the 1918 influenza pandemic, which killed an estimated 3–6% of the world's population. Most global catastrophic risks would not be intense enough to kill the majority of life on Earth, and even if one occurred, the ecosystem and humanity would eventually recover. In contrast, existential risks threaten to destroy humanity's long-term potential or to cause outright human extinction. Richard Posner singles out events that bring about utter overthrow or ruin on a global rather than local or regional scale. Such events warrant special attention because they could directly or indirectly jeopardize the survival of the human race as a whole.
Natural And Human Hazards
Potential sources of risk are conventionally classified as anthropogenic or non-anthropogenic. Non-anthropogenic examples include asteroid or comet impact events, supervolcanic eruptions, natural pandemics, lethal gamma-ray bursts, and geomagnetic storms from coronal mass ejections destroying electronic equipment. Natural long-term climate change and hostile extraterrestrial life also fall into this category, as does the Sun transforming into a red giant star and engulfing the Earth billions of years in the future. Anthropogenic risks are those caused by humans and span technology, governance, and climate change. Technological risks include artificial intelligence misaligned with human goals, biotechnology, and nanotechnology. Insufficient or malign global governance creates social and political risks such as global war and nuclear holocaust. Biological warfare using genetically modified organisms and cyberwarfare destroying critical infrastructure like the electrical grid constitute further dangers, as does radiological warfare using weapons such as large cobalt bombs. Other risks include environmental degradation, extinction of species, famine resulting from non-equitable resource distribution, and human overpopulation or underpopulation. Crop failures and non-sustainable agriculture remain significant concerns. Experts increasingly worry about cascading risks, the use of AI for bioengineering, and threats involving nuclear weapons systems.
What is the definition of global catastrophic risk?
Global catastrophic risks are hypothetical events capable of damaging human well-being on a global scale. These risks generally refer to events that could inflict serious damage to human well-being worldwide without necessarily causing total extinction.
When did the Black Death occur and how many people died?
The Black Death resulted in the deaths of a third of Europe's population and 10% of the global population at the time. This historical event serves as an example of large catastrophes humanity has suffered before.
Who created the Doomsday Clock and when was it established?
The Bulletin of the Atomic Scientists established the Doomsday Clock in 1947 to study risks associated with nuclear war and energy. The organization itself was founded in 1945 to examine these dangers.
Where is the Svalbard Global Seed Vault located and what does it contain?
The Svalbard Global Seed Vault is buried inside a mountain on an island in the Arctic region. It holds 2.5 billion seeds from over 100 countries to preserve world crops against future disasters.
Why do experts worry about cascading risks involving artificial intelligence?
Experts increasingly worry about cascading risks, in which one catastrophe triggers others, alongside the use of AI for bioengineering and threats involving nuclear weapons systems. These risks change rapidly as technology advances and background conditions shift.
Methodological Challenges
Research into the nature and mitigation of global catastrophic risks faces unique obstacles that limit the usual scientific rigour: it is neither feasible nor ethical to study these risks experimentally. Carl Sagan made this point about nuclear war, noting that understanding its long-term consequences is not a problem amenable to experimental verification. Many catastrophic risks also change rapidly as technology advances and background conditions shift, and geopolitical conditions evolve constantly, complicating predictions. The general difficulty of accurately predicting the future over long timescales presents another hurdle, since anthropogenic risks depend on complex human political, economic, and social systems, and unforeseeable black swan events may occur. Because humanity has never suffered an existential catastrophe, any such event would be necessarily unprecedented. Existential risks therefore pose unique challenges to prediction due to observation selection effects: unlike most events, the absence of complete extinction in the past is not evidence against its likelihood in the future, because every world that did experience such an extinction has, regardless of frequency, no observers left to record it. These anthropic issues can be partly avoided by examining evidence free of selection effects, such as asteroid impact craters on the Moon, or by directly evaluating the likely impact of new technology. Studying local civilizational collapses throughout history can also illuminate the dynamics of an unrecoverable global collapse: civilizations such as the Roman Empire ended with a loss of centralized governance and a major decline in infrastructure, whereas medieval Europe survived the Black Death without civilizational collapse despite losing 25 to 50 percent of its population.
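The lunar-crater approach mentioned above can be made concrete with a back-of-the-envelope calculation. The sketch below is illustrative only, and the crater counts and surface age are hypothetical placeholders rather than figures from this chapter: it converts a crater tally on a surface of known age into an average impact rate, then into a Poisson probability of at least one impact over a short horizon.

```python
import math

def impact_rate_per_myr(crater_count, surface_age_myr):
    """Average impacts per million years implied by a crater tally."""
    return crater_count / surface_age_myr

def prob_at_least_one(rate_per_myr, horizon_years):
    """Poisson probability of at least one impact within the horizon."""
    expected = rate_per_myr * horizon_years / 1_000_000
    return 1.0 - math.exp(-expected)

# Hypothetical tally: 30 large craters on a 3,000-million-year-old surface.
rate = impact_rate_per_myr(30, 3_000)     # 0.01 impacts per million years
p_century = prob_at_least_one(rate, 100)  # chance of one in the next century
print(rate, p_century)
```

Because the crater record exists whether or not observers survived any given impact, an estimate like this sidesteps the survivorship problem that afflicts inference from humanity's own history.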
Psychology Of Risk Perception
Numerous cognitive biases influence people's judgment about the importance of existential risks. Scope insensitivity affects how bad people consider the extinction of the human race to be: when people are motivated to donate money to altruistic causes, the quantity given does not increase linearly with the magnitude of the problem, and people are roughly as willing to pay to prevent the deaths of 2,000 birds as 200,000 birds. Similarly, individuals often feel more concerned about threats to specific individuals than to larger groups. Eliezer Yudkowsky theorizes that scope neglect plays a role in the public perception of existential risks: substantially larger numbers, such as 500 million deaths, trigger different modes of thinking, and qualitatively different scenarios like the extinction of the entire human species seem to activate distinct mental processes. People who would never dream of hurting a child may, on hearing of existential risk, remark that the human species might not deserve to survive. All past predictions of human extinction have proven false, which to some observers makes future warnings less credible; Nick Bostrom argues, however, that because of survivor bias the absence of human extinction in the past is only weak evidence against future extinction. Sociobiologist E. O. Wilson argued that, as evolutionary biologists contend, during all but the last few millennia of Homo's existence a premium was placed on close attention to the near future and early reproduction, so disasters occurring once every few centuries were forgotten or transmuted into myth.
Multi-Layer Defense Strategies
Defense in depth categorizes risk-mitigation measures into three layers of defense. Prevention reduces the probability of a catastrophe occurring in the first place; measures to prevent outbreaks of new highly infectious diseases are one example. Response keeps a catastrophe from scaling to the global level, as when escalation of a small-scale nuclear exchange into all-out nuclear war is prevented. Resilience improves humanity's chances of surviving a global catastrophe without extinction; measures that increase food security during a nuclear winter exemplify this layer. Human extinction becomes most likely when all three defenses are weak, that is, for risks we are unlikely to prevent, unlikely to respond to successfully, and unlikely to be resilient against. The unprecedented nature of existential risks poses special challenges in designing mitigation measures, since humanity cannot learn from a track record of previous events. Some researchers argue that both research and other initiatives relating to existential risk are underfunded; Nick Bostrom states that more research has been done on Star Trek, snowboarding, or dung beetles than on existential risks. As of 2020, the Biological Weapons Convention organization had an annual budget of US$1.4 million. Survival planning involves establishing self-sufficient remote settlements specifically created to survive global disasters; economist Robin Hanson argues that a refuge permanently housing as few as 100 people would significantly improve survival chances.
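One way to read the three layers is as sequential filters: extinction requires a catastrophe to occur, to escalate globally, and to defeat humanity's resilience, so the residual risk is roughly the product of the three failure probabilities. The sketch below assumes, purely for illustration, that the layers fail independently; the probabilities are hypothetical placeholders, not estimates from this chapter.

```python
def residual_extinction_risk(p_occur, p_escalate, p_overwhelm):
    """Chance a catastrophe occurs, scales globally, and defeats resilience."""
    return p_occur * p_escalate * p_overwhelm

# Hypothetical contrast: every layer strong vs. every layer weak.
strong_defenses = residual_extinction_risk(0.01, 0.10, 0.10)  # = 0.0001
weak_defenses = residual_extinction_risk(0.50, 0.80, 0.90)    # = 0.36

print(strong_defenses, weak_defenses)
```

The multiplicative structure is the point: because the residual risk is a product, weakening all three layers at once multiplies the danger (several thousandfold in this toy contrast), which is why risks that slip past every layer deserve the most attention.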
Funding And Survival Planning
Storing food globally has been proposed, but the monetary cost would be high, and it would likely add to the millions of deaths per year already caused by malnutrition. In 2022, a team led by David Denkenberger modeled the cost-effectiveness of resilient foods against that of artificial general intelligence (AGI) safety and found roughly 98–99% confidence that work on resilient foods has the higher marginal impact. Some survivalists stock retreats with multi-year food supplies. The Svalbard Global Seed Vault, buried inside a mountain on an island in the Arctic, holds 2.5 billion seeds from more than 100 countries as a precaution to preserve the world's crops; refrigeration units powered by locally sourced coal keep the vault at minus 18 degrees Celsius. If society continues functioning and the biosphere remains habitable, calorie needs might be met even during an extended absence of sunlight, given sufficient advance planning. Conjectured solutions include growing mushrooms on the dead plant biomass left after the catastrophe, converting cellulose to sugar, and feeding natural gas to methane-digesting bacteria. Space colonization offers an alternative way to improve the odds of surviving an extinction scenario, though solutions of this scope may require megascale engineering. Astrophysicist Stephen Hawking advocated colonizing other planets within the Solar System once technology progresses sufficiently, to improve humanity's chances of surviving planet-wide events such as global thermonuclear war.
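A confidence figure like the 98–99% above typically comes from propagating uncertainty through a model and counting how often one option beats the other across sampled scenarios. The Monte Carlo sketch below illustrates that general technique only: the lognormal parameters are hypothetical placeholders, and the code is not the Denkenberger team's actual model.

```python
import random

random.seed(0)

N = 100_000
wins = 0
for _ in range(N):
    # Hypothetical cost-effectiveness draws (benefit per dollar, arbitrary units).
    resilient_foods = random.lognormvariate(1.0, 1.0)
    agi_safety = random.lognormvariate(0.0, 1.0)
    if resilient_foods > agi_safety:
        wins += 1

confidence = wins / N  # share of scenarios where resilient foods come out ahead
print(f"resilient foods higher-impact in {confidence:.0%} of sampled scenarios")
```

Under these made-up parameters the sampled confidence lands around three-quarters; a published 98–99% figure implies distributions far more separated than this toy example.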
Global Governance Mechanisms
Insufficient global governance creates risks in the social and political domains, because governance mechanisms develop more slowly than technological and social change. Governments, the private sector, and the general public have all expressed concern about the lack of efficient mechanisms for dealing with such risks, and the interconnectedness of global systemic risks underlines the need for robust frameworks to negotiate and adjudicate between diverse, conflicting interests. In the absence, or in anticipation, of global governance, national governments can act individually to better understand and prepare for catastrophes. In 2018 the Club of Rome called for greater climate-change action and published its Climate Emergency Plan, which proposes ten action points to limit the global average temperature increase to 1.5 degrees Celsius; in 2019 it published the more comprehensive Planetary Emergency Plan. Evidence suggests that collectively engaging with the emotional experiences that emerge while contemplating humanity's vulnerability allows for adaptive responses: supportive collective engagement fosters resilience, psychological flexibility, tolerance of emotional experiences, and community engagement. Organizations such as the Bulletin of the Atomic Scientists, established in 1945, study risks associated with nuclear war and nuclear energy, and the Bulletin has maintained the Doomsday Clock since 1947. The Foresight Institute, founded by K. Eric Drexler, examines the risks and benefits of nanotechnology. After 2000, a growing number of scientists, philosophers, and tech billionaires created organizations devoted to studying global risks. Independent NGOs include the Machine Intelligence Research Institute, established in 2000 with the aim of reducing the risk of AI-caused catastrophe, whose donors include Peter Thiel and Jed McCaleb, and the Nuclear Threat Initiative, which seeks to reduce global threats from nuclear, biological, and chemical dangers and to contain damage after events.