The sun is slowly expanding and brightening, and over the next few billion years it will eventually desiccate Earth, leaving it hot, brown and uninhabitable. About 7.6 billion years from now, the sun will reach its maximum size as a red giant: its surface will extend beyond Earth’s orbit today by 20 percent and will shine 3,000 times brighter. In its final stage, the sun will collapse into a white dwarf.
David Appell in Scientific American, September 1, 2008.
Image credit: NASA/Goddard


RELEVANCE TO TOK

Existential risks threaten human extinction or the permanent collapse of civilization.

Detailed exploration of the threats themselves is not the stuff of TOK. However, a backdrop of omnipresent existential threat certainly influences young adults’ knowing and thinking. The topic therefore merits apportioning some time for meta-reflection.

During the Artificial Intelligence class activities it became clear that the future development of autonomous, superintelligent AI is not science fiction. AGI (Artificial General Intelligence)—AI surpassing human capabilities across multiple domains—is not yet on the horizon, and it may not be the most pressing issue: there is immediate existential risk in the narrow AI that we already have. Mitigating this risk will hinge on the proactive alignment of machine and human objectives. “Slaughterbots” (swarming lethal autonomous weapons) exemplify how things could go very wrong. Their potential for misuse in the hands of malicious actors is clear; but a nightmarish outcome could also be accidental—the unforeseen result of embedding erroneous algorithms. Stuart Russell (2019: X) declares in Human Compatible: Artificial Intelligence and the Problem of Control:

Machines are beneficial to the extent that their actions can be expected to achieve our objectives.

On the same page, in soothsayer mode, he reminds us that—

For thousands of years we have known the perils of getting exactly what you wish for. In every story where someone is granted three wishes, the third wish is always to undo the first two wishes.


CONNECTING WITH KNOWLEDGE AND POLITICS

Discussion of existential risk overlaps strongly with the Knowledge and politics optional theme, which opens with the Ten most pressing world problems class activity.

CLASS ACTIVITY I
PRIORITIZING EXISTENTIAL RISK

Begin the class by asking students—in constructivist spirit—what existential risk is. Unpack the term briefly.

If individual students bring it up specifically, you could mention that Existentialism (think: Sartre, de Beauvoir and Camus) arose partly from the horrors of WWII. After resolving this or any other emergent “knowledge and language” clarification, move on swiftly to establish the following definition:

Existential risks threaten human extinction or the permanent collapse of civilization.

Next, briefly encapsulate what we have learned about the existential dangers posed by narrow AI—reminding students about “slaughterbots” and drawing on both Stuart Russell quotes above. Inform the class that they are about to widen the scope of the discussion by exploring multiple existential threats in three groups.

1. Prepare in advance three sets of eight paper strips containing the following existential risks:

Climate change catastrophe
Asteroid impact
Nuclear destruction
Global pandemic
AI singularity
Alien encounter
Our sun morphs into a red giant
What else?

2. Designate the three groups. Establish a volunteer facilitator for each group and hand them a set of strips. The facilitator hands a strip to each group member at random. Allow 8 timed minutes for each student to tell the group very briefly what is printed on their strip and to add some personal editorial. For example, a student with the asteroid strip could offer a one-sentence mention of the dinosaur extinction 66 million years ago!

One strip says “What else?” This is an opportunity for a student to add an existential risk not on the original list.

3. Next, the group should collectively arrange the strips in order of the seriousness of the existential threat, with the most serious first. The only rule for this part of the activity is that consensus must be reached. Students will likely debate amongst themselves the more precise criteria for their list; the vagueness of the term “seriousness” is a deliberate feature of the activity.

4. Finally, the facilitator finalizes the order of the strips and fixes them to the desk with a strip of clear tape. The whole class is then invited to amble around the three exhibition desks to take in the other groups’ results before settling back for a brief class discussion.

GENERATIVE QUESTIONS

What just happened?

Did anyone use the “What else?” category?
If so, what did you propose?

How difficult was it to establish consensus on the criteria for creating your list?


CLASS ACTIVITY II
EXISTENTIAL RISK GALLERY

Settle the class, then show the images and videos in the existential risk gallery. Include the NASA image above of the sun engulfing Earth several billion years from now. Solicit spontaneous comments.

West Antarctica’s glaciers and floating ice shelves are becoming increasingly unstable.
Image: Cristopher Michael

We are within the vicinity of thousands of Near-Earth Objects (NEOs), some of which – Potentially Hazardous Asteroids (PHAs) – carry the risk of impacting Earth, causing major damage to infrastructure and loss of life.
Image: SciTechDaily

The launch of an unarmed Minuteman III intercontinental ballistic missile during a developmental test early on Feb. 5, 2020, at Vandenberg Air Force Base.
Image: Senior Airman Clayton Wear / U.S. Air Force via AP file

An encounter with a sapient heptapod extraterrestrial species in the 2016 movie Arrival.

Next, invite students to find a conversation partner and sit together. Allow about 10 minutes to explore the knowledge questions below: after 6-8 minutes of conversation in pairs, call the class to order. Invite whole-class sharing and discussion. Facilitate carefully; strong sentiments are likely to be expressed.

GENERATIVE QUESTIONS

To what extent do you find yourself thinking about existential risks in your everyday life?

How does existential risk play into your thinking about science and technology?

Overall, do you feel optimism or pessimism about our collective ability to navigate existential risk?
Can we overcome the “tragedy of the commons”?

As young adults you have not perpetrated existential risks, but your generation will be implicated in attempting to reduce them. Discuss.