Would you rather vaporise in an asteroid strike, fry during a geomagnetic reversal or be terminated by an artificial intelligence? Which scenario is most likely?
I love an “End of the World” type movie, where society breaks down and humans revert to tribal behaviours – zombie movies, asteroid disasters, love in a period of magnetic pole reversal – I watch them all. But which of these threats has a scientific basis? Let’s first dismiss zombies out of hand, since they break the laws of thermodynamics and are fictitious. This leaves us with:
Asteroid Strike
In any given year, the odds are roughly 1 in 300,000.
You’ve seen it. A large asteroid is hurtling towards Earth, threatening an extinction-level event, or at least to reduce us to packs of cave dwellers. The collision vaporises anything in the impact zone and throws up enough dust and ash to cause a global winter and food shortages.
An asteroid would need to be larger than about 1 kilometre across to threaten us on a global scale, and NASA knows of no asteroid or comet currently on a collision course with Earth, so the probability of a major collision is very small. For perspective, no one is known to have been killed by an asteroid in the last 1,000 years.
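Taking the 1-in-300,000 annual odds at face value, a quick back-of-the-envelope calculation shows what that means over a lifetime (the 80-year lifespan and the assumption that years are independent are mine, not NASA's):

```python
# Back-of-the-envelope lifetime risk of a major asteroid impact,
# assuming independent years at the article's 1-in-300,000 annual odds.
annual_odds = 1 / 300_000   # probability of a major impact in any one year
lifetime_years = 80         # assumed human lifespan

# P(at least one impact) = 1 - P(no impact in any single year)^years
p_lifetime = 1 - (1 - annual_odds) ** lifetime_years

print(f"Lifetime probability: {p_lifetime:.6f} "
      f"(about 1 in {1 / p_lifetime:,.0f})")
```

Because the annual probability is tiny, the lifetime risk is essentially just 80/300,000 – somewhere around 1 in 3,750. Unlikely, but notably better odds than most lotteries.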
Geomagnetic Reversal (North becomes South)
Could be happening now!
Changes in the Earth’s molten outer core could trigger a reversal of the planet’s magnetic field. According to the geological record, this has occurred hundreds of times in the past, including once since humans evolved. Although the reversal itself would take hundreds of years, a weakened magnetic field during the transition would expose the planet to more cosmic radiation. What would be the effect on us humans?
The Earth’s Core. Graphic: NASA
No mass extinctions coincide with previous geomagnetic reversals, so apart from blackouts caused by overloaded electrical grids and the increased radiation, we would likely manage well. We already see increased radiation levels during solar storms and appear to cope all right (so far) – so no need for a doomsday shelter just yet.
Terminators – Maniacal Artificial Intelligence
Probability unknown….ominously unknown.
Both Elon Musk and Stephen Hawking have warned us of the threat posed by Artificial Intelligence. Elon is concerned that a runaway digital superintelligence could take aim at its less intelligent creators, and that regulation is required to ensure AI is developed safely. It seems fanciful, but South Korea already has an automated killing machine – no human is present in the decision-making process. Is it a big leap to imagine a legion of robots picking off rebels in a remote desert, with no risk of casualties on the attacking side?

I do think Elon is giving the machines too much credit. Current AIs are built by humans to perform a specific job, and many humans are involved in the design, build, training and testing of each system. Mark Zuckerberg is among the group of people who also think Elon’s view is a bit alarmist. I feel the human race is safe for the time being – as long as we avoid a walk along the border between North and South Korea.
But while each of these three scenarios seems pretty unlikely, keep in mind that we could still be taken out by a rogue black hole at any time, without notice!