Death by AI

The most likely cause of death today is AI.

It’s a reasonable statement. The most common cause of death right now is ischemic heart disease, which kills around 13% of people worldwide. However, if we assume that there is a more than 13% chance that a superintelligent AI turns against humans and wipes out all of humanity within the next 20 years, then AI becomes the most likely cause of death for a person alive today.
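
To spell out the arithmetic, here is a minimal sketch in Python. The 13% figure comes from the paragraph above; the AI probability is a purely hypothetical placeholder, not an estimate.

```python
# Heart disease causes about 13% of all deaths, so a person alive today
# has roughly a 13% chance of eventually dying from it.
p_heart_disease = 0.13

# Hypothetical placeholder: the chance that a superintelligent AI turns
# against humans and wipes out everyone within the next 20 years. Because
# such an event would kill every person, this is also each individual's
# probability of dying from it.
p_ai_doom = 0.20  # assumed value for illustration only

# The key step: compare an individual's chance of dying from each cause.
most_likely = "AI" if p_ai_doom > p_heart_disease else "heart disease"
print(f"Most likely cause of death under these assumptions: {most_likely}")
```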

It’s a reasonable statement, and there’s a good chance it’s true. It’s natural to think that it’s silly to worry about AI killing us. Why would it want to? That’s what I thought until Superintelligence by Nick Bostrom came out in 2014. He shows that there is no easy and foolproof way of ensuring that AI won’t have a detrimental effect on humanity. It’s very hard to program its values in such a way that they are aligned with what humans would see as a good outcome, and it may be impossible.

Opinions vary widely on what the actual probability of AI destroying humanity is. Anything from far less than 1% to near-certainty is plausible. Given that uncertainty, and the gravity of the problem, it’s surprising how little resistance there is to AI research. There are no protests like the ones warning of global warming that we have grown accustomed to. While global warming is a serious problem, there is no plausible scenario in which it leads to human extinction anytime soon. Yet for AI, no one argues that human extinction is implausible; the only question is how likely that outcome is.

In If Anyone Builds It, Everyone Dies, Eliezer Yudkowsky and Nate Soares argue that we need an international control treaty for advanced AI research right now to avoid extinction. The treaty would be similar to the nuclear non-proliferation treaty. I cannot find fault with their argument. Here is a good review of the book, and here is an interview with Soares that covers the most important points.

Disclaimer: I don’t see a contradiction between using AI and being worried about AI killing everyone, since the dangers come from future AI, not current AI. I’m an AI superuser, and over the last two years I’ve spent hundreds of hours building and optimizing LLM-based systems. Compared to most people, I have a good working understanding of how AI works, and I’ve written a few short blog posts on it. However, AI is just a small part of what I do, and my understanding is mediocre compared to those who build the models or spend their entire working lives thinking about them. In absolute terms it’s even worse, because even the best people in AI have only a limited understanding of how the models they build actually work.

2 responses to “Death by AI”

  1. Lack of Desperation – Nehaveigur

    […] I see things differently is that I haven’t yet fully realized how messed up everything is: The AI apocalypse, societal breakdown, wars, the end of democracy, global warming. Things look gloomier than they […]

  2. The Hundred-Light-Year Diary – Nehaveigur

    […] about forecasting and AI, I sometimes remember this story by Greg Egan. It was published as part of his collection […]
