
Some AI experts are really rattled by the whole extinction thing

Philip Tetlock's Forecasting Research Institute conducted an exercise last year on the topic of human extinction events. Why? Who knows. Their report was released yesterday and, long story short, a group of experts predicted a roughly 10-20% chance of catastrophe by 2100 (10% of the human race dies) and a 1-6% chance of total extinction (fewer than 5,000 humans left). That's the combined risk from five possible scenarios:

  • Artificial intelligence
  • Nuclear
  • Engineered virus
  • Natural virus
  • Meteor, sun going nova, etc.

Let me say from the start that I think this is kind of nuts. I'd personally put the overall risk of extinction from everything combined at about 0.1% or so. But that's just me. I'm an eternal optimist, I guess.

The proximate motivation for the study, of course, was the recent surge of interest in the possibility of some future AI going nuts and killing everyone. AI experts overall predicted a 13% chance of AIs causing an extinction-level event, and I found this breakdown instructive:

[Chart: AI experts' extinction forecasts, broken down by hours spent thinking about extinction risk.]

Apparently there's a group of AI experts who are absolutely obsessed with extinction and have spent more than 1,000 hours thinking about it. These people seem to have thought themselves into a frenzy and they're obviously driving up the average a lot. The median extinction forecast, which eliminates the influence of the extreme outliers, is only 3%. Still too high, I think, but not completely crazy.
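
To see how a few obsessives can drag the mean so far above the median, here's a toy illustration. The forecasts below are invented for the example, not the study's actual responses:

```python
import statistics

# Hypothetical extinction forecasts (in percent) from ten experts:
# most are low, but two outliers have thought themselves into a frenzy.
forecasts = [1, 1, 2, 2, 3, 3, 4, 5, 40, 70]

print(statistics.mean(forecasts))    # 13.1 -- dragged way up by the two outliers
print(statistics.median(forecasts))  # 3.0  -- barely notices them
```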

This is not a case where thinking harder does you any good. There's really no concrete evidence one way or the other about the chances of AI-caused extinction, just speculation, and going down an extinction rabbit hole is little more than a fast path to the booby hatch. Just keep in mind that any possible super-threat (AI or otherwise) will almost certainly be met by a countervailing super-defense. COVID-19 was a dangerous pathogen, but advancing technology allowed us to create a vaccine in record time. Nuclear weapons are being met with missile defense systems. Climate change is being met by renewables and, maybe someday, geoengineering. AIs will be met by other AIs. So chill.

17 thoughts on “Some AI experts are really rattled by the whole extinction thing”

  1. SC-Dem

    Not to disagree with the general tenor of the comments, but we've put way more than $100B into anti-ballistic missile defense systems over the last 40 years, and that's without adjusting for inflation. Utter waste of money. Worse than the B-21, which no sensible country would spend money on.

    It is interesting that Trump dismantled so much of the nuclear arms control effort of the last 40 years while he was in office. Did Putin tell him to?

  2. Chondrite23

    The sun is not going to go nova. It is not that kind of star.

    The AI thing might be possible. Not because the AI will turn into some sort of Terminator. More like we would become very dependent on it, and then it would suffer a massive crash or error. Kind of like a Tesla slamming on the brakes on the freeway because some commercial sign at the side of the road looks like a stop sign.

    The Higgs field going to its ground state is a definite thing to worry about at night when you can’t sleep.

    I would imagine a cascade of problems that would cause things to grind to a halt such that a large fraction of the population dies. Kind of like the supply chain issues we had, but on steroids.

  3. D_Ohrk_E1

    Is there a Rubicon of biodiversity, whereby whole Earth systems collapse towards a minimal (safe) operating mode?

    I think you overestimate our ability to defeat an engineered virus -- not one that targets humans, but one surreptitiously designed to target a class of life at the bottom of the food chain, which then disrupts the entire system. The most common response to the spread of a highly pathogenic, contagious virus is to cull as many infected animals as possible and hope for the best.

    Of course people who spend more time thinking about existential risk come to assess the risk as higher -- the longer you think about it, the more you realize just how many permutations of possibilities there are, and how much of it relies on social trust, incompetence, and luck.

  4. Scurra

    Yeah, that last block is largely composed of the "LessWrong" / "Rationalist" community, who have been obsessed with the 'AI extinction event' for years - well before the current mania - and have gotten more and more, well, paranoid about it as time has gone on. (I would suggest you look up "Roko's Basilisk" except that it's too ridiculous for words.)

    There are various documents from this group that have even proposed things like assassinating all the major AI researchers, etc. It is unclear how seriously one is supposed to take this, but everything else they have said and done suggests that they take this stuff very seriously.

    1. KenSchulz

      Well, I did look into the Roko foolishness. As with so much that is written about “AI”, including much by our worthy host, I regard it as ‘not even wrong’.

  5. glipsnort

    Big selection bias here. Those who are already worried about some threat are the ones who will spend a lot of time thinking about it.

    1. fabric5000

      Climate change will not cause extinction by 2100. It ain’t gonna be pretty then, with a bunch of major cities underwater, but we’ll be around.

      Or maybe we’ll just hire the Dutch and address that problem as well.

      1. MrPug

        I agree that climate change won't get us under 5,000, but it could very easily cause a large number of deaths, in that 10% range. It could also spawn new diseases that result in large numbers of deaths.

  6. Boronx

    We won't be smart enough to build a counter-AI, since the evil AI will have been designed by another supposedly good AI.

    We'll have to ask the evil AI to build its own counter, which might work.

  7. ScentOfViolets

    I'm pretty sure I know what -- or rather, who -- is driving all this AI extinction talk. People like Eliezer Yudkowsky and Robin Hanson on platforms like Less Wrong and Overcoming Bias. The former site is famous for being an incel hotspot, BTW. Idiots, fanatics, and grifters, they are.

  8. name99

    Calling fewer than 5000 humans "total extinction" seems to suggest that their agenda is hysteria rather than understanding.

    There appear to have been bottlenecks of that size, in various forms, in the past, e.g. in Africa in the Paleolithic, or among some of the migration waves to North America.

  9. pjcamp1905

    There won't be time for AI. I think climate change will off us. I think that's the only reasonable solution to the Fermi Paradox. Evolution optimizes for short-term benefit.
