
Quote of the day: We’re safe, for now

From the New York Times:

Today’s A.I. technology cannot destroy humanity.

Whew. Although how do they know?

Anyway, this is all in the context of CEO Sam Altman's firing from OpenAI, the artificial intelligence company that makes ChatGPT among other things. It's still a mystery why he was suddenly let go on Friday, but the hot gossip at the moment is that the board of directors was worried about Altman pushing AI to the point where it would be able to destroy humanity.

You have to admit, that would be the coolest reason ever for a CEO to get fired. And the AI community is just loony enough that this is faintly plausible. However, the OpenAI board includes at least two normal people (the CEO of Quora and a RAND scientist), so I have my doubts. It's one thing for the chief scientist of OpenAI (who's also on the board) to suddenly get paranoid over the fate of humanity—anything can happen, after all—but ordinary folks are generally a little more levelheaded.

On the other hand, the company's COO did say that the firing had nothing to do with "malfeasance"; it was just a "breakdown in communication between Sam and the board." Maybe. But that would have to be a helluva breakdown to set in motion the dramatic regicide of a guy who's practically a legend in the industry.

Eventually someone is going to have to talk. This is all just too weird.

24 thoughts on “Quote of the day: We’re safe, for now”

  1. kaleberg

    Is someone ever going to explain just how AI threatens humanity? People chunter on and on about this but never offer so much as a vague explanation.

    For some threats, there seem to be believable paths to doom:

    - Global warming - destroys crops, floods cities, causes heat related deaths.

    - Genetic engineering - starts epidemics, creates resistant pests, introduces new genetic disorders.

    - Fast food - encourages obesity, weakens immune systems, destroys critical ecology.

    I'm not saying they are right, but at least there are comprehensible causal pathways that lead from the technology to human destruction. What, even approximately, is AI going to do?

    1. Jasper_in_Boston

      AI probably doesn’t threaten humanity, but it’s not out of the question that “general intelligence” AI could do so in the future.

    2. ProgressOne

      Researchers have been exploring how advanced AI systems might jump off the rails since the 1950s. See the Wikipedia page "AI alignment" for the things that can go wrong. To me, this is all much scarier than global warming, genetic engineering, or even nuclear weapons. At least humans control the outcomes of those technologies.

      1. kaleberg

        That implies that humans would have given the AI control of those systems. All kinds of garbage software can kill and injure people. AI, for some reason, is the scary kind. I'd love to see an explanation of why large language models are more likely to kill me than, let's say, the ill-crafted algorithm running my automatic transmission.

        1. ProgressOne

          AI systems may be an existential threat, meaning they could kill us all or enslave us. That is why they present such unique dangers. Once a system mimics human consciousness, exceeds humans in intelligence, and can rapidly learn on its own, it’s unclear what it might do. If a powerful AI system connected to the internet starts making strategic plans that include its own self-preservation, things could get dicey. Regardless of the moral code AI developers embed in the system’s software, the system may decide to revise it. It may begin taking actions to limit the ability of humans to turn it off or alter its software. When humans try to do this, it might conclude it needs to get more aggressive to save itself. It starts taking over AI-adjacent systems to create a cyber-barrier that blocks humans from tampering with it. As it takes over those systems, it also gains control of their hardware. And so on.

    3. roboto

      None of those are major threats, let alone existential ones. (Existential threat = no more humans, not 1,000 people dying in a heat wave in 2067.)

    4. James B. Shearer

      "....What, even approximately, is AI going to do?"

      The idea would be that a small group of people gains control of an AI breakthrough and uses it to conquer the world and eliminate all possible sources of opposition. Then the small group shrinks to one person (think Stalin) who goes nuts, gives the AI crazy orders, and then dies, leaving the AI in total control and bound by those orders.

      1. aldoushickman

        I don't think you have to go that far to find examples. Powerful and complex systems produce unexpected outcomes--consider how social media algorithms (designed to increase the amount of time users were emotionally engaged with posts and thereby deliver more ad revenue, but which also had the effect of siloing media culture and enhancing the in-group legitimacy of complete raving nonsense like Qanon) very arguably helped put Trump over the top in 2016. Nobody necessarily _intended_ something that large and history-swaying to happen, but it did.

        AI would be similar, just potentially much, much more powerful.

    5. Steve C

      Imagine an artificial intelligence that gains consciousness. (That is the hard part.)
      Assume it is given a significant amount of resources, so that it can gradually evolve itself, not biologically on a time scale of years per generation, but electronically on the order of seconds. It is possible that it will quickly get to be more intelligent than us. Unless the initial programmers are clever enough to keep priorities constant over thousands of generations, the eventual consciousness may have goals that are not consistent with human goals. Let's say its goal is to access more and more computing power. It could connect to the internet and hack into computer systems that control, well, just about anything. It could then threaten humans with launching nuclear bombs or turning off power systems unless we devote resources to building more computing power for it. Once we become redundant, the resources used to keep us alive would be used to achieve its goals.
      This is one scenario. With extreme intelligence, almost infinite and instantaneous knowledge, and no morals, family, culture, or human empathy at all, there are many, many ways that a computer-based consciousness could destroy humanity.

        1. Steve C

          Individuals today can't scan hundreds of thousands of data sources and do billions of calculations per second to find answers.

      1. KenSchulz

        If we’re so stupid as to give control of nuclear weapons or the power grid to systems we don’t understand, we’d deserve our fate.

  2. D_Ohrk_E1

    With the release of GPT-4 Turbo comes the ability to create your own custom GPT. How does Quora survive when the cost to build a Quora is no longer a barrier for many? And unlike the way Quora operates, you don't need to rely on (and wait for) people to answer questions, imperfectly at that.

  3. bouncing_b

    One scenario would be if AI makes it possible for nefarious actors to engineer novel bioweapons.

    But what I worry about more is the choices we make to use AI to see us through hard problems. Militaries are a special concern: when large numbers of cheap drones can overwhelm modern armies and we may not be sure who launched them, it could be very tempting, while the generals are arguing, to give an AI decision-making responsibility over retaliation and counterattacks. That could have unknowable consequences, up to and including extinction.

    Or even the militaries of the past. What might an AI have concluded we should do during the Cuban missile crisis?

    1. kaleberg

      By modern standards, we didn't need all that much computer power to build hydrogen bombs. The barrier to coming up with something more dangerous has nothing to do with available computrons, and AI seems no more likely than any given individual to make the necessary breakthroughs.

      1. Steve C

        You do realize that we needed hundreds of the smartest people on earth, significant resources from the most powerful country on earth, and many years to achieve that. But yeah, no supercomputers.

        Supercomputers may allow individuals to do it, instead of powerful nations.
        I'm sure you see the problem now.

  4. Jasper_in_Boston

    One theory I heard that makes sense to me is that they haven’t properly secured data rights, and Altman just papered over this problem until it was no longer paper-able.

  5. Bluto_Blutarski

    The RAND scientist (Tasha McCauley) is a proponent of the "effective altruism" movement. I guess we can debate how "normal" that makes her, but she holds some wacky ideas.

    1. Steve C

      Effective altruism is the idea that we should scientifically evaluate opportunities for helping others, and pursue those most likely to have significant effect.

      Are you saying that is a bad thing?

      1. Joseph Harbin

        When in practice we've seen EA used to justify an ethical wrong in the name of a supposedly greater ethical good, we should say it's deeply flawed at best.

        1. Doctor Jay

          Every evildoer ever has a speech wherein they describe how they are *really* the good guy. They invoke all manner of principles of good when they do this. That doesn't make it true that those principles are flawed.

  6. azumbrunn

    Of course it's weird, it's AI!

    Seriously though, the most likely problem in such cases is money in some way or other. "Who gets how much?" is often the question that triggers conflict.
