
Sam Altman says superintelligence is near

Sam Altman, CEO of OpenAI, writes that he's almost bored at the prospect of developing mere human-level intelligence. He's already looking past that:

We are beginning to turn our aim beyond that, to superintelligence in the true sense of the word.... Superintelligent tools could massively accelerate scientific discovery and innovation well beyond what we are capable of doing on our own, and in turn massively increase abundance and prosperity.

This sounds like science fiction right now, and somewhat crazy to even talk about it. That’s alright—we’ve been there before and we’re OK with being there again. We’re pretty confident that in the next few years, everyone will see what we see, and that the need to act with great care, while still maximizing broad benefit and empowerment, is so important.

At Ars Technica, Benj Edwards blandly explains what this means:

Tech companies don't say this out loud very often, but AGI would be useful for them because it could replace many human employees with software.... The potential societal downsides of this could be considerable.

Considerable indeed—and not just at tech companies. In punchier language, Edwards means that AI at human level and above will produce massive, permanent unemployment and probably spark a huge populist rebellion, because rich people aren't yet prepared to accept what this all implies: a gargantuan transfer of wealth that isn't tied to paid labor. There's no real alternative, and eventually we'll all accept it. Until then, though, the transition is going to be a shitshow.

I'm not sure exactly how optimistic Altman is, but for now I'll stick with 2033 as the year superintelligence becomes real. That's only a hundred months away, and I'll get to see it if I can hang on to age 74. It's gonna be close.

71 thoughts on “Sam Altman says superintelligence is near”

  1. mistermeyer

    Rule 1* of journalism: Define an abbreviation or acronym the first time you use it. Falling back on "Hey, it was defined in the article I linked to!" doesn't count, because making someone go elsewhere to discover that AGI can mean "Artificial General Intelligence" is no different than MAKING THE USER LOOK UP THE TERM. That is to say, useless. Geez.

    * - I have no idea if this is rule 1, but it's pretty important and it's right up there, because the goal of journalism is that the reader actually understands what the hell you're saying.

  2. MrPug

    I predict that AI will advance to Muskian levels of hallucinating and mendacious gibberish by 2032, beating Drum's 2033 prediction by a full year. And the AI will achieve that without the ketamine and Adderall. What a boon to civilization!
