Yeah, AI is going to be (very, very) big business

In the LA Times, Brian Merchant writes that OpenAI itself isn't the only thing that self-destructed this weekend:

All the while, something else went up in flames: the fiction that anything other than the profit motive is going to govern how AI gets developed and deployed. Concerns about “AI safety” are going to be steamrolled by the tech giants itching to tap into a new revenue stream every time.

....However this plays out, it has already succeeded in underlining how aggressively [Sam Altman] has been pursuing business interests. For most tech titans, this would be a “well, duh” situation....

I'm going to stop right there because, yes, this is a "well, duh" situation. After all, AI is headed toward being the most commercially successful product in human history. It will be bigger than railroads, bigger than cars, bigger than oil.

What else did anyone expect? Over the next few decades AI is poised to replace most human labor. That's about as big as an industry can get, and there's nothing much that can stop it. We haven't even stopped the proliferation of fossil fuels or nuclear weapons, and those are nice, easy, concrete things. By contrast, software is like a phantom: impossible to get your arms around and impossible to effectively regulate.

Adoption of AI is going to be existential for practically every industry currently in existence, spawning dozens of huge AI providers around the world. Demand is going to be off the charts. Growth will be astronomic.

This has been obvious for a long time—or it should have been, anyway. But if it really took the OpenAI debacle to make it clear, then maybe OpenAI's board really has done the world a favor. Getting our heads out of the sand is the first step toward enlightenment.

25 thoughts on “Yeah, AI is going to be (very, very) big business”

  1. ddoubleday

    This is an easy prediction that seems right, won't be disprovable in the short term, and is probably quite wrong.

    "AI" as it is being defined today (Large Language Models) has quite a few problems associated with it, not least of which is enormous required compute resources and power usage costs. Sam Altman is probably trying to move fast because that is the essential model for the modern entrepreneur: hype to the heavens, move quickly, and cash out before the industry reaches the Valley of Disillusionment.

    1. kahner

      I don't think LLMs are "AI as it is being defined today." They're simply the most prominent example currently in widespread use, and they demonstrate the power of even this limited aspect of AI. Kevin is clearly very confident that we're at the start of a tech revolution in which AI expands and improves rapidly, far more confident than I am, but I don't think he's under any illusion that LLMs are free of problems and limitations.

  2. D_Ohrk_E1

    What happens when robotics are advanced enough to replace humans in all things, and are combined with AGI? Do people stop "working" and start doing things that they enjoy?

    So I have this nascent theory that just before the turn of the century (or thereabouts), advances in biology, specifically in interfacing the brain with mechanical systems, will end with people implanting their brains into robotic bodies in order to travel long distances with zero ill effects and to live on otherwise inhospitable planets.

    Then, at that point in the convergence of technology, it will be nearly impossible to tell who is a hybrid, a full bot, or a full human.

      1. D_Ohrk_E1

        The cheaper, more common alternative to having your brain implanted is to have a copy of your memory uploaded into a robotic brain. You can keep your legacy but not your "soul".

        Data will be impossible to tell from Captain Picard, and might actually be the memories of someone who once lived.

  3. Wichitawstraw

    I just saw this same discussion on Threads. In what universe was OpenAI going to be a firewall against AI abuse? They don't own AI.

  4. Joseph Harbin

    Fortunes can change swiftly.

    On Friday's "bad" news, MSFT lost $47 billion in market cap.
    On today's "good" news, MSFT gained $58 billion in market cap.

    What looked on Friday like the company had placed its bet on the wrong horse now looks like it's the horse with the best post position.

    1. Amil Eoj

      I suspect that reversal is down to a truly epic bit of damage control by Satya Nadella, positioning MSFT as a safe haven not only for Altman but for any other OpenAI refugees willing to follow him--and this despite the possibility that such a mass exodus could well doom Microsoft's $10 billion investment in OpenAI itself.

      Nadella seems to have come very quickly to the judgment that, in this domain, "securing the resources" was far more important than chasing sunk costs. And if Kevin is even half right about the future of AI, that was almost certainly the right call.

      1. ScentOfViolets

        As I understand it -- and I am very, very far away from anyone who knows this crowd, let alone qualified to judge a topic I know very little about -- it's actually Ilya Sutskever who made most of the 'breakthrough' advances. If true, then a) OpenAI doesn't need Sam Altman, and b) I tend to believe the theories that an unseemly rush to market was the main ingredient of the meatloaf that led to S. A.'s dismissal.

        In any event, LLMs as they are currently formulated are something of a niche development in the general field of 'Artificial Intelligence'. Heh. Artificial statistics is more like it.

  5. jdubs

    More money will be made (and lost) rebranding your service as 'AI' than will ever be made on anything resembling actual AI.

    There has always been a big business in selling magic beans... but in a seedy underbelly-of-the-business-world kind of way. The new magic beans, however, will be on an entirely different scale.

    1. dotkaye

      "More money will be made (and lost) rebranding your service as 'AI' than will ever be made on anything resembling actual AI."

      Exactly. I work in IT, and the stampede to rebrand everything with an AI sticker is going to have ugly consequences.

      As commenter nehinks on Ars Technica noted,
      "Our current AI approach is not going to automagically suddenly evolve to become greater than the sum of its parts. There's no mechanism there for sentience or sapience. But that's not to say they can't do a LOT of damage to society with the current "fake" AI."

      All of the commenting on AI tends to miss that we don't yet have AI, nor yet a reasonable path to get to AI. The text generation machines we have now give a fair simulacrum which can convince those who want to believe, but their hallucinations will never be cured.

  6. ScentOfViolets

    There ain't going to be no AI replacing humans until artificial hands become a thing. I would think this would be obvious, but maybe chairwarmers gotta chair.

    1. mandolin

      These last two comments address a very important lack in the AI conversation. Our brain "does" nothing in the world. So it would seem to me about AI. If we're talking about robots, then it would be a lot clearer to unknowing people like me what we have to fear. What can AI be "in the world" if it's not robots? And, if AI is robots then say robots. The term AI, by itself, couldn't be more vague to me.

    2. jeffreycmcmahon

      Somebody could have made the same prediction in, say, 1955 about robots doing all this by 1980, and look at how that turned out. Neither history nor technology works this way.

  7. Pingback: A Kevin Drum sentence | Zingy Skyway Lunch

Comments are closed.