
Here’s an update on AI that’s worth reading

Tyler Cowen links today to a (long) post about AI written by Alex Irpan, an AI researcher at Google. I recommend it for possibly vain reasons: As I read it, I was genuinely startled by how close it was to my own thinking. I swear, I almost felt like this guy could be my twin brother or something. My thinking processes and cognitive attitudes mimic his in an almost eerie way, and this made me trust his conclusions even after he left the realms I'm familiar with and entered areas way above my pay grade.

Anyway, Irpan is now more optimistic about AI than he was a few years ago. I especially liked this bit (which he wrote in 2020):

I suspect that many human-like learning behaviors could just be emergent properties of larger models. I also suspect that many things humans view as “intelligent” or “intentional” are neither. We just want to think we’re intelligent and intentional. We’re not, and the bar ML models need to cross is not as high as we think.

I believe human-level AI is approaching quickly because it relies on improvements in both software algorithms and hardware compute capacity. These are both advancing at exponential rates, and the combination of the two is advancing at a multi-exponential rate. Seven years ago I projected that this would produce full human-level AI by 2045, and that's more or less where Irpan is now.¹ But I've grown more optimistic, and today I'd put the timeline at around 2035, or maybe 2040. The amount of money and energy going into AI, along with spectacular increases in compute power, is staggering—far more than any of us were predicting even a few years ago. That just has to affect progress.

¹Though he also offers a 10% chance of human-level AI by 2028, which strikes even me as highly unlikely.
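
To make the compounding concrete, here's a rough back-of-the-envelope sketch. The doubling times are made up for illustration (my own hypothetical numbers, not his), but they show how quickly two improving curves multiply together:

```python
# Rough illustration with made-up doubling times (not a forecast):
# if algorithmic efficiency and hardware price-performance each double
# every two years, their product doubles every year.
def effective_gain(years, algo_doubling=2.0, hw_doubling=2.0):
    algo = 2 ** (years / algo_doubling)   # software/algorithmic improvement
    hw = 2 ** (years / hw_doubling)       # hardware improvement
    return algo * hw                      # the two compound multiplicatively

for y in (5, 10, 15):
    print(f"{y} years: {effective_gain(y):,.0f}x")
# 5 years: 32x, 10 years: 1,024x, 15 years: 32,768x
```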

41 thoughts on “Here’s an update on AI that’s worth reading”

  1. lower-case

    one thing i don't see mentioned very often is that the number of processor/IO operations required for running the model is a lot lower than that required for training the model

    'i know kung fu'
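
    rough numbers on that, using the standard back-of-envelope approximations (~6 × params × training-tokens flops to train, ~2 × params flops per generated token; the sizes below are made up for illustration, not any particular model):

    ```python
    # back-of-envelope: why running a trained model is far cheaper than training it
    # (standard ~6*N*D training-flops and ~2*N per-token inference-flops approximations;
    #  the parameter/token counts here are illustrative, not any particular model)
    params = 70e9            # 70B parameters
    train_tokens = 1.4e12    # 1.4T training tokens

    train_flops = 6 * params * train_tokens   # ~5.9e23 flops for the whole training run
    flops_per_token = 2 * params              # ~1.4e11 flops per generated token

    print(f"{train_flops / flops_per_token:.1e} generated tokens ~= one training run")
    # ~4.2e12 -- trillions of tokens served for the compute cost of a single training run
    ```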

  2. drickard1967

    I dunno which of Kevin's pie-in-the-sky fantasies is more delusional... that AI will reach human levels in ~a decade, or that plutocrats will pay the taxes needed to provide government jobs/UBI to all the little people displaced by AI.

    1. Murc

      Right?

      Like, okay, there's some impressive work being done with image programs like Stable Diffusion. But I look at something like ChatGPT and think "you've set an enormous amount of money on fire, and used insane amounts of electricity, to produce something that can't answer basic math questions, which is what computers are for, and which makes you feel like you're talking to a deranged person the longer you have it respond to you."

    2. Bosh

      I think Kevin is just looking at AI in completely the wrong way. Making an AI that's a perfect replica of the human brain is not where we're going to end up any more than we ended up with technology that's a perfect replica of human muscles.

      Older tech can do certain things vastly better than human muscles can do and is utterly hopeless at other things that human muscles can do.

      Similarly "AI" will be able to do certain things vastly better than human brains and will be utterly hopless at other things that human brains can do.

      Expecting anything else is silly.

      1. realrobmac

        100%. The human brain is an organ that interacts with chemicals in the body in all kinds of ways. The idea that computer software will perfectly emulate such an organ or even teach us anything reasonable about it is beyond facile.

    3. lawnorder

      The matter of what the plutocrats will do or not do is very much determined by what the mass of the population is prepared to make them do. I don't personally remember the New Deal, but I can remember the Great Society. It may not be obvious but I think the social pendulum is now swinging to the left after recently having hit the right end of its swing and reversed direction. I expect that the plutocrats are going to find themselves having less and less influence on social policy over the next few decades.

  3. Murc

    The constant assertion of "we keep dumping a lot of resources into this, and so it has to produce results and sharpish!" is wearying.

    You'd think after all his predictions about self-driving cars fell flat, Kevin would demonstrate a little skepticism here.

    This especially is jaw-dropping:

    I suspect that many human-like learning behaviors could just be emergent properties of larger models.

    First, "human-like" is doing a lot of work there. Second, this is, essentially, a faith-based approach to technological development. It's saying "I believe that if we just make this model big enough and juice it enough, it will become a person, because those properties will just... emerge."

    Followed by a two-step of "I'm going to define the bar of 'person' low enough that at some point I can claim the standard has been met."

    1. lawnorder

      History says that if work is put into developing a particular technology, that technology is often, but not always, developed. We may not have economically viable flying cars yet, or fusion power, but look at the fairly new technologies we do have. Am I certain that human level artificial intelligence will be developed in the next few decades? No. Do I think it likely that human level artificial intelligence will be developed in the next few decades? Yes.

  4. cmayo

    Even if we supposed that it were true that humans aren't intelligent or intentional (that just reads like techbro hype-jargon to me... bro), all of this breathless waiting for an AI breakthrough ignores the fact that NONE of what is being reported on or worked on is more than a large-scale mimicry machine.

    I have yet to see the tiniest mote or iota of evidence that AI can do things that are unpredictable, like a human can. It can only do things that have been done before because it's not learning how to learn and grow; it's just learning how to predict what something should be based on mountains of old information.

    Unless or until AI can do something truly new (and I'm not talking about simply predicting the existence of something based on large scale pattern recognition, a la some biotech research lately), it will always be second best to humans.

    1. lower-case

      it took einstein 25 years to train his model before he became truly creative, so still early days for these systems

      anyway, there's this:

      In our paper published today in Nature, we introduce AlphaDev, an artificial intelligence (AI) system that uses reinforcement learning to discover enhanced computer science algorithms – surpassing those honed by scientists and engineers over decades.

      https://deepmind.google/discover/blog/alphadev-discovers-faster-sorting-algorithms/

      1. cmayo

        Yes, I'm aware of those and handwaved at them at the end of my comment. It's "AI"-enhanced discovery of things we already have a concept for. It's essentially just brute force applied to concepts we're already addressing. That's definitely very useful! But it's not AI in the sense that Kevin keeps referring to and being wary of.

        To use a crude analogy that I heard a podcast host mention in reference to AI recently, in talking about the inability of AI to replicate artistic expression: the giant theft-mimicry machines that are generative AI right now would not create something like the Family Guy scene (https://www.youtube.com/watch?v=5y36N9c9QVw) where the loss of a toy is used as tribute to the late Paul Walker (a piece of authentic, genuine artistic expression that connects with many humans) *unless a human had created it first* and it could mimic it.

    2. Doctor Jay

      Unpredictable is child's play. It's incredibly easy to be unpredictable. The hard part is being unpredictable and also useful and/or interesting.

      And when we look at that, there's also a lot of stuff produced by humans that is unpredictable and not all that useful. Businesses fail, paintings go in the trash bin, songs don't get listened to. There are tons of things like this.

      We forget about them very soon, because they are forgettable. Some people have been able to train themselves to make things that are memorable, that stand out, while still being familiar enough to connect with.

      Creativity is not the place to look. The place to look is "able to understand humans and what kind of reactions they might have."

      And no, we've seen nothing like this from the AI crowd.

  5. tomtom502

    Here is the definition Irpan uses:

    "An AI system that matches or exceeds humans at almost all (95%+) economically valuable work."

    95% seems like a lot. Here goes:

    Construction.

    Child care.

    Restaurant cooking.

    Auto mechanics.

    Home care.

    Medical assistant (not diagnosis, turning patients and emptying bedpans).

    Janitorial.

    All off the top of my head. 95% seems unlikely, ever.

    1. lawnorder

      The jobs you identify require, in addition to intelligence, the ability to manipulate physical objects. While this can be done and has been done by machines for a long time at the crude level of, for instance, a power shovel, the manual dexterity and carefully modulated force required to, for instance, change a baby's diaper without hurting the baby is difficult to build into a machine, and is an entirely different technological problem than data manipulation.

      I understand the mechanical dexterity problem is also being worked on although lately it hasn't been getting the publicity AI receives.

      1. tomtom502

        Exactly. Manipulating physical objects is not what AI does. It manipulates zeroes and ones.

        Vast numbers of jobs involve manipulating physical objects.

        Big layoffs of symbol manipulators kick in and we follow up by adopting more robots? Why would that happen?

        AI might be huge, but "matches or exceeds humans at almost all (95%+) economically valuable work." seems like a definition thought up by someone who taps at a computer all day and doesn't notice when he gets new tires there is a guy who changes the tires. And stocks the shelves. And unloads the truck. And carries out the trash. And puts up the signs. And changes the lightbulbs. And sprinkles salt after it snows. And convinces the customer to get the road hazard warranty.

        1. lawnorder

          AI, by itself, cannot replace humans in jobs that involve handling physical objects; as you say, computers manipulate ones and zeroes. (I'm not sure that remains true in quantum computing, but "manipulate information" definitely remains true.) However, the mechanical dexterity side appears to be improving pretty much in tandem with computing capacity, which means that there's no reason to think the whole mechanical dexterity issue won't be solved by the time real AI happens. At that point, AI can be used to direct dexterous machines, leading to classical robots. THOSE should be able to replace humans in all the listed jobs.

  6. Leo1008

    This is my own favorite article on AI: “Cory Doctorow: What Kind of Bubble is AI?”

    It’s so similar to my own thinking, it’s almost like this guy could be a twin brother! As he puts it:

    “Of course AI is a bubble. It has all the hallmarks of a classic tech bubble. Pick up a rental car at SFO and drive in either direction on the 101 – north to San Francisco, south to Palo Alto – and every single billboard is advertising some kind of AI company. Every business plan has the word “AI” in it, even if the business itself has no AI in it. Even as two major, terrifying wars rage around the world, every newspaper has an above-the-fold AI headline and half the stories on Google News as I write this are about AI. I’ve had to make a rule for my events: The first person to mention AI owes everyone else a drink.

    It’s a bubble.”

      1. Leo1008

        @lower-case:

        Indeed, your point is at the heart of the article:

        “AI is a bubble, and it’s full of fraud, but that doesn’t automatically mean there’ll be nothing of value left behind when the bubble bursts. WorldCom was a gigantic fraud and it kicked off a fiber-optic bubble, but when WorldCom cratered, it left behind a lot of fiber that’s either in use today or waiting to be lit up. On balance, the world would have been better off without the WorldCom fraud, but at least something could be salvaged from the wreckage.”

        So what will we be left with from the AI boom in a few years?

        And, besides the billions of excessive (online or print) column inches devoted to the topic, what else will be its legacy?

        And will the pros outweigh the con artists?

    1. cmayo

      "I’ve had to make rule for my events: The first person to mention AI owes everyone else a drink.

      It’s a bubble.”

      This is my kind of person.

    1. MikeTheMathGuy

      I noticed that, too, and did some searching to confirm that he's real. (He is.) The initials and the tone of Kevin's breathless endorsement made me wonder if this was some elaborate inside joke.

  7. samgamgee

    The sad part of the pursuit of human-level intelligence AI and the billions spent is that every second we create four new people with this potential and can't be bothered to support them similarly.

  8. jeffreycmcmahon

    Sounds like we're just redefining "human-level AI" down to something closer to "average person-level intelligence," which is pretty low indeed.

    1. lawnorder

      I think a machine with an "IQ" of fifty or over qualifies as human level intelligence. It doesn't have to be intellectually equal to a smart person, especially since the machine's advantage in speed can, in many contexts, make it appear "smarter."

  9. jjramsey

    "I believe human-level AI is approaching quickly because it relies on improvements in both software algorithms and hardware compute capacity. These are both advancing at exponential rates"

    Hardware compute capacity has not been exponentially increasing for a while. Clock speeds for CPUs have largely plateaued, and hardware manufacturers have been making up for it by creating CPUs with multiple cores that run in parallel and repurposing hardware designed to run a lot of graphics-related instructions in parallel to run more general-computing instructions in parallel. While core counts are increasing, they aren't increasing at an *exponential* rate. Also, whereas before, we could fit more and more transistors in the same area of a silicon die, now we're starting to get to the point where miniaturization has diminishing returns and computing resources have to get physically bigger and consume more power in order to do more.
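
    To put the parallelism point in concrete terms, Amdahl's law says that if only a fraction p of a workload can run in parallel, piling on cores runs into a hard ceiling of 1/(1-p). The fractions below are illustrative, not measurements of any real workload:

    ```python
    # Amdahl's law: speedup from n cores when a fraction p of the work parallelizes.
    def amdahl_speedup(p, n):
        return 1.0 / ((1.0 - p) + p / n)

    for p in (0.50, 0.90, 0.99):
        print(f"p={p}: 8 cores -> {amdahl_speedup(p, 8):.1f}x, "
              f"64 cores -> {amdahl_speedup(p, 64):.1f}x, "
              f"ceiling -> {1 / (1 - p):.0f}x")
    # even a 99%-parallel workload can never exceed a 100x speedup,
    # no matter how many cores are added
    ```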

    Also, it's meaningless to say that software algorithms are "exponentially increasing."

    That doesn't mean that AI hasn't made impressive leaps, or that we shouldn't expect more of them, but progress in AI is not as certain or as straightforward as the hype would suggest. I'd say that in many ways, current AI is literally at a sophomore stage -- where it both does very impressive and very stupid things (sometimes at the same time) -- and it's unclear how to break out of that stage.

  10. pjcamp1905

    Sure.

    As long as by "human like AI" you mean "being able to complete text with the most likely next words according to what other people have written in the past."

    Large language models are just that -- language models. More compute power makes them larger, but still language models.
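
    If "complete text with the most likely next words" sounds abstract, here's a toy version of the idea: a bigram lookup table that greedily picks the most frequent next word. Real LLMs are neural networks over subword tokens, but the objective -- predict the next token given what came before -- is the same; the tiny corpus here is made up for illustration:

    ```python
    # A toy "most likely next word" completer built from a bigram frequency table.
    from collections import Counter, defaultdict

    corpus = "the cat sat on the mat the cat ate the fish".split()  # made-up corpus

    next_counts = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        next_counts[prev][nxt] += 1

    def complete(word, steps=4):
        out = [word]
        for _ in range(steps):
            if word not in next_counts:
                break
            word = next_counts[word].most_common(1)[0][0]  # greedy: most frequent next word
            out.append(word)
        return " ".join(out)

    print(complete("the"))  # -> "the cat sat on the"
    ```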

    I'll take the 2035 bet.

  11. Ogemaniac

    I wrote a pair of conference abstracts this week and for amusement threw them into a couple of the leading AIs and asked for a re-write to improve grammar and clarity. The results in all four cases were less than useless. They correctly understood that my sentences were long, so their solution was to break them apart. The result was shorter sentences that were technically grammatical, lacked any sort of flow or connectedness, and were scientific nonsense. I cannot for the life of me figure out how anyone can read their output without banging their head against a wall.

  12. ddoubleday

    I was a grad student in CS in the 80s. There were lots of AI researchers confidently predicting AI would take over by the end of the century.

    AI researchers are always confident the breakthrough is on the horizon. Haven't seen a reason to suspect this is any different yet. At least back in the 80s, we didn't have to worry so much about these researchers talking their book as part of a corporate hype train; most of them were academics.
