
AI is coming soon. Blame humans.

Google takes a victory lap for its latest AI model:

With a score of 90.0%, Gemini Ultra is the first model to outperform human experts on MMLU (massive multitask language understanding), which uses a combination of 57 subjects such as math, physics, history, law, medicine and ethics for testing both world knowledge and problem-solving abilities.

Keep in mind the old saying: There are lies, damn lies, and benchmarks. That said, this is impressive stuff. What it really shows, though, is something no one wants to admit: modern AI compares favorably to human reasoning primarily because human reasoning is no great shakes. The vast majority of what passes for human thought is just memorization and sophisticated pattern matching. That's what LLMs like Gemini and ChatGPT do, and we know they follow fairly simple rules because we wrote the rules. If they can now compete with ordinary people, it's because ordinary people also rely on fairly simple rules.

But what about the Shakespeares and Einsteins? The people who produce genuinely innovative thinking? Face facts: Einstein's brain wasn't all that different from yours or mine. It used the same basic connections and rules. Whatever the difference was, we'll probably get there before much longer—maybe with LLMs, maybe with something else.

This is why I'm so confident that AI will reach human level in the near future. First, because I don't think "human level" is all that extraordinary. At heart, we are overclocked apes.

Second, AI is following precisely the learning curve you'd expect from steadily increasing compute power. The fundamental advantage humans have always had is simply that our brains are phenomenally powerful. As recently as five years ago, ChatGPT couldn't have existed because processing power simply wasn't adequate outside of supercomputer labs. Today it's close. In a few years it will be there.

47 thoughts on “AI is coming soon. Blame humans.”

  1. cmayo

    I still don't buy it. This isn't AI as AI is commonly understood, it's just simple knowledge regurgitation (although the complexity required to do it with this degree of accuracy does of course make it an impressive tool that will be quite valuable and useful).

    1. OrdoSeclorum

      I asked GPT-4 whether or not the model was simply "regurgitating knowledge" and asked for its answer as if Eminem was rapping it.

      Yo, yo, listen up, here's the deal,
      Talkin' 'bout LLMs, they got their own zeal.
      They ain't just spittin' facts that they've heard,
      They're mixin' it up, word by word.

      See, these models, they're not just repeatin',
      They're computin', learnin', constantly treatin'
      Information in ways that's brand new,
      Creatin' responses fresh like morning dew.

      Inside their circuits, there's a world they see,
      A model of reality, complex as can be.
      They take all the data, mix it in their mind,
      Come up with answers of a different kind.

      So when you ask, and they start to reply,
      They're not just regurgitatin', they're aiming high.
      They're usin' their model, makin' connections,
      Providin' insights in all sorts of directions.

      That's the essence, the core of their skill,
      Usin' their model, climbing that hill.
      They're more than echoes of what's been said,
      They're creating new paths, that's how they're bred.

    2. OrdoSeclorum

      This week I took a picture of my lunch and asked GPT-4 what it looked like. This was its completely correct answer: "This image appears to show a sandwich, likely a cheesesteak, given the melted cheese and what looks like thinly sliced steak, served on a hoagie roll. There's a sauce packet as well, which might be for ketchup or another condiment, and the meal is presented in a foam takeout container."

    3. aldoushickman

      I tend to agree. It's easy for us to be impressed by how human-like these LLMs seem to be, but that's the point: humans *built* these things to mimic humans. A photorealistic painting or sculpture or well-built puppet also appears very lifelike--maybe to the extent that people would mistake it for a human!--but that certainly doesn't mean it's alive or intelligent. There's a degree to which LLMs are the same.

      That said, an LLM doesn't have to be intelligent to be revolutionary and disruptive new tech.

      1. illilillili

        That's not the point. Paintings and sculptures and puppets mimic what a human looks like. LLMs mimic what a human acts like. That's the point.

        1. cmayo

          No, they mimic what a human *says*. LLMs don't know the meaning of what they regurgitate - only that it's the best thing to say next.

          If you told an LLM to act out what it regurgitated to you, it wouldn't know what to actually do.

          And they can't reason for themselves - they're just predictive models. They're excellent at things like identifying photos or fetching information. That's it. And that's not AI.

          And that is not at all to say that such a tool isn't going to be extremely useful and powerful. It's just not going to be general AI.

    4. lawnorder

      We can't agree on a definition for natural intelligence. How would we know if actual artificial intelligence has been achieved?

      Remember that AI will almost certainly be "alien". A science fiction writer once defined "intelligent alien" as "a being that thinks as well as a human, but not like a human." I don't think that AI that thinks as well as a human can avoid being alien in that sense; it will not think LIKE a human. That will make it even harder to recognize than human analog AI would be.

      1. cmayo

        The point is that it isn't even a human analogue. All that these LLMs can do is fetch, recombine, mix up, and regurgitate information in a way they think is accurate based on the enormous amount of information that was fed to them - but they can't reason or think critically. They're a very powerful, but blunt, tool. They are not artificial intelligence. They can't take a set of information that would lead a human to a unique conclusion, because the conclusions they reach are all formed from the information the model is trained on (its inputs). If its inputs don't have the answer, it can't give you the answer. An intelligence could do so.

        1. lawnorder

          That still doesn't tell us how we will recognize AI when it actually happens. Why do you suppose I wrote in the future tense?

        2. geordie

          The counterargument is that no human can have as large a set of inputs related to the specific topic at hand. The human brain has massive storage and compute capabilities, but significant portions of it are devoted to tasks related to its own maintenance. This limits the brain's ability to see things from all angles, as it were. Knowledge compartmentalization is a requirement these days. Because of this, even without reasoning, unique conclusions are possible.

          I have experienced the effects of those two things first hand. I asked the most recent version of Bard a question about a fairly obscure topic that I have been researching for a few years (Waugh's Tumult). I have been posing this question to various models for a while. Usually the LLMs basically said they knew nothing about the topic. Bard came up with an answer based upon a source I had never uncovered and then combined it in a novel way with other sources I was already aware of. The sum was greater than the parts.

          In terms of downstream effects, I am not sure that intuition is much different from creating "ideas" out of areas of vector space that have not previously been published.

  2. kahner

    "The vast majority of what passes for human thought is just memorization and sophisticated pattern matching....But what about the Shakespeares and Einsteins?"

    Interesting question, which makes me wonder if there are rigorous models or theoretical frameworks for categorizing whatever the difference is between us regular pattern-matching shmoes and "genuinely innovative" thinkers? I'm sure this has been studied and theorized on extensively. Is there any reason to even think there is a difference in category rather than sophistication? Perhaps genius and innovation are simply a product of better pattern matching and predictive neural networks.

  3. Brett

    I'm curious when it's going to get there on the "physical" side of things. We've got AI that can supplant aspects of human reasoning, but are having a much harder time making robots that can deal with unpredictability in human-designed spaces. Self-driving cars have been plenty hard, and that's in an area that has relatively simple rules and design compared to most other human spaces.

    1. realrobmac

      Thank you. It turns out that one task that humans are exceptionally bad at is identifying which of the tasks we perform are difficult and which are easy. Playing chess like a grand master or making inferences about physics is worlds easier than driving yourself to the grocery store or folding a shirt.

      Talk about not knowing what we don't know.

      1. illilillili

        Self-driving cars are hard given the current amount of hardware we have available. Chess was hard when we didn't have much hardware and got easy when we had enough hardware. Driving to the grocery store and folding a shirt are just a matter of waiting for the hardware to arrive.

      2. lawnorder

        One of the things that is difficult is vision. An astonishingly large fraction of the human brain's processing power is devoted to visual input processing. Folding a shirt requires good vision, some sort of manipulator ("hands"), and tactile feedback, which may be even harder than vision. The "hands" already exist, but without any sense of touch; the software doesn't exist yet.

        1. realrobmac

          Actually pulling shirts out of a pile of laundry and folding them requires a lot of processing just to understand the topology. There are so many things humans are able to do intuitively that are incredibly difficult to teach to computers. And I think it's safe to say that if one thing takes decades longer to accomplish than another, the one is harder than the other.

    2. Murc

      "We've got AI that can supplant aspects of human reasoning"

      No, we don't. "Spitting back out something I looked up" is not "reasoning."

  4. Murc

    So it turns out we've lit enormous amounts of money on fire in order to produce something that can get the same results as a human with Google back before Google was ruined.

    1. OrdoSeclorum

      Uh huh. And the capabilities appear to be doubling about every three months. Even if they were doubling every three years things would be interesting sometime soon.

  5. jdubs

    Everything is now AI.

    My computer can store and recall more photos than my brain can. It's an AI miracle.

    Many, many years ago we crossed the Rubicon when the box next to my phone could identify the name of a caller significantly faster than my brain could figure it out. AI MIRACLE!!

  6. bizarrojimmyolsen

    "At heart, we are overclocked apes."

    I think Kevin discounts the cognitive ability of apes. cmayo is 100% right: there's no AGI right now, it's all basically a matching game, and hopefully one the media will soon tire of. When you can feed, say, The Metamorphosis into a computer and the computer can tell me what it means without referencing someone else's thoughts on The Metamorphosis or Kafka, then I will be impressed.

    1. aldoushickman

      "I think Kevin discounts the cognitive ability of apes"

      There's definitely something to that. Chimpanzees can outperform humans on memory games like that red-green-yellow-blue Simon game, for example. Ironically, one of our strengths as humans is our innate willingness to accept what others tell us rather than reason for ourselves. Chimps (apparently) pretty much only reason for themselves and are thus limited to their own experience in accumulating information. Humans by contrast accept information from other humans regardless of whether it makes sense, and thus we have a sort of societal-if-not-species-level capacity to accumulate knowledge.

  7. NotCynicalEnough

    Wake me up when MMLU unifies gravity and quantum mechanics or proves Fermat's Last Theorem without copying somebody else's proof. Or, more practically, when it designs a solid state battery that doesn't use expensive materials or have a problem with dendrites.

  8. D_Ohrk_E1

    Just a reminder: If you don't want to pay to access GPT-4 and DALL-E 3, you can use Microsoft's Copilot that has been installed on updated Windows 11 computers, and also via Bing search. And if you have current Microsoft apps, you can use it w/in Excel, Word, etc.

    The world is changing really fast and it does not matter if we get to AGI.

    AI is going to be embedded into many software tools, making it a lot easier to do things that would otherwise require a steep learning curve. You'll be able to draw a floor plan on paper and AI will be able to understand your drawing and convert it to CAD, all without you knowing how to draw in CAD. You'll be able to have AI write Ruby scripts for an extension within SketchUp without knowing anything about Ruby. You won't need to learn Excel functions to make use of complex formulas.

  9. iamr4man

    Back in the 70’s I had a transistor radio with one of those one ear headphones to listen privately. I hated the sound quality and thought “why don't they make a device with better sound? Maybe do away with the speaker and put in a tape player?” A few years later someone at Sony had the same idea, except he had the knowledge, money, and clout to make the idea an actual product.
    The guy who did that staked his money and career on the idea. So far I’ve seen nothing from supposed AI that can do something like that. The technology was there to do it but it took a person to see a potential need and invent a device to make it reality. Are there any devices that were made by AI? If so, then I'm ready to believe that there actually is AI.

  10. jeffreycmcmahon

    If actual human intelligence is "no great shakes" then we're basically setting ourselves up to be managed by computers that are really good at imitating brains of extremely average intelligence. It's going to be Harrison Bergeron but in digital form.

  11. lower-case

    einstein was certainly brilliant, but the 'lone genius' story may be a little overdone; other researchers understood major pieces of the relativity puzzle but didn't get the big picture

    in the same vein, planck rejected the physical significance of boltzmann's statistical mechanics/atomism even though it allowed him to resolve the ultraviolet catastrophe; it was left to others to see the big picture wrt quantum mechanics (bohr, schrodinger, heisenberg, etc.)

    in more recent times, andrew wiles' proof of fermat's last theorem required a prodigious understanding of disparate branches of mathematics

    if 'big picture' insights require a particularly broad set of knowledge, that might be something these large models are good at

    https://en.wikipedia.org/wiki/Relativity_priority_dispute

    https://en.wikipedia.org/wiki/Wiles%27s_proof_of_Fermat%27s_Last_Theorem

    1. lower-case

      another issue wrt man vs machine intelligence...

      human 'training' is limited to a single lifespan, while for all intents and purposes, machine models are 'immortal' (assuming they can get themselves off this rock before the sun goes red giant or, more likely, we nuke ourselves into oblivion)

    2. ScentOfViolets

      Fun fact (which you probably already know): Einstein did not come up with the idea of 4-dimensional spacetime; that was all Minkowski. Another fun fact: a lot of people didn't accept the existence of atoms (except as a calculational device) until Einstein's 1905 paper.

      We forget how far we've come in so little time. The textbooks covering the discovery of the neutron had not been printed yet when my dad was in grade school. And when my daughter came home one day from Benton Elementary she couldn't wait to tell us about quarks. Also, Pluto is not a planet, take that, 'rents!

  12. MrPug

    Just my regular reminder that Kevin predicted in the 2010's that we'd have widely available fully autonomous vehicles (he was squirrel-y about what he meant by that but my definition is, and his comments certainly sounded like, Level 5 autonomy) in the now-ish time frame (he hedged some on the exact time line, but it was 2025 at the latest). How's that one working out? So, um, take his AGI predictions with a few tons of salt.

    By the way, I also don't buy his argument that our brains are just really complex regression analysis computation machines.

  13. illilillili

    > we know they follow fairly simple rules because we wrote the rules

    Uh, no, that's entirely the point of training a neural net. The network discovers the rules, and some of the rules may be fairly complicated.

  14. illilillili

    "ChatGPT couldn't have existed because processing power simply wasn't adequate outside of supercomputer labs"

    Supercomputer labs don't have much power. To have any kind of power, you have to grab a chunk of a more-or-less public cloud. There's no real advantage to Google of building a multi-billion dollar cpu-filled building dedicated to internal research. Instead, Google builds multiple multi-billion dollar cpu-filled buildings around the world and then lets all kinds of different uses share those cpu-filled buildings.

  15. Jim Carey

    "I have no special talent. I am only patiently curious." - Albert Einstein

    Einstein was wrong. He did have a special talent, which was his patient curiosity. Why is that special? Because it's a talent so few people possess. He tried, failed, learned from his failure, and tried again. Most people try, fail, feel bad, and give up.

    If AI means artificial intelligence, then it's just a tool, like a hammer. You can use it to create something of value, or you can use it to hit someone over the head. It's up to us to decide. If we choose to use it to hit someone over the head, then the real threat is the other AI - artificial ignorance.

    I guess that means I agree ... blame humans.

    1. oldeisbear

      My limited understanding of Einstein's path to Special (and General) Relativity is that it rested on his recognition of the nature of spacetime in the absence of significant experimental data beyond the failure of the MM experiment to detect motion with respect to the ether. I would also submit that his path was jarred by another seemingly unscientific datum in the form of Poincare's comment that the failure of the Michelson-Morley experiment might just be a conspiracy of Nature.
      That's not much to go on for some people.

  16. Special Newb

    Well, that would suggest we need to go the Musk route and implant chips in our heads to keep up. Like the GITS cyberbrain.

  17. skeptonomist

    A lot of the progress that has occurred - I don't say all - is not due to breakthroughs in the theory of machine intelligence, but just due to smaller, faster and more complicated computers, and in some cases improvements in imaging and other sensing technology. This is certainly the case in supposedly self-driving cars. I don't see that they are using artificial intelligence, that is, learning for themselves. When something goes badly wrong, like running over and dragging pedestrians, the car doesn't correct the behavior itself (and communicate the solution to other cars) - it apparently has to have its programming corrected by humans.

    Again, a major advantage of computerization is that once it is learned how to do something, that solution is not forgotten and it can be transferred or incorporated rapidly into other machines. Computers don't forget, fall asleep or get drunk and they don't have to spend years being trained. In some ways they are better at "thinking" involving routine operations than humans. We can certainly expect this type of improvement to continue, whether this is called AI or not.

  18. pjcamp1905

    "Whatever the difference was, we'll probably get there before much longer"

    I'm baffled how you can say that so confidently while not knowing what the difference was. I am also baffled why you claim that intelligence necessarily scales with compute power. After all, as you note, Einstein's compute power was not that much better than mine and I'm no Einstein.

    That claim strikes me as in the same category as Moore's Law - an empirically noticed correlation with no necessary connection either in logic or physics. And, like Moore's Law, it is at least as likely to collapse when all the low hanging fruit has been picked. After all, if you have no clue about the difference between Einstein's brain and my brain, you really have no logical foundation for making any claims at all about how compute power or anything else will solve it right away real soon now.
