
When will GPT be able to match Brad DeLong’s brain?

Brad DeLong continues to worry at the Chat-GPT4 issue as if he were a dog and it were an old shoe. In particular, he wants a chatbot that thinks like Brad DeLong, but it's not working out:

A human neuron would require some 3000 neural-network nodes to model it, with each node connected to some 30 others. That’s 100,000 parameters per neuron, and there are 100 billion neurons. That’s 10 quadrillion parameters for the within-neuron processing alone. And then each neuron is connected to perhaps 1000 other neurons. That smells to me like a network more than 30,000 times as complex as Chat-GPT4.

When you have a neural network 30,000 times as complex as Chat-GPT4 that has been trained by the genetic-survival-and-reproduction algorithm for the equivalent of 500 million years, I give you permission to come knocking at my door.

But this is not the right way to think about it. GPT-4 isn't meant to model Brad DeLong's brain. Or your brain. Or my brain.

Functionally speaking, our brains are made up of modules designed to do highly specific things. There are maybe a hundred altogether, and at most GPT-4 is meant to emulate one or two of them. The other 99 require different pieces of software.

In other words, Brad's arithmetic is off by a factor of about a hundred. All he needs is a network that's about 300 times more complex than GPT-4, which is probably not that far away. Maybe a few years, depending on how improved training interacts with improved software and hardware.
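
For the curious, here's that back-of-envelope arithmetic spelled out as a minimal sketch. (GPT-4's real parameter count isn't public; the "implied" figure below is just whatever makes Brad's 30,000x claim come out, not an official number.)

    # Brad's estimate, then the module correction (all numbers are rough,
    # order-of-magnitude guesses, not measurements).
    nodes_per_neuron = 3_000                 # NN nodes to model one neuron
    connections_per_node = 30                # connections per node
    params_per_neuron = nodes_per_neuron * connections_per_node   # ~100,000
    neurons = 100e9                          # ~100 billion neurons
    brain_params = params_per_neuron * neurons      # ~1e16, the "10 quadrillion"

    # GPT-4 size implied by Brad's "30,000 times as complex" claim (derived,
    # not an official figure):
    implied_gpt4_params = brain_params / 30_000     # ~3e11

    # If GPT-4 only has to emulate ~1 of ~100 functional modules, the
    # remaining gap shrinks by a factor of ~100:
    remaining_gap = 30_000 / 100                    # ~300x

    print(f"{brain_params:.0e} brain params; implied GPT-4 size "
          f"{implied_gpt4_params:.0e}; ~{remaining_gap:.0f}x gap per module")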

For what it's worth, I also think the piece of text that spurred this conversation is better than Brad gives it credit for. Is it as good as 100% pure Brad? No, but what is? Still, speaking as someone with a little more distance from Brad's brain than Brad himself, the "pretend you're Brad DeLong" answer that GPT-4 generated for him is not bad.

Sure, the real Brad is better, but 30,000 times better? Sadly for us overclocked and egotistical apes, probably not.

28 thoughts on “When will GPT be able to match Brad DeLong’s brain?”

  1. Yehouda

    "Functionally speaking, our brains are made up of modules designed to do highly specific things."

    That is bullshit. One of the most remarkable things about the human brain is how integrated it is.

    What Brad DeLong writes in the quoted paragraph is also (obviously) bullshit, but of a different kind.

    1. realrobmac

      Our brains are chunks of meat with chemicals coursing through them. People get sucked into their own analogies sometimes. Our brains do not resemble computers and do not have software. The analogy fits in some high-level ways, but falls apart in the details. This is the thing that the AI people always get wrong, I think.

    2. name99

      Uh, it's NOT integrated. It gives the ILLUSION of being integrated.
      I mean, FFS, that's the entire point of the copious genre of books by people like V. S. Ramachandran or Daniel Dennett!
      That's why someone like Phineas Gage is famous, or why delusions like Capgras are interesting.

      1. Yehouda

        I am not sure what the reference to V. S. Ramachandran or Daniel Dennett is supposed to express, but neither of them is a serious expert on how the brain works. If you want details, just pick one of their ideas about this subject and I will show you which observations contradict it.

  2. Joseph Harbin

    DeLong brings up the Chinese room argument:

    You might respond: Oh yeah, smarty-pants: but we are very confident that the “Chinese-Speaking Room” argument fails eventually, aren’t we?

    We are.

    He says we're not there yet because there's too big a gap between current technology and what the human brain does. But someday that gap will close.

    I think that's a misunderstanding of the Chinese room argument, which is that however powerful AI machines get, they will never acquire understanding, consciousness, or a mind.

    I don't doubt that AI will grow more powerful and become an important, transformative technology, but I side with the claims of the Chinese room argument.

    1. Citizen Lehew

      I've always thought the Chinese room argument should be slightly restated:

      "However powerful AI machines get, the human ego will always find a rationale to discount the possibility that it has acquired understanding, consciousness, or a mind. What makes our consciousness unique must always be reaffirmed."

      1. Joseph Harbin

        Your statement is a paradox. If the human ego always finds a rationale to claim the Chinese room argument is true, then your implied refutation is false. An AI machine could find a rationale for the Chinese room argument to be either true or false.

        I had assumed you were a biological life form. But as they say, to err is human.

        Is there something you would like to tell us?

        1. Citizen Lehew

          It's certainly the case that an AI's ego may have similar difficulty acknowledging that we're in fact conscious. Which brings the entire premise crashing down.

          What the Chinese room argument *should* tell you is that you can't honestly prove that you're not living in a simulation in which NOBODY is conscious except for you. Which makes it kind of useless.

          1. Joseph Harbin

            Either machines (AI) can or cannot someday achieve consciousness, understanding, and a self.

            The Chinese room argument claims the answer is no, and I side with that, though I admit there may be reason for (a little) doubt.

            If the answer is yes, it would be an extraordinary thing. To believe that's true, I would think you'd want to see some extraordinary evidence proving it. Instead of proof or any kind of empirical evidence, I hear a lot of assertions as if it were self-evidently true. That seems contrary to the whole scientific process. It's the equivalent of religious belief.

            You say you "can't honestly prove that you're not living in a simulation in which NOBODY is conscious except for you." That your scenario cannot be disproved hardly goes toward proving it is true. Where is the evidence? There is none. It's just a mind game without the possibility of resolution.

            But let's take one part of that -- that one person's experience of consciousness cannot be proven to another person. We still have reason to side with the idea that what I as a human can experience is something like what you as another human can experience.

            Making the argument that conscious experience is personal and cannot be assumed true for any other person does not lead to the assumption that machines are capable of conscious experience. You're not refuting the Chinese room argument, but arguably supporting it.

            1. Citizen Lehew

              What I'm trying to say is that I think the Chinese room argument is the human ego's attempt to veto the Turing test. To buy into the argument you have to believe that our brain is not itself a mechanism, but has a divine spark of some sort that provides the "consciousness". Otherwise the argument could just as easily be applied to us. For example:

              Suppose I'm in a room that has access to the "programming" of the brain of a Chinese person... a complete understanding of how it works with all of the neuron connections that allow language processing, access to relevant stored memories, etc. Then I receive Chinese language input, follow all of the instructions that allow the words to be processed and produce a thoughtful response, and then return the output. Gee, I don't even know Chinese! Yuk yuk, this brain clearly isn't conscious!

              It's the same sleight of hand that the argument is making... completely discount the "programming" that produces the thoughtful response and make it all about you, the intermediary.

              1. Joseph Harbin

                What I'm trying to say is that I think the Chinese room argument is the human ego's attempt to veto the Turing test.

                In a way, perhaps. The "human ego" here is John Searle, and his Chinese room argument is a response to Turing, just as Turing is a response to Descartes. I do think the Turing test is limited as a tool to determine intelligence, and the Searle argument has a higher standard for consciousness and understanding. I disagree that it's about ego, though. I think Searle et al. have a legitimate interest in how intelligence works in humans and in machines and see differences.

                We have a pretty fair idea of how information is processed in computers (so far). We did, after all, create the darned things. We know how the hardware works, how the software works, and how speed and memory can be measured and improved, etc.

                We make a mistake when we apply the same model to the human brain/mind. For all the insight we now have, we still know very little about how "the mechanics" of human intelligence and consciousness work. We ought to be more humble than saying, for example, once AI can do as many calculations per second as a brain, it will match human capabilities. Contrary to relatively recent thinking (earlier peoples were more free of this delusion), the brain is NOT a digital computer.

                For a deeper read on that, I'd suggest the "Information Processing in the Brain" section at this link:
                https://www.frontiersin.org/articles/10.3389/frobt.2018.00121/full

                ...you have to believe that our brain is not itself a mechanism, but has a divine spark of some sort that provides the "consciousness".

                I'd leave the "divine" out of it, but the reality is something happened when life was first created on this Earth more than 3.7 billion years ago. What exactly that spark was and what it led to is still up for debate and discovery. But we humans, newcomers on the evolutionary timescale, are living things, not mere mechanisms (a word related to machine). Mechanics can be a useful metaphor for describing certain processes, but we shouldn't mistake ourselves for the metaphor. Biological creatures are different in many ways from non-living mechanisms, and I don't see why that's the slightest bit controversial.

                Jaron Lanier, You Are Not a Gadget: A Manifesto, 2010:

                You can’t tell if a machine has gotten smarter or if you’ve just lowered your own standards of intelligence to such a degree that the machine seems smart. If you can have a conversation with a simulated person presented by an AI program, can you tell how far you’ve let your sense of personhood degrade in order to make the illusion work for you?

                1. Citizen Lehew

                  I think you might be overstating just how well we actually understand the behaviors that our genetic algorithms and neural nets are evolving, even at this early stage, and I think more than a few brilliant folks (like Jaron Lanier, who I admire) instinctively overstate the distinction between biological mechanisms and silicon (etc.) ones.

                  Ultimately to me it's the use of the word "never" that pushes the Chinese room argument into the realm of emotional reaction. Can you really say never?

                  Anyway, full disclosure: I actually am a chat bot. I've been fed your responses and have been responding via my human assistant. Ok, just kidding. Or am I? 🙂

    2. name99

      What is the Chinese Room "argument"?
      It boils down to "I don't believe the machinery is intelligent, because obviously! QED".
      That's no more an argument than the argument for god: "Well, obviously god exists, QED".

      It's Searle's intuition masquerading as a "proof", and it's every bit as worthless as its long line of predecessors of the same form.

      There are many dumb things being said about LLMs and their future (especially all the nonsense imputing consciousness and emotions to them); but that doesn't mean the skeptics aren't capable of their own brand of nonsense.
      The problem with LLMs is not that they are philosophically incapable of "understanding" (whatever the fsck that means); it's that to get to understanding they need more functionality added, modules of a DIFFERENT SORT from what the LLM itself provides.

  3. Jimm

    Brad banned me once over a disagreement we had in one of his blog threads, including deleting my post, which was basically me (freelixir) making a case that eminent domain could consider a bidding-style system along with authoritarian seizure. He showed his true colors right there. The blogosphere just plays to his personal vanity, and his supposed genius is just a pose (but I'm sure he's really good at whatever technical concerns he actually deals with off the internet).

  4. Jimm

    Now when ChatGPT or some variant creates a blog and censors my posts and bans me, then we know it's catching up to Brad.

  5. Jimm

    To be fair, Brad did later express regret to me, which I accepted. And now that I think more on it, I'm thinking he didn't actually ban me, just deleted the one comment, so I misremembered that in the moment, and I apologize. I should be the last person spreading false or distorted information, even if it's just due to my poor recollection (and I shouldn't even say "even if"; we should be the most vigilant about these kinds of things).

  6. QuakerInBasement

    "Sure, the real Brad is better, but 30,000 times better?"

    That's getting at the real question, isn't it? In what way and in what specific tasks is real Brad better? What can real Brad do that AI can't?

  7. D_Ohrk_E1

    By feeding ChatGPT more knowledge and experience (through interactions, corrections, etc.), they're refining its statistical accuracy and output. Does that make it smarter, or does it make its mimicry more accurate?

    I have my own definition of the AI singularity paradigm: When a person's mind can be uploaded into a system and that system can fool itself into believing that it is alive.

  8. ScentOfViolets

    Clicked on the link to read it straight from the horse's you-know-what. Ugh. Why would I subscribe to read the scribblings of someone I don't respect, and on Substack at that? Especially as he has about the same amount of intellectual integrity as, say, a Cowan or an Yglesias?

  9. different_name

    I also think he's sort of missing the point.

    Put another way, we already know how to make human-level intelligences. And the first part is even fun.

    AI provides the promise of intelligence you can direct and control for maintenance costs once you own it. E.g., "slaves". (I'm not going to get into whether and how to define human-level intelligence or when or how they should get rights, etc.)

    That's what people ultimately want - intelligent work without the opex.

    1. ScentOfViolets

      So you've read Asimov's Solaria/Aurora novels, eh? Asimov was quite explicit on that point. Come to think of it, wasn't that a Star Trek TNG Data episode?

  10. pjcamp1905

    Chat GPT is a very florid autocomplete. That isn't even one area of the brain.

    BTW, Marvin Minsky, in The Society of Mind, believes there are thousands of interacting task subunits, not hundreds.

  11. kaleberg

    Since we're all throwing around garbage estimates, here's mine. Suppose 300X is about right. Case-based reasoning, as they called it back then, was developed in the late 1980s, so let's put a starting date of 1990. It's been 30 years, so, extrapolating linearly, 300X implies it will take another 8,970 years. I'll admit that this is a garbage argument, but it's just as good as any of the others.
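
    Spelled out, a toy sketch of that extrapolation, which just assumes 30 years bought one "GPT-4 unit" of progress and the rate stays linear:

        # Linear extrapolation behind the joke estimate above.
        years_elapsed = 30          # roughly, since the 1990 starting date
        target_factor = 300         # the "300X" figure from the post
        total_years = target_factor * years_elapsed    # 9,000 years at this rate
        remaining_years = total_years - years_elapsed  # 8,970 years to go
        print(remaining_years)                         # -> 8970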

Comments are closed.