
Nvidia will sell you a human brain for $200,000

This is an Nvidia NVL72 Blackwell compute node:

Give or take a bit, it has the compute power of a human brain and is likely to be priced at about $200,000.

Seven years ago I thought it would take until 2040 to have something like this for a few thousand dollars. It's obvious now that it will be more like 2030 or so. Maybe sooner. Welcome to the future.

For now, I'll stick to 2033 as my best guess for full AGI at some kind of reasonable price (< $1 million). I'm not sure I even want to guess at when we'll have AGI at an unreasonable price. Five years? That sounds crackers, even to me, but....

60 thoughts on “Nvidia will sell you a human brain for $200,000”

    1. Anandakos

      Donald Trump's, obviously. The programs that run on these "compute nodes" make up shit just as shamelessly and abundantly as does The Prevaricator in Chief.

      Expect ChatDJT any day now, courtesy of Trump Media Inc.

      1. iamr4man

        It occurs to me that if there were no Donald Trump and some tech company came up with a robot that acts like he does, it would be considered a massive failure, and we’d all be snarking about how silly Kevin was for thinking AI was going to replace humans in the workplace.

  1. lcannell

    It would help boost the credibility of your AGI prediction if you shared how ChatGPT (or something else) is helping in your life.

  2. rick_jones

    Compute node. Suggesting these things “travel” in groups. Water-cooled suggests plenty of power draw. So how much power does it draw, and how much heat does it dissipate?

    1. emh1969

      BTW, snark aside, I have no idea why Kevin is comparing this to a human brain. Just did a quick google search and it appears he's the only one doing that. Not even the company itself is making that comparison.

      1. stevebikes

        A petaflop is 10^15 flops. Estimates for the human brain start at 10^14 flops, though many go with higher exponents, up to 10^23 or even 10^28.

        This is 8 * 10^16, so it's within the range, but at the lower end.
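
        A quick sanity check of that comparison, using the commenter's own order-of-magnitude figures:

        ```python
        node_flops = 8e16                   # the Blackwell node, per above
        brain_low, brain_high = 1e14, 1e28  # the cited range of brain estimates

        print(node_flops / brain_low)   # 800.0: ~800x the lowest estimate
        print(brain_high / node_flops)  # 1.25e+11: far below the highest
        ```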

        1. kahner

          thanks, i had the same question. but i also wonder if measuring brain compute power in floating point operations even makes sense. the underlying computing "architecture" of the brain is so different, and so much more complex in interconnectivity and functionality, that a simple flops-vs-flops comparison between an organic brain and a silicon chip seems inherently flawed.

          1. emh1969

            Yeah, that's kind of my point. But Kevin so desperately wants AGI to be a reality that he's willing to twist anything to fit his perspective.

          2. KenSchulz

            Yes, it sounds like someone’s wild-ass guess about the computing power needed for AGI.
            Whereof one cannot speak, thereof one must be silent.

        2. Bobber

          FLOPS is a really strange choice for comparison. I can’t do 1 FLOPS, never mind 10^14, unless those floating point numbers happen to also be single digit integers.

        3. lawnorder

          When the exponent on your measurement has an error margin of plus or minus seven, you haven't meaningfully measured.

  3. Jimbo

    That's cool. However, there's no AI yet. It's all machine learning, not really thinking. Human-level AI will be a machine that can see a flower, think it's beautiful, and think so because it believes it, not because an algorithm tells it to think that.

    1. FrankM

      Human intelligence is also a product of learning. Doesn't everything you said about AGI also apply to humans?

        1. FrankM

          Well, they kind of do, actually. They just don't use a mathematical algorithm, but rather a sort of analog version. But it's not fundamentally different.

          1. Yehouda

            No, brains don't do anything like backpropagation. It just doesn't work with the way neurons in the human brain work.
            What they actually do is not obvious, but it is not backpropagation.

            1. FrankM

              Sure they do. Think more broadly. Backpropagation is just an algorithm for optimization. People do that all the time - they just don't do it mathematically. But they can conceive of a "loss function" of some sort, and "compute" the gradients with respect to the inputs (experiences). You start with the most important inputs and work through them, backpropagating (in a sense) to "learn" the optimum.
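
              For readers who haven't met the term, backpropagation in the mathematical sense is just gradient descent via the chain rule. A minimal sketch in Python (the numbers are invented for illustration; this is not a claim about how neurons work):

              ```python
              # Fit y = w * x to a single target by gradient descent.
              # Backpropagation repeats this chain-rule step across layers.
              x, y_target = 2.0, 6.0  # one "experience"
              w, lr = 0.0, 0.1        # initial weight, learning rate

              for _ in range(20):
                  y = w * x                      # forward pass
                  loss = (y - y_target) ** 2     # the "loss function"
                  grad = 2 * (y - y_target) * x  # backward pass: d(loss)/dw
                  w -= lr * grad                 # step toward the optimum
              print(round(w, 4))  # ~3.0, since 3.0 * 2.0 = 6.0
              ```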

        2. lawnorder

          A science fiction writer once said that an intelligent alien would think as well as a human but not like a human. I have no idea how AI will work when it is achieved, but I'm pretty confident that it will be alien in that sense; we will have machines that think as well as, or better than, a human, but not like a human. I would also bet that they won't learn like a human.

          1. KenSchulz

            Agree, but ‘think better than a human’ is conditional. For tasks requiring recall of items from large sets, or combining multiple items of information, yes*. For problems that depend on acquiring data from meatspace, which usually entails meatspace time scales, AGI would be limited in the same way that humans are.
            *Once again, I plug one of my favorite papers, Robyn Dawes’ ‘The Robust Beauty of Improper Linear Models’. You don’t need AGI to beat humans at combining information.

            1. lawnorder

              "Better" and "faster" are by no means the same thing. There are humans who think faster than me; there are humans who think better than me. The ones who think faster are not the same humans as those who think better.

    2. jeffreycmcmahon

      It's not even "learning"; it's just very crude imitation. All the work is going into making the simulation look more authentic instead of doing the hard work of making the simulation actually, authentically "intelligent".

      I think Mr. Drum is just going to keep moving the goalposts on this. It's useful to remember, as it states in his bio, that his background isn't in software, it's in _marketing_ software.

  4. SDSwmr

    What does AGI stand for in this context? A brief search only turned up "Adjusted Gross Income" and that's obviously not right. I can't follow the conclusion without knowing that term.

  5. Boronx

    Compute power of the human brain is probably grossly underestimated right now.

    It's doubtful we have a handle on all of the important compute processes of brain cells yet.

  6. geordie

    Wow! A supercomputer cluster from 2016 that was in the top 20 worldwide just went up for auction this week. It has about 6-7% of the performance of this and fills 28 computer racks.

    1. Ken Rhodes

      The first "supercomputer" I worked on, back in the early 1970s, had a trivial fraction of the speed, memory, and storage of my now [almost] obsolete iPhone 12.

      Time marches on ... at a frightening rate!

    2. rick_jones

      The Luddite purist in me doesn’t agree that non-shared-memory clusters are “supercomputers” …

    3. lawnorder

      I remember reading a story on the Cray 1 back when it was new. It was the fastest supercomputer you could buy at the time at 120 megaflops. The Blackwell compute node is almost a billion times as fast.
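
      Checking that ratio against the 8 × 10^16 flops figure cited upthread (both numbers are commenters' estimates):

      ```python
      cray_1 = 120e6    # Cray-1, ~120 megaflops, per the comment above
      blackwell = 8e16  # the node's flops, per the estimate upthread
      print(blackwell / cray_1)  # ~6.7e8: roughly two-thirds of a billion
      ```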

  7. NotCynicalEnough

    More than likely a fair number of these will be put to use doing things of great benefit to mankind like mining bitcoin or replacing bad customer service with worse, but cheaper, customer service.

    1. DudePlayingDudeDisguisedAsAnotherDude

      The crappy customer service that I've seen doesn't need a 200K compute engine to replace it. The most rudimentary PC will do.

  8. golack

    $200K....
    Yes, but if you want those chips actually attached to the board, well, that will be extra.
    😉

    1. name99

      With LLMs we have a reasonable model of System 1 (intuitive, can't explain how you did it) thinking. With traditional computing we have a reasonable model of System 2 (step by step, justified by logic at each stage) thinking.

      Both useful, but two elements are still clearly missing.
      The first is "common sense" control of the LLM/System 1 part. No one expected that simply constructing a massive word-prediction engine would capture so much of System 1 thinking; even so, these designs do not capture elements of common sense so obvious that no one even bothers to write them down in the internet's many corpora of text. POSSIBLY we will pick up much or all of this via the extension of LLM techniques to additional modalities (images, audio, video, sensors), etc. Still an open question...

      The second is generic and robust techniques for having an LLM know when it should switch from System 1 behavior to System 2 behavior (ie hand the math problem over to Wolfram Alpha; hand the research question over to Google lookup). I'm not sure generic (ie scalable) solutions to this exist as opposed to adding many hardwired special cases one after the other.
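
      To make "many hardwired special cases" concrete, here is a hypothetical sketch of that kind of dispatcher; the patterns and handler names are invented for illustration, not taken from any real system:

      ```python
      import re

      # Route a query to a specialized System-2 tool when a hardwired
      # pattern matches; otherwise fall back to System-1-style generation.
      def route(query: str) -> str:
          if re.search(r"\d+\s*[-+*/^]\s*\d+", query):
              return "math_engine"    # e.g. hand off to Wolfram Alpha
          if query.lower().startswith(("who", "what", "when", "where")):
              return "search_lookup"  # e.g. hand off to a search engine
          return "llm"                # default: keep it in the LLM

      print(route("What is 37 * 41?"))        # math_engine
      print(route("Who wrote Middlemarch?"))  # search_lookup
      print(route("Write me a limerick"))     # llm
      ```

      The scaling problem is visible immediately: every new tool means hand-writing another brittle pattern, which is exactly the worry about non-generic solutions raised above.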

      These sound (and are) negative; on the positive side, I don't think we yet fully know how much we can (and can't) do with current techniques.
      For example, an LLM (and apparently the human brain *in System 1 mode*) only runs forwards; there are no "backward loops". If you read Roger Penrose's two books, Shadows of the Mind and The Emperor's New Mind, that's the essence of his point: you need backward loops for various complex behaviors, but backward loops are also what give you Gödel and Turing incompleteness.
      Now, we are already faking (at a clumsy and sub-optimal level) backward loops in LLMs by changing the prompt to something like "Please work through the problem step by step"; but we may be able to do a lot better by providing backward branches deeper within the neural network.

      Bottom line is: we simply do not know enough to make strong predictions.
      By 2040 we may have remarkable AGI; or we may simply have a much better version of Google Search. NO-ONE knows enough to be sure which of these is the end-point of 15 more years of the current trajectory.

        1. name99

          That's a remarkable reply, given that everything I said is basically a paraphrase of what Andrej Karpathy said at MS Build 2023...
          https://www.youtube.com/watch?v=bZQun8Y4L2A
          Or that I probably know more than any other person outside Apple about how the ANE works.

          But sure, feel free to imagine that I'm the ignoramus here. I don't expect tech-based comments on jabberwocking to be any more reality-grounded than politics-based comments.

      1. DudePlayingDudeDisguisedAsAnotherDude

        Huh? "With traditional computing..."
        There's only one type of computing. Nothing new has emerged. And won't in the foreseeable future.

  9. dilbert dogbert

    Do Nvidia electronic brains dream of electronic sheep????
    How does one build into those brains all of the stupid human emotions that lead to human behavior and intelligence??? The World Wonders.
    AGI will be here when one has a mental breakdown.

  10. DudePlayingDudeDisguisedAsAnotherDude

    Bear with me here, all of you non-Comp-Sci folks... here's a theoretical language L:

    X-->V input a number into a computer cell
    V-->Y output a number from a computer cell
    V-->V+1 increment a number in a cell (there's an unlimited number of cells)
    V-->V-1 decrement a number in a cell
    IF V EQUALS 0 compare a cell to zero
    GOTO LABEL jump to another point in a computer program/sequence

    ...the above is all -- and I mean *ALL* -- that computers are capable of. Nothing more. Nothing less. Can we map human (or animal) intelligence onto the above construct?
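
    The language above is small enough to implement directly. Below is a hypothetical interpreter for it in Python; the instruction encoding and the skip-next-on-nonzero convention for the zero test are invented for illustration:

    ```python
    # Interpreter for the toy language above. Cells are an unbounded
    # dict of counters; a program is a list of instruction tuples.
    def run(program, inputs):
        cells, out, inp, pc = {}, [], iter(inputs), 0
        while pc < len(program):
            op, *args = program[pc]
            if op == "INPUT":
                cells[args[0]] = next(inp)
            elif op == "OUTPUT":
                out.append(cells.get(args[0], 0))
            elif op == "INC":
                cells[args[0]] = cells.get(args[0], 0) + 1
            elif op == "DEC":
                cells[args[0]] = cells.get(args[0], 0) - 1
            elif op == "IFZERO":  # run next instruction only if cell is 0
                if cells.get(args[0], 0) != 0:
                    pc += 1
            elif op == "GOTO":
                pc = args[0]
                continue
            pc += 1
        return out

    # Addition, the hard way: drain cell 1 into cell 0 one unit at a time.
    add = [("INPUT", 0), ("INPUT", 1),
           ("IFZERO", 1), ("GOTO", 7),  # exit the loop when cell 1 is 0
           ("DEC", 1), ("INC", 0), ("GOTO", 2),
           ("OUTPUT", 0)]
    print(run(add, [3, 4]))  # [7]
    ```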
    When a parrot speaks a human language, does anyone think that the parrot *understands*?

    1. ScentOfViolets

      For that matter, all you need is the Y combinator. That's one, count them, one operation. Hard to get more basic than that.
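
      For the curious, here is that idea sketched in Python. Strict evaluation forces the Z-combinator form (the Y combinator proper would recurse forever in Python), and factorial is just a stand-in example:

      ```python
      # Z combinator: the Y combinator adapted for a strictly evaluated
      # language. It manufactures recursion with no named self-reference.
      Z = lambda f: (lambda x: f(lambda v: x(x)(v)))(
                    lambda x: f(lambda v: x(x)(v)))

      # Factorial defined without ever referring to itself by name.
      fact = Z(lambda rec: lambda n: 1 if n == 0 else n * rec(n - 1))
      print(fact(5))  # 120
      ```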

    2. Boronx

      Since a computer can perform any algorithm, and nobody has shown that thought isn't an algorithm, nobody knows.

      Humans are the only animals that can carry out any algorithm. You might say all other animals are less powerful than a computer.

      1. DudePlayingDudeDisguisedAsAnotherDude

        "Since a computer can perform any algorithm"
        A computer can perform any algorithm that can be encoded using the above language. You can write an algorithm to enumerate all natural numbers (or integers); but you cannot write an algorithm to enumerate all real numbers. Just to give a simple example.
        More to the point though: can human intelligence be expressed using the above language? To put it another way, are we just a complex Rube-Goldberg machine?
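
        A sketch of that asymmetry in Python: enumerating the integers takes a few lines, while Cantor's diagonal argument shows no analogous program can exist for the reals (the generator below is illustrative only):

        ```python
        from itertools import count

        # Every integer appears at some finite position in this stream,
        # which is exactly what it means for a set to be enumerable.
        def integers():
            yield 0
            for n in count(1):
                yield n
                yield -n

        gen = integers()
        print([next(gen) for _ in range(7)])  # [0, 1, -1, 2, -2, 3, -3]
        # No such generator can cover the reals: given any claimed
        # enumeration, diagonalization builds a real number it misses.
        ```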

        1. Boronx

          A human also cannot enumerate all real numbers.

          It's easy to create a computer that's better at it than any human!

          "can human intelligence be expressed using the above language?"

          Like I said, we don't know.

          If the only nervous system you'd studied was the nematode's, could you guess that similar components would be the basis for human intelligence?

  11. kaleberg

    Claiming this unit has the power of a human brain is rather naive. I am guessing the calculation assumes that brains have on the order of 10^11 neurons which can deal with 10^2 inputs each, with response times on the order of 10^2 times per second. That would get you a petaflop (10^15).
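
    Spelled out, that estimate is just the product of three order-of-magnitude guesses (the figures are the comment's own, not measurements):

    ```python
    neurons = 1e11  # ~10^11 neurons
    inputs = 1e2    # ~10^2 inputs per neuron
    rate = 1e2      # ~10^2 responses per second
    print(neurons * inputs * rate)  # 1e+15, i.e. about a petaflop
    ```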

    Still, that's pretty conservative, since response times are often faster than 10 ms, and there are a lot more connections than that, especially if one considers inputs and outputs. This also ignores glial cells, which handle routing and learning. That estimate is likely at least two orders of magnitude low.

    More seriously, the brain is optimized for a certain type of computing involving processing real-time data. Think of it as having specialized instructions for sequential memory, spatial processing, and real-time sensor analysis and muscle control. This is probably worth another one or two orders of magnitude.

    I have no problem believing that a computer can do what a human brain can do, but I doubt we have more than a clue about what the human brain does and how it does it. Machine learning systems have to rederive processing paths that the brain has been optimized for over billions of years. Could a cluster of 10,000 or perhaps 1,000,000 of these boards do what a human brain can do? Possibly, but any serious software developer can tell you that the hard part of programming is figuring out what the program should do.

    More seriously, there is a fantasy that simply increasing the quantity of computational ability would let us build a "superintelligence", but there is no reason to believe this. It is simply an article of faith. There's no evidence that people with twice the brain mass are twice as smart. Again, the hard part is the programming. Machine learning has to be taught based on outside data and by evaluating its own responses, but this has intrinsic limits. We don't have an available superintelligence to provide us with a training set or sparring partner. Would we even recognize a superintelligence if we were to build one?

    P.S. I should point out that the same faith that says all computing is Turing computing, and allows us to believe human and machine computing are commensurate, also places a limit on how smart any computing machine can be.
