
AI is almost here

I wrote this about artificial intelligence in 2017:

Even if Moore’s law slows down or stops, the total power of everything put together—more use of custom microchips, more parallelism, more sophisticated software, and even the possibility of entirely new ways of doing computing—will almost certainly keep growing for many more years.

....We’ve finally built computers with roughly the raw processing power of the human brain—although only at a cost of more than $100 million and with an internal architecture that may or may not work well for emulating the human mind. But in another 10 years, this level of power will likely be available for less than $1 million, and thousands of teams will be testing AI software on a platform that’s actually capable of competing with humans.

The custom microchips turned out to be (for now) Nvidia graphics processors. Increased parallelism takes the form of cloud computing. The software is transformer-based large language models. And the processing power of the human brain is on the order of 10-50 petaflops, far less than what $1 million worth of Nvidia's latest chips can deliver via a cloud network.
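A back-of-envelope version of that comparison (a rough sketch; the per-GPU price and throughput figures below are assumptions for illustration, not quoted specs):

```python
# Back-of-envelope check: does $1 million of GPU compute exceed a
# brain-scale 10-50 petaflops? All figures are rough assumptions.

BUDGET_USD = 1_000_000
GPU_PRICE_USD = 30_000         # assumed price of one high-end data-center GPU
GPU_PFLOPS = 2.0               # assumed low-precision throughput per GPU, in petaflops
BRAIN_PFLOPS_RANGE = (10, 50)  # assumed brain-equivalent compute

gpus = BUDGET_USD // GPU_PRICE_USD
total_pflops = gpus * GPU_PFLOPS
print(f"${BUDGET_USD:,} buys ~{gpus} GPUs, roughly {total_pflops:.0f} PFLOPS, "
      f"vs. a brain estimate of {BRAIN_PFLOPS_RANGE[0]}-{BRAIN_PFLOPS_RANGE[1]} PFLOPS")
```

Under those assumed numbers, a $1 million budget lands comfortably above the brain-scale range, which is the comparison the paragraph is making.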

I'm just bragging here. Sorry. It's just that at the time I first started writing seriously about artificial intelligence (in 2013) I had to put up with endless pushback to the prospect of AI ever existing, let alone in the near future. The idea that our main problem was raw compute power seemed laughable. But as near as I can tell, my only mistake was being too conservative. Five years ago we didn't have the software for today's chatbots, and it didn't matter since we had nowhere near the compute power to make them work. Today we have both. Progress has been spectacular beyond belief.

This is why I now think true AI will be here around 2030-35 rather than 2040. After decades of disappointment, AI is now exceeding expectations. There's very little reason to think this will slow down anytime soon.

34 thoughts on “AI is almost here”

    1. dydnyc

      There are actually a lot of reasons to think it will slow down, and these are apparent when you write software against these services every day, which I do. No one in the press is talking about how the model has degraded since GPT-4, but it's widely known among people who use it. Increasing the context of the model leads to degraded performance as the matrix gets very, very large. The solutions to these problems are old technology like vector databases and better search. The way you need massive compute and electricity to solve elementary problems, given its lack of a coherent model, reminds me of bitcoin. I wish you were working with large text documents and pushing demanding loads onto Claude or GPT-4. I think you'd lose your smile.
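      (For readers unfamiliar with the vector-database approach mentioned above, here is a minimal retrieval sketch. The embed() function is a hypothetical placeholder standing in for a real embedding model, and plain cosine similarity stands in for any particular product.)

      ```python
      import numpy as np

      def embed(text: str) -> np.ndarray:
          """Hypothetical placeholder: map text to a fixed-size vector.
          A real system would call an embedding model here."""
          rng = np.random.default_rng(abs(hash(text)) % (2**32))
          return rng.standard_normal(384)

      def top_k_chunks(query: str, chunks: list[str], k: int = 3) -> list[str]:
          """Return the k chunks most similar to the query, so only a small,
          relevant slice of a large document is sent to the model."""
          q = embed(query)

          def cosine(a, b):
              return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

          return sorted(chunks, key=lambda c: cosine(q, embed(c)), reverse=True)[:k]
      ```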

      1. cmayo

        Sure, but that's not what Kevin's been crowing about all these years.

        Also, on a broader scale (admittedly not on a single person scale), advances in technology result in different jobs - not fewer jobs.

            1. Yikes

              I would also like to see Kevin's definition; maybe he already posted it, but the phrase "AI" is being used so much now it's meaningless.

              We have had, sheesh, I don't know "first level AI"(?) for years -- a great example is getting the address and directions for a business.

              We are sort of barely at "second level AI" an example of which would be voice recognition software good enough, for example, to eliminate waiters by having a simple speaker at a restaurant table. One can already dictate into a cellphone.

              But in any event "second level" starts to run into the problem which I think is going to slow down "AI" much more than people seem to realize: to put it bluntly - what is the question? what is the task?

              Most conversations, which, say chat models are close to replicating, depend on the person accurately assessing the problem enough to ask the correct question. We are nowhere near that. Nowhere near.

              And I don't even see a path to it at the moment.

        1. Dave_MB32

          In general, you're right that technology brings different jobs and not fewer jobs. AI will bring different, yet fewer jobs. The nature of it is such that it does a lot of the grunt work.

  1. emh1969

    Today I fed about 200 words of text to ChatGPT and asked it to summarize the information using a maximum of 35 words. The response I was given had 47 words. This was just one of many simple failures that ChatGPT made for me today.

    But yep, AI is almost here. Sorry Kev, but I'm going to keep laughing at you...

    1. m robertson

      20 years ago, the on-screen keyboard was non-viable. Today you are using it to assert that technology has reached its apex.

  2. Doctor Jay

    The biggest problem with this sort of prognostication is definitional.

    It is nearly impossible to articulate an operational definition of AI. And by "operational" I mean a definition that can be evaluated without a lot of judgement involved.

    When I was in CS grad school in the early 80's, I didn't really study AI but I was aware of the things they were doing then, which were maybe not all that similar to what is happening now.

    One of my friends in grad school remarked that "once we know how to do something, it stops being AI" - which speaks to how poorly the whole thing is defined.

    Also, if AI is so great and so close, how is it that the completion algorithm on my "smart" phone has gotten very noticeably worse lately? It used to be able to turn "dont" into "don't" but now it won't.

    1. aldoushickman

      I think that the definition question is pretty key. If we mean by AI like, a synthetic human-or-greater mind, then I don't think that we are all that close. If we mean by AI a system that can do a lot of things that up until recently you needed a human to do, more or less as well as a human can do them, then I guess we are pretty close.

      Lower-case's point above is a good one, though: something needn't be a machine mind (or skynet or ultron etc.) to be disruptive and dangerous. Very complex and powerful new systems can easily do very bad things we didn't predict. Facebook (and other tech companies) created a system that allowed it to increase engagement and microtargeting of ads/media content. Which, arguably, led to the Trump campaign having enough of an edge that it prevailed in 2016, and provided fertile grounds for all the qanon weirdos to multiply and get weirder. Doubtless, that was never Facebook's intent, but that sort of tech development changed the way the game was played in ways that are testing whether or not the game has longterm viability.

      So, who knows what impact LLMs will have? Probably more on the scale of the cell phone than the printing press, but they are likely to cause a lot of change, and it's likely that a lot of that change will be bad, despite LLMs not being remotely close to a synthetic mind.

      1. kahner

        even "human-or-greater mind" isn't well define. greater at what, measured how? and though i see a lot of risk with whatever type of AI develops, i also see a lot of possibly major upsides with types of AI and most people wouldn't call AGI, but are greater than humans at lots of important things, medical research, diagnosis an treatment being one much touted use case.

  3. pipecock

    “AI” as in “computers that are smarter than humans and better than them at complex tasks” is still never happening. “AI” as in “machines good enuff at things that companies will further the general enshittification of everything by allowing them to do a worse job than humans would just because they’re cheaper” has been here for a while so…

    Maybe less tooting of one’s own horn unnecessarily, eh?

    1. MattBallAZ

      >“AI” as in “computers that are smarter than humans and better than them at complex tasks” is still never happening.

      Never is a long time, and this implies something "magic" and non-physical about the human brain. I sure wouldn't bet on that.

      1. aldoushickman

        Human brains are constructed of matter, so we know with absolute certainty that it is possible to build a machine that can do the same thing as a human brain, even if we don't know how.

        And, since we know (a) that some human brains are cleverer than others, and (b) there are all sorts of non-human brains that are much less clever than those of humans, it's a pretty safe bet that since it's possible to build a machine that can do what an average human brain can do, it's also possible to build a machine that can do what an above-average (maybe far, far above average) human brain can do.

        So yeah, never is a long time.

        1. realrobmac

          Human brains are organs. Indeed that makes them matter. But they are part of a complex system in the human body and are not some standalone machine. Frankly I think we'll have a fully functional artificial spleen before we have a fully functional artificial brain.

          1. aldoushickman

            The point isn't that a brain is easy to make, it's just that anybody claiming that it is impossible to make a thinking machine has to confront the fact that there are billions of thinking machines walking around already. We don't know how to fab one in a factory (or any other way aside from the old-fashioned), but those claiming that it is impossible are plainly wrong.

            But otherwise, totally agree: brains are complicated, and more complicated than spleens.

    2. jambo

      “Allowing them to do a worse job than humans…”

      As someone who has called customer service only to be connected to some low-paid, low-level worker obviously following a flow chart to respond to my questions because they don't really know anything about the product, I don't think it's a stretch to think AI will actually do a better job than humans in the near future. It won't be better than ANY human, but it will certainly be better than the ones most companies hire these days.

  4. ronp

    I will be impressed when cancer or physics problems are solved by AI minds. I am doubtful that will ever happen. Maybe smart humans will direct the AI to do it though.

    1. geordie

      If we only define AI as having arrived when it can make novel breakthroughs that no person yet has, the goals have shifted a lot. By that definition probably 99% of humans are not intelligent.

      However, setting that aside: although I understand that you are talking about discovering or proving new concepts in physics, arguably AlphaFold was solving a physics problem. https://deepmind.google/technologies/alphafold/ can predict the shape of a protein down to atomic accuracy.

      It's not an LLM or a general-purpose AI by any stretch of the imagination, but if a specialized "AI" such as that can provide an answer to any problem within its niche, then it's just a matter of stitching more and more of those AIs together behind a natural-language interaction layer. Ten years ago many of the pieces to do that were still TBD. I don't think that is true any more. It will take a little while before people stop "seeing everything as a nail to be hammered" with LLMs, though.
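      (A minimal sketch of that "stitch specialized models behind a natural-language layer" idea. The specialist functions and routing keywords here are hypothetical placeholders, and a real system would likely use an LLM rather than keyword matching to pick the tool.)

      ```python
      # Hypothetical specialist back-ends; in practice each would wrap a real model,
      # e.g. a protein-structure predictor or a symbolic math engine.
      def fold_protein(request: str) -> str:
          return f"[predicted structure for: {request}]"

      def solve_math(request: str) -> str:
          return f"[worked solution for: {request}]"

      ROUTES = {
          "protein": fold_protein,
          "fold": fold_protein,
          "equation": solve_math,
          "integral": solve_math,
      }

      def route(user_request: str) -> str:
          """Crude natural-language front end: dispatch to a specialist by keyword."""
          lowered = user_request.lower()
          for keyword, specialist in ROUTES.items():
              if keyword in lowered:
                  return specialist(user_request)
          return "No specialist matched; fall back to a general-purpose model."

      print(route("Predict how this protein sequence folds"))
      ```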

  5. Kit

    The current generation of AI is jaw dropping, and still looks to have serious head room. Physics promises at least a few generations of semiconductor advances. Current hardware shows no sign of stagnating. Advances in practical applications seem to arrive on a daily basis. And basic ideas continue to pour out of researchers. But who can say when the current state of the art will hit a wall? Self-driving cars went from nothing to ‘any day now’ and then mostly disappeared from the headlines.

    I rather doubt that anyone on earth could outperform an AI on a test of one hundred random questions. And yet when does GPT reach beyond a mere great summary of other material? AI feels hollow when push comes to shove. Ask GPT about Kevin and his views, and it spits out an answer that I could only improve upon with significantly more time. And I feel I’m being generous when I say that GPT gives better results than 95% of Kevin’s commenters.

    I believe (and frankly want to believe) that humanity’s best 1% touch something beyond GPT’s reach. But most of us, most of the time, offer nothing better than the regurgitations of LLMs. If AI crashes through its current ceiling with another major advance, I can only wonder how we might grasp at fleeting notions of superiority. But not yet.

    1. bluegreysun

      “…AI feels hollow when push comes to shove. Ask GPT about Kevin and his views, and it spits out an answer that I could only improve upon with significantly more time. And I feel I’m being generous when I say that GPT gives better results than 95% of Kevin’s commenters…”

      I agree the ChatGPT output feels oddly empty and… idk, generic? But I also agree it can often respond to specific questions in a highly specific manner, in a way that looks like “understanding” (though I’m not saying that it is)… and does so in a way that’s often better than people I routinely communicate with.

      And I have seen examples where its writing is *clear and concise* in a way I’m not. Stylistically, I think it’s a better writer than I am.

  6. SwamiRedux

    Didn't we have this discussion about self-driving cars as well? This was a few years ago, when I was active on these fora. Lost interest with that weird comments system they rolled out, never came back.

  7. pjcamp1905

    You do realize that LLMs are not AI? In the AI field, they are often described as stochastic parrots.

    You've been an engineer. You should realize that things always move fast while you're picking the low-hanging fruit. You also predicted we would all have self-driving cars basically today.

    Back propagation is an amazing algorithm that can do amazing things. But being able to statistically predict the likely next word in a sentence based on remembering what others have said is not AI. It isn't even I. All the I here is in the programmers and the trainers.
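    (For what that description amounts to mechanically, here is a toy next-word predictor built from bigram counts over a tiny corpus; real LLMs use neural networks over far larger contexts, but the "predict the likely next word from what others have said" framing maps onto the same basic idea.)

    ```python
    import random
    from collections import Counter, defaultdict

    corpus = "the cat sat on the mat and the cat slept".split()

    # Count how often each word follows each other word ("what others have said").
    bigrams = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        bigrams[prev][nxt] += 1

    def next_word(prev: str) -> str:
        """Sample the next word in proportion to how often it followed `prev`."""
        words, weights = zip(*bigrams[prev].items())
        return random.choices(words, weights=weights)[0]

    print(next_word("the"))  # usually "cat", occasionally "mat"
    ```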

    1. Dave_MB32

      You're describing machine learning. AI doesn't require the computer to learn it on its own. The EU is requiring that any AI that will affect people be guided or trained AI.

  8. illilillili

    > The custom microchips turned out to be (for now) Nvidia graphics processors.

    The custom microchips are also Google's Tensor Processing Units: https://en.wikipedia.org/wiki/Tensor_Processing_Unit
