
ChatGPT is not AI

Atrios today:

Calling those programs "AI" is only slightly less stupid than thinking a link to an ape.jpg is worth millions. I think it's losing — by giving it too much credibility — to get into arguments about why it isn't in any sense "artificial intelligence." But it isn't! Everyone knows it isn't!

It's funny that he mentions this. I've been fruitlessly making the same point for years in the face of overwhelming marketing campaigns that insist every product is "powered by AI" or some other nonsense. But even the most advanced examples of smart computers—Watson, Deep Blue, etc.—aren't artificial intelligence or anything close to it.

But last year I finally gave up. The onslaught of ChatGPT, and the admittedly amazing things it could do, led to a universal belief that it was AI. At some point, you just can't turn back the tide. So fine. It's AI.

But it's not. Even if you have an expansive definition of AI, ChatGPT isn't. It's just another step along the way.

33 thoughts on “ChatGPT is not AI”

  1. kahner

    I think this is very wrong. AI is the ability of a computer to do tasks that are usually done by humans because they require human intelligence. ChatGPT and other LLMs can do this and can be leveraged as a tool in larger systems to do even more. The fact that they do these tasks through an underlying tech that doesn't "think" in any way that corresponds to human reasoning is irrelevant to whether they can perform tasks that used to require human intelligence. I believe most people don't understand the way the tech works at all, that it's not "thinking" but making statistical predictions based on training data, and that AI might not be the best term to describe it. But by the common definition of AI, this is def AI.

    It still amazes me how, for some people, AI is forever anything that computers can't yet do, no matter how advanced what they can do becomes.

    1. Joseph Harbin

      AI is the ability of a computer to do tasks that are usually done by humans because they require human intelligence.

      I don't know why this standard would apply to LLMs and not your everyday desktop calculator. In each case, machines take input, apply rules, and provide responses. Neither involves "thinking," as you acknowledge. There are a lot of tools (software, for one) that perform tasks that used to require human intelligence, but that doesn't make them AI.

      I believe most people don't understand the way the tech works at all...

      Most people don't understand how the "tech" behind toilets works, but no one calls it "intelligent plumbing." And if they did, we'd understand it's empty hype.

      1. kahner

        I think it does apply to the calculator. Calculators are, in my opinion, a very basic early form of AI by my definition. In fact, the term "calculator" was once a job title for a person who performs mathematical calculations. Then we made electronic ones and it became a "simple tech thingy," not AI, even though initially it was revolutionary. And over the years this happens again and again, and the goalposts for what counts as AI are moved. Beating the greatest chess grandmasters, beating the greatest Go players, passing the Turing test: at one time these were all held, at least informally, as standards for "true" AI. And as each goalpost is passed, it is quickly moved further down the field. And honestly, comparing the cutting edge of AI to a toilet in this context is just silly. But I really don't know what definition of AI people are using if systems like ChatGPT don't fit it.

        1. Joseph Harbin

          Before there were calculators there were mechanical adding machines. They go back as far as Blaise Pascal in the 1600s.

          No one called adding machines or calculators AI. You can if you want to, but then you'd be the one moving the goalposts.

          1. kahner

            well, considering the term Artificial Intelligence was coined in 1956, it would be pretty surprising to find someone in the 1600s using it to describe anything. they also probably didn't call light an electromagnetic wave for quite a while, but i'm pretty sure it still actually was one.

  2. clawback

    No, the term artificial intelligence refers to systems that attempt to mimic human intelligence in some way. It has had this meaning for decades. It doesn't have to succeed to your or anyone else's satisfaction to be labeled AI.

  3. OrdoSeclorum

    I think the term AI is too vague to mean much and is context dependent. I think ChatGPT is definitely AI in the way most people use the term, but it's not AGI, and it's not conscious or sentient.

    I'd say that a dolphin counts as an "intelligent species" if we met it on another planet. I think GPT4 is more intelligent than a dolphin because it can do many more of the things I would have put on a list of "stuff that requires intelligence" had I written that list in 2020. But because a dolphin is conscious and aware and has agency, we rightly consider its intelligence to be of a different sort.

    If we say that GPT4 can only behave intelligently because it's doing some sort of trick, then it's possible *I* am also not intelligent because my brain is just doing a trick.

    1. aldoushickman

      "I'd say that a dolphin counts as an "intelligent species" if we met it on another planet."

      I've always thought that this was a fascinating question. We normally think of finding intelligent life out there in the cosmos as beings flying around in starships. But what if we found a planet with things like cavemen? Or dolphins? Or wolves? Or octopods, or colonies of bees? Are those things "animals" or are they civilizations? And if they are civilizations on other planets, why not here, too?

      It's even more interesting when you think about something like a coral reef. There's a whole ecology of different species interacting, some of whose interactions depend not just on mere biology, but also on behaviors that look suspiciously like interspecies _culture_.

      We don't have to go so far as a Solaris-like question of whether we are capable of recognizing something as life, or whether something alien can conform to our definitions of intelligence--if we saw something on another planet like different species of reef fish lining up for parasite cleaning stations, what would we make of it?

  4. Christof

    Agreed. And our (real) AI overlords in the not too distant future are going to use this information to prove that we're not very knowledgeable and need to be led by them. All hail Colossus!

  5. Ugly Moe

    ChatGPT can probably be said to pass the imitation game test (or Turing Test).

    As far as I can tell the evidence in support of people thinking is quite thin.

    1. Joseph Harbin

      The Turing Test standard says that a machine that can fool a human into thinking the machine is human can be said to be intelligent. Lots of problems with that standard. For one, humans can be fooled by lots of things, but that doesn't mean the simulation possesses the quality of the real thing.

      I think John Searle's Chinese room is a better alternative for understanding what's called AI.

      1. Steve_OH

        ??? Searle's Chinese room isn't a test of AI, it's an argument to support the claim that there is no such thing as Strong AI (nowadays usually called "General AI").

    2. CAbornandbred

      "As far as I can tell the evidence in support of people thinking is quite thin."

      Certainly a large group of people today can accurately be described as not thinking, aka thoughtless, pitifully unaware, smart as a stick of wood (sorry, wood, for the comparison), or nowadays right-wing White Supremacists, or even better racist, antisemitic, homophobic "people". They believe what they're told to believe. No actual thought required. See Kevin's post on why Republicans think Biden should be impeached.

    3. kennethalmquist

      I don't think ChatGPT can be said to pass the Turing Test, but the primary giveaways are things that have been trained into it. (Sample exchange: Q: Do you love me? A: As a machine learning model, I am not capable of experiencing emotions such as love....)

  6. lawnorder

    Dictionary issues are seldom worth arguing about. By Kevin's definition of "artificial intelligence", ChatGPT is not it. By other definitions, it is. What's needed in such disputes is to agree on a clear, specific definition of the terms before arguing about whether or not the terms apply.

    1. Joseph Harbin

      You can probably find a scientific consensus on a definition of AI. But don't expect others to follow it. The marketeers have a new buzzword for making money and no one's going to stop them.

    2. Citizen Lehew

      Software engineers already do mostly agree. I think Kevin and Atrios are just confused about the distinction between AI and AGI.

      AI (Artificial Intelligence) is a computer doing things previously only doable by a human, such as image recognition. So yeah, ChatGPT.

      AGI (Artificial General Intelligence) is basically a computer having the ability to solve problems it wasn't trained specifically to solve, which can then potentially lead to the question of "sentience". That's the one that raises everyone's eyebrows. So no, not ChatGPT.

  7. JoeSantos

    I love a good semantic argument as much as the next annoying person, but I think you and Atrios are sort of missing the point. The problem is that most of us nerds use "artificial intelligence" to mean "artificial sentience" and yeah, obviously ChatGPT isn't that. But if you use "intelligence" in the broader sense of a system that knows how to follow rules to produce useful results, then sure, it's artificial, and it's "intelligent" in the way it's intended to be.

    1. kahner

      Totally agree that it's a definitional debate. But as a nerd I definitely don't think the broad definition of AI should require sentience. Hell, by that definition I don't know if we can ever even decide if something is AI. I'm not even sure I'm sentient, let alone anyone else.

  8. gVOR08

    Brad DeLong described ChatGPT as a "stochastic parrot". Seems about right. The good news is that all LLMs do is basically take a string of words and fill in a blank with what normally follows. The bad news is that a lot of people do no better. And a lot of jobs demand no better.
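
    To make the "fill in a blank" idea concrete, here is a minimal sketch of that word-prediction loop: a toy bigram model in Python. (This is only an illustration of the autocomplete intuition; real LLMs use neural networks over subword tokens and much longer contexts, and the corpus here is made up.)

        import random
        from collections import Counter, defaultdict

        # Toy "stochastic parrot": count which word follows which in a
        # training corpus, then generate text by repeatedly sampling a
        # statistically likely next word.
        corpus = "the cat sat on the mat and the dog sat on the rug".split()

        follows = defaultdict(Counter)
        for prev, nxt in zip(corpus, corpus[1:]):
            follows[prev][nxt] += 1

        def next_word(prev):
            counts = follows.get(prev)
            if not counts:                      # dead end: restart anywhere
                return random.choice(corpus)
            words, weights = zip(*counts.items())
            return random.choices(words, weights=weights)[0]

        text = ["the"]
        for _ in range(8):
            text.append(next_word(text[-1]))
        print(" ".join(text))   # e.g. "the cat sat on the mat and the dog"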

    1. dotkaye

      thank you for the link; Ted Chiang is always worth reading.

      I work for a software company. The marketing folks decided on a relaunch and branding that called our software "AI-enabled". I'm proud to say there are enough software engineers left in the company that we were able to kill that slogan.

  9. lcannell

    My hypothesis: for most, “AI” is any software that involves a non-transactional algorithm. Posting a comment on Facebook or buying something on Amazon is not AI because I understand the transaction. Otherwise, it must be AI. Google would brand google.com as AI if it launched today.

  10. golack

    They are "AIs," just not generalized AIs. If you use AI tools to create a system that is "trained" and uses that training to generate results, then it is an AI. Most systems are relatively simple, e.g., using AI to help evaluate mammograms. ChatGPT, etc., are just relatively simple AIs that predict words. They don't understand what words mean, just that they go in a certain order.

  11. Pittsburgh Mike

    You can't really write stuff like this without providing your definition *of* AI. It seems that you have something like Marvin the Paranoid Android as your model, but I think that's too restrictive a definition.

  12. Doctor Jay

    Once upon a time in the 80's I was a CS grad student at a leading school. For a part of that time, a leading AI researcher was chairman of the department. So we got a regular dose of the AI puffery of the time.

    A good friend - also a CS grad student who was not an AI guy - said something that has stuck with me, and describes the current discussion:

    "Once we know how to do something, it stops being AI" - Paul Asente

  13. DonRolph

    I am curious: how would you know?

    There is nothing which compels computer-based AI to resemble human intelligence.

    Indeed, my sense is that we have only a very poor definition of human intelligence, or of intelligence overall.

    So with a poor definition of intelligence, and no reason why computer intelligence should take the form of human intelligence, how would you know if any program is in fact intelligent?

  14. kenalovell

    The Feedly news aggregator has what it calls an AI function which is hilariously stupid. It attempts to tag random stories and asks users to report whether it was correct. It's idiotically wrong so often that I can only assume the users who bother to respond deliberately mislead it.

    Its tag for this post reads "Is this article about Advertising?" Click on "No" and it responds cheerfully "Thanks! Your feedback helps make Feedly AI smarter. Feedly AI was 77% confident this article was about Advertising."

  15. pjcamp1905

    So it is no longer going to be writing PhD dissertations in 5 years? I've argued for a long time that machine learning is not AI since intelligence, by any reasonable definition, (a) is a spectrum, not a point; and (b) is not the same thing as memory, which is essentially what machine learning is.
