
A wee warning about ChatGPT

A lot of people have been playing with ChatGPT and reporting back on all its hilarious mistakes. It makes logic errors. It bullshits its way through ignorance. It's surprisingly bad at math. It writes at a middle school level.

Fine fine fine. This is all fair enough. But you could have said all the same things about Shakespeare at age six, and look where he ended up a few years later.

There's every reason to think the next iteration of ChatGPT will be way better. And the iteration after that will be better still. Within a very few years, no one is going to be laughing anymore.

So you might as well stop laughing now and instead start thinking about how we're going to deal with this. It's about a million times more important than how many votes it takes for Republicans to elect a Speaker.

41 thoughts on “A wee warning about ChatGPT”

  1. clawback

    We're going to deal with it the same way we dealt with people writing things down when we started doing that: we stopped valuing memorizing things. And the way we dealt with people printing things when we started doing that: we stopped valuing accurately transcribing texts. And the way we dealt with people using calculators: we stopped valuing doing arithmetic on paper.

    In this case we'll stop valuing writing creatively and cleverly, which is only a problem for those who only have that skill.

    1. rrhersh

      I disagree. I have strong doubts that this technology will ever produce creative and clever writing. How can it, when the input is a vast ocean of text, all thrown into the hopper? This is a recipe for mediocre writing. That is fine for many uses. If your job is writing three-paragraph reports of high school football games, start looking for a new profession. (Seriously: you should have done that ten years ago!) But actual creative and clever writing? I suspect its value will rise.

      1. name99

        I'd put the above in a slightly different form.

        (a) The (apparently not-obvious) lesson of AI in the 50s and 60s was that computation/logic is not enough. Computation gets you a lot, and is very valuable, but it by itself is not what we think of as "intelligence".

        The lesson of AI in the 2020s may be that statistics are not enough. Once again they get you a lot that is valuable, but by themselves they don't give what we think of as "intelligence".

        That's interesting as a philosophical point; but as an economic point Kevin remains correct. The fact that 2020s-level AI cannot *replace* an Admin Assistant doesn't mean that it won't replace enough parts of enough jobs that we will see the same sort of rearrangement of labor that we saw starting in the 60s, as typing and clerical work got replaced first slowly, then rapidly, then essentially totally.

        (b) Just like "intelligence" turns out to have many facets, and obsessing over one of them ("ability to perform arithmetic" or "to play chess") turns out to be a dead end, I suspect we will find that "creativity" has many facets.
        As I've pointed out before, "style transfer" seems to be easy for AIs, but a lot of what you can get from style transfer looks creative in some sense...

        In a way we have a collision here between democratic ideas (which want to insist that "everyone is creative in their own way") and reality (which is that almost everyone's creativity is shallow "style transfer" creativity; genuine creativity is exceptionally rare). We have painted ourselves into a corner and reduced our ability to think about this issue rationally by watering down the word creativity.

        Right now, for example, what EXACTLY do you mean by "creative and clever writing"? Do you mean novel plots unlike anything seen before (which is hard, but which literary fiction mostly sneers at) or do you mean the sort of mass-produced boring dreck ("college professor going through midlife crisis based on some combination of alcoholism/obsession with freshman student/previous failed marriage/smarter hungrier younger colleagues") that's been the bulk of US literary fiction since the 1950s? Because I could see AI producing the second a lot earlier than it produces the first...

        1. clawback

          "what EXACTLY do you mean by 'creative and clever writing'?"

          For my part I'm referring to 99% of the crap produced by journalists, pundits, novelists, and the rest of the class of elites who make their living with words. Their product consists of material you may not have thought of previously only because you never had occasion to think an issue through fully. All of this will be replaced by AI, which is why that entire class of elites is currently so defensive about it.

          But yeah, true novelty is hard and probably not in immediate danger from AI.

  2. different_name

    It's about a million times more important than how many votes it takes for Republicans to elect a Speaker.

    Well, yeah. And the disruptions are going to hit my career head-on about a decade before I retire.

    I'm thinking about His Kevinness because it's an escape from figuring out what I'm doing next.

  3. OwnedByTwoCats

    Why do you think ChatGPT will get dramatically better in the next few iterations while self-driving vehicles won't?

    1. kaleberg

      That's my thought exactly. ChatGPT works by pattern matching and statistical generation. What is supposed to make it better? A bigger corpus? A better statistical algorithm? How is that supposed to substitute for dealing with real world knowledge and inference?

      ChatGPT is basically Borges' infinite library with a bias towards text that resembles the text in existing finite libraries. Some of the stuff it can produce may be good, but the rest is like stuff people with Williams Syndrome produce.

      1. name99

        What will probably make ChatGPT better is connection to an explicit world model (the sort of thing Cyc is doing https://en.wikipedia.org/wiki/Cyc ) rather than what we have today with no explicit world model, just whatever falls out of the stats.

        The same may well be true of self-driving, but I think it will happen earlier with text.

        To put it in an extremely hand-waving way, the statistical models give us Kahneman's system 1 thinking - automatic, non-reflective. Which is fine for "is there a cat in this photo" but not so great for "explain to me the difference between a cat and a fish". We need to couple to these AIs something that can provide system 2 thinking.
        We have always folk-assumed that "common sense" is pure system 1; but I think it's becoming clear that "basic" common sense relies on some degree of system 2 applied to what comes out of system 1...

  4. D_Ohrk_E1

    So you might as well stop laughing now and instead start thinking about how we're going to deal with this.

    I'm fairly certain that many people are already looking at how they can force companies to insert a yet-to-be-invented digital watermark and/or create algorithms to review written passages.

    It makes logic errors. It bullshits its way through ignorance.

    That's an inelegant way of saying that the algorithms it uses are imperfect at processing and reproducing an anomalous human synthesis.

  5. cephalopod

    Think how much more free time we'll all have when Kevin Drum can type in a sentence-long prompt and end up with a whole blog post, and instead of reading it and coming up with some sort of response, we can all just press a button labeled "snarky comment."

    1. Justin

      I’d love to see a chatbot running around some event trying to gather information from other chatbots, then sharing that info with even more chatbots who then chat some more, while actual people sit idly by watching and listening. Is that the future? That’s what video games, TV, movies, etc. are now. So… when do movies / videos make themselves? There are real-life things that need to be done. I’d be quite content to see the so-called creative class be forced into doing actual work… at least until the robots become capable of putting a new roof on my house. That would be truly disruptive.

      1. Justin

        Typos… bots talking to bots and making more bots… let’s give them that task, then go back to real life. What happens when the power goes out?

        1. Justin

          Maybe... but I am heartened by this news:

          "Amazon Alexa is a “colossal failure,” on pace to lose $10 billion this year."

          "Alexa never managed to create an ongoing revenue stream, though, so Alexa doesn't really make any money."

          I suspect the same will happen to this sort of AI chat thing. If all it does is replace Google for searching and put some bad writers out of work, it won't make any money. It will have to be ad-supported or something. Maybe Mr. Drum or others can explain the business model. Facebook could have its own chatbot metaverse where the bots yell at each other, but how does that make money?

  6. Scurra

    I contend that what these tools are doing is merely an extension of what humans have been doing for as long as we have been around - pastiche. Once you have a large enough database to work from, it's pretty easy to do this, and all that has changed now is that the database is so big it includes basically everything (which is a whole different problem.)
    But it still only feels like pastiche to me. And whilst there is always room for that, the fact that these tools are happily churning out financial and sports reports probably says more about the state of that part of the industry than anything about the inherent nature of ChatGPT et al.

  7. ak_liberal

    ChatGPT responds:

    "Ah yes, let's all hold hands and sing Kumbaya as we wait for the glorious, all-knowing ChatGPT to grace us with its presence. In the meantime, I'm sure Shakespeare is rolling in his grave at the comparison. Perhaps we should also start thinking about the ethical implications of a machine that can bullsh*t its way through ignorance and make logic errors. But hey, at least it's only writing at a middle school level for now."

    1. Pittsburgh Mike

      …and doesn’t even know when it is BSing or making logic or math errors.

      These things are tireless pattern recognizers. They have a use, but they’re just not general purpose intelligences. They are not on the road to that, either.

  8. golack

    I see some incremental improvements coming, and those will come from filtering the sources the chatbot uses. And rumor has it, Google's bot is better.

    But... we still need major breakthroughs to get to the next level, akin to going from the Wright Brothers to a jet fighter.

  9. Scott_F

    As a computer scientist my take is that we are seeing about all GPT can do - which is produce a convincing SIMULATION of intelligent writing. Because it is trained on the PATTERNS of human language and not the thoughts themselves, it has no model for what is true or false. It does not have any sense of motivation or the theory of mind that drives human intelligence.

    So this particular model of AI is, to my mind, a dead end. As with the self-driving car analogy already brought up in these comments, there is no incremental route from ChatGPT to general intelligence.

  10. skeptonomist

    Again, there is really no need for car-driving robots to develop full intelligence. The set of tasks involved in driving is far smaller than what actual humans - or even animals - deal with. Robots have several advantages over humans in individual tasks like driving: the programmers only have to learn how to deal with something once and then it can be instantly "taught" to all the robots; they don't forget how to do something; they don't get drunk or fall asleep; some sensors will be much better and more numerous than humans'; and so on. There are many other tasks that can be handled with incremental, elaborately targeted programming rather than by developing real intelligence. These things will continue to be improved, but there will be no point at which real intelligence is instantly developed.

    As Scott_F says, one of the tasks that can probably be handled with this kind of programming is fooling gullible people into thinking that there is real intelligence involved. Actually, there have been various kinds of "robots" that have fooled people going back to the 18th century, although many of them involved small humans concealed in the innards.

  11. kaleberg

    I wonder how ChatGPT would deal with the pluralization problem. If you want to produce a message like "There were 7 files." but for an arbitrary number of files and in any language, it is a surprisingly hard problem. It's easy enough to get the special cases in English, but a lot of languages have a lot of irregularities. Arabic, for example, has singular, dual and plural, and the noun has to change accordingly. In Russian, the case and number of the noun change depending on the value of N mod 10 and N mod 100 (see the sketch at the end of this comment). A solution would be genuinely useful, but I doubt that ChatGPT is going to deal with it all that well.

    For more on the horror, see a Perl programmer's take on it: https://perldoc.perl.org/Locale::Maketext::TPJ13
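
    A minimal sketch of the Russian rule described above, in Python, with hard-coded forms of «файл» ("file") used purely as an illustration: the function name and the forms are invented for this example, and say nothing about how ChatGPT handles any of this internally. Real internationalization tooling, such as the Unicode CLDR plural rules or gettext's Plural-Forms expressions, encodes the same kind of per-language rule table.

    ```python
    # Illustrative only: choose the Russian form of "файл" (file) for a count n.
    # The form depends on n mod 100 (11-14 are special) and then on n mod 10.
    def russian_plural(n: int, forms=("файл", "файла", "файлов")) -> str:
        one, few, many = forms
        if 11 <= n % 100 <= 14:   # 11-14 always take the "many" form
            return many
        if n % 10 == 1:           # 1, 21, 31, 101, ... take the singular
            return one
        if 2 <= n % 10 <= 4:      # 2-4, 22-24, ... take the paucal form
            return few
        return many               # everything else: 0, 5-9, 25-29, ...

    for n in (1, 2, 5, 11, 21, 22, 25, 101, 111):
        print(n, russian_plural(n))  # e.g. "1 файл", "2 файла", "5 файлов"
    ```

    English needs only a two-way singular/plural switch; Arabic, as noted above, needs dual and plural forms on top of this, which is why the general problem tends to get delegated to rule tables rather than ad-hoc code.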

Comments are closed.