
The beginning of the end of work

It begins:

When ChatGPT came out last November, Olivia Lipkin, a 25-year-old copywriter in San Francisco, didn’t think too much about it....In April, she was let go without explanation, but when she found managers writing about how using ChatGPT was cheaper than paying a writer, the reason for her layoff seemed clear.

....For some workers, this impact is already here. Those that write marketing and social media content are in the first wave of people being replaced with tools like chatbots, which are seemingly able to produce plausible alternatives to their work.

This is not what people expected when AI first became a topic of conversation years ago. Everyone figured the first victims of job loss would be workers doing routine, repetitive tasks: data entry clerks, customer service reps, taxi drivers, retail workers, and so forth. Then in 2017 Google researchers published the transformer paper that underpins today's large language models, and within a few years the most visible form of AI was suddenly coming for white-collar, non-repetitive work.

Automation has been taking jobs for years, of course. In that sense there's nothing unique about Lipkin's experience. Most likely she'll get another job, this time at a shop that needs writing skills above the bare minimum that ChatGPT can currently provide.

But there's still a difference. In the past, automation was a two-edged sword. Power looms took away weaving jobs, but there were new, better paying jobs tending and maintaining the machines. The transformation was slow and painful, but eventually everybody was re-employed and making more money than ever before.

Today there's nothing like that on the horizon. When ChatGPT gets a little better it will put copywriters out of work, full stop. Some of them will find entirely different work for a while, but none of them will be writers again. Nor will there be "writing machines" for them to maintain. The machines are just software in the cloud. They will maintain themselves with only sporadic help from a small cadre of programmers who neither need nor want help from former writers.

And so most writers will be out of work with no plausible alternative. At first it will just be entry-level writers, but very quickly the software will improve and journeyman writers will also be replaced. Then it will be the very best writers. Even poets and novelists will eventually be replaced.

And they'll have nowhere to go.

49 thoughts on “The beginning of the end of work”

  1. Dave Viebrock

    Right now it’s largely free. Wonder how much this changes when they start charging for it? I’ve also owned stock in Nvidia for years, so I’ll probably make money if my job disappears.

  2. AbolishFederalIncomeTaxes

    The distortion of the value of and return on human labor has been greatly constrained by Capitalism. I'm a big fan of Piketty and his views on return on capital vs. labor. AI will only accelerate this trend. Humans don't like change. Those who embrace it will prosper. That's why the MAGAs would prefer to burn it all down.

  3. GuyB

    So, who are these increasing numbers of laid off people going to vote for when they can't get comparable jobs? Assuming that our social safety net in the US is no better than it is today. Seems to me that they are going to be angry... kinda like the MAGA folks are angry about demographic change (or whatever root cause you believe motivates them). Any reason to believe that they won't vote for the next demagogue who promises to turn back the clock?

  4. jdubs

    Just a reminder that this is the exact same take that came with the power loom and countless other improvements.

    This time is always different. This time the impact will be momentous and irreparable.

  5. skeptonomist

    The displaced hand weavers didn't get jobs maintaining the new weaving machines - the whole idea of the machines was that far fewer workers were needed to output a given amount of material. New jobs appeared in making entirely new, different and better things. This is how productivity increases, not by getting people to work harder for lower wages.

    What is different this time that causes so many people to write about the end of jobs? Wait - I think I may see a connection - writing is one of the things that are supposed to be replaced. A lot of writing is repetitive and mechanical. Some stories in op-eds and blogs are especially so, for example about how on the one hand automation is doing away with jobs while on the other hand immigration has to be increased because there aren't enough workers. Come to think of it, could an actual thinking human write stuff like this? Can we be sure that our favorite columns and blogs aren't already being done by chatbots?

  6. ScentOfViolets

    About that copy editing/proofing, uh, no. Just no. LLMs will never be remotely competent enough to trust them to do this job. Panda(s) "Eats, Shoots and Leaves" is both grammatically and syntactically correct, and an LLM checker would leave this one alone. But it's not remotely correct as any human copyeditor could tell you in an instant.

    1. aldoushickman

      "But it's not remotely correct as any human copyeditor could tell you in an instant."

      Well, sure, maybe. But (a) the example in the post was of a copyWRITER put out of work, and (b) editors or writers--ChatGPT isn't going to put *all* of them out of work, maybe just most or merely many of them. The remaining folks will spend more time checking machine prose for hallucinations and panda grammar and less time drafting stuff themselves. Which of course will be cold comfort to the folks that lost their jobs.

      1. ScentOfViolets

        Are you saying that any competent copywriter wouldn't necessarily be able to tell you that my example was incorrect?

  7. painedumonde

    Could an LLM write a haiku about a pond? I think not. Sure, it might try, but it will never tread that trail, surprise that frog, be enveloped by that splash.

    Maybe the writing KD speaks of IS just extruded product, no matter what writers think or say. Even there, though, I think there's some spark that shines through and connects with a human reader, something a machine might never manage.

    Of course I'm biased...

    1. somefineusername

      We've created the possibility of a trillion monkeys on typewriters. They can do:

      Emerald leaps, stillness breaks,
      Lily pad's gentle refuge,
      Croak whispers the night.

      Misty pond, serene,
      Leap of grace, a green dancer,
      Frog's croak, nature's song.

      1. painedumonde

        If those were machine haikus, they were wretched. If they were yours, they were wretched.

        I am sorry.

    2. golack

      With enough prompts and tries, it will get to something reasonable, especially if it has found good source material.

      1. painedumonde

        With enough prompts and tries, why not write it yourself? And your last sentence is the giveaway - it's all plagiarism.

    3. aldoushickman

      "Could a LLM write a haiku about a pond?"

      I'm pretty sure that it can. Can it write a great one, or a good one? Again, probably. But the issue isn't absolute quality--it's whether or not the LLM can write a haiku about a pond about as well as whoever is currently employed writing pond haikus.

      "I think there's some spark that shines through"

      Doubtful. We humans ascribe motivations and patterns to all sorts of things whether or not anything is really there. And even if there is some sort of detectable human "spark" in a piece of writing, I doubt that you'd be able to discern that spark in ten human novellas on the one side versus ten machine-written novellas curated by an editor from the thousand the LLM was tasked with preparing.

      1. painedumonde

        The haiku I'm referencing is one about self-discovery and wild enlightenment, not a frog, nor even a pond. Maybe, but doubtful.

        I've already read some stuff and it's putrid, Ayn Rand-level putrescence. OTOH, I can't tell you how many books lie unread in my home because of the same quality.

  8. Keith B

    Hold on. These large language models take in a lot of written material from the web and possibly other places and generate text based on a model of what's likely to come next. What happens when a larger and larger amount of the material they're working from is generated by other ChatGPT-like programs? Will human literature on all levels become the product of AIs just repeating and rearranging the words of other AIs, ad infinitum?

    1. weirdnoise

      Language evolved from human experiences. However good LLMs become at elocution, they'll still be deriving the content underlying the words from the previously described experiences of actual humans. We'll always be an integral part of the loop of communication, but a lot of the boilerplate of our day-to-day interactions will be done by AI. And opportunities for people who sit in front of their computer and make money rehashing words will be few. But those with human bodies and senses who actually interact with the strange new world that is evolving will always have a place if they are good at casting those experiences into words.

    2. bouncing_b

      I read that because they know this, the LLM builders have a huge incentive to be able to distinguish LLM product (to be excluded from their training sets) from real human product.

      Maybe they can and maybe they can't, but I'm hoping they can, and thus we all will be able to.

      Adobe is supposedly developing some kind of token to be inserted in AI Photoshop product, which is a different solution (that doesn't seem like it will work).

    1. weirdnoise

      I think a good fraction of "hallucinations" are due to the relatively unrestricted training sets used for LLMs. Yes, the systems will confabulate, especially when requests are ill-posed. But part of what the models are trained on incorporates false hypotheses and even outright lies. Lacking labels flagging them as such, these things will leak into responses.

      There is a point -- and we may well have already crossed it -- where the factual accuracy of writing produced by humans will be less than that of writing produced by LLMs. But in the latter case, who gets punished for crossing the line?

      1. golack

        I was reading somewhere that the training databases filter out unneeded words to speed the process. Words like "no" and "not" and, well, anything negative. LLMs do not understand; they just string words together. "It's sunny out" and "It's not sunny out" are just strings of words. I don't think they've found a way to properly include negations, hence their elimination. They wouldn't want a chatbot to generate "It was a beautiful not sunny day".

  9. Pingback: The beginning of the end of work | Later On

  10. mmcgowan1

    LLMs are good at writing brief articles from existing content they've been trained on. For example: "Write me a blog post on the best places to eat in New York City." They are unable to write fresh content about new events, discoveries, or concepts that don't already exist somewhere in their training data. They wouldn't be able to write, for example, the recent takedown article on Licht at CNN published in The Atlantic. Of course, you can submit a bunch of bullet points in a prompt and perhaps get something usable.

    A lot of existing content in the training materials (i.e., the Internet) is wrong, ambiguous, or confusing. LLMs are unable to detect errors and judge the quality of information as well as some trained people can. The need for a human editor will persist, to avoid humiliation from wrong statements, embellishments, and hallucinations.

    1. aldoushickman

      "The need for a human editor will persistent to avoid humiliation from wrong statements, embellishments, and hallucinations."

      The point isn't that humans will be completely tossed out of the writing process, just that *many* humans will be rendered obsolete/redundant. Rather than a copywriting pool of ten humans and two editors, maybe you have a pool of two writers and three editors plus an LLM subscription.

    2. Jasper_in_Boston

      They are unable to write fresh content about new events, discoveries, or concepts that don't already exist somewhere in their training data.

      ChatGPT can't access information from 2022 and 2023 (other LLMs may work with different parameters?). But surely it's only a matter of time before all such models have been improved to the point where they can search for fully current information and news.

    1. KawSunflower

      Hey, maybe Kevin Drum is already planning to retire from this non-paying* gig just as soon as his successful treatment is complete - no more pushback or disses from any of us!

      * He doesn't exact a fee, but this costs him something.

  11. Narsham

    The extrapolation here seems unwarranted. On the literary side of things:
    1. Fanfiction and other free writing might be replaced by AI at some stage, but why? Writers not being paid are writing for fun or for some other reason, and it would be more expensive for them to use an AI than to just write. Those seeking free writing won't save money by going to AI instead.

    2. Strictly literary fiction (not popular fiction), poetry, etc. isn't written to make money, either. Over time, using software to assist in writing may become accepted (just as nobody must write longhand to win an award), but I doubt Iowa will close down its MFA program because AI can write a poem.

    3. Mass market/popular writing will only see authors replaced by AI if the AI can write stuff that outsells human authors. That seems highly unlikely. Popular authors have distinctive voices. AI might be able to mimic those authors, but until they can make decisions based upon life experience they have no capacity to gather, that's the best they can do. And if someone develops a great Mark Twain AI that writes new Twain stories, that won't somehow corner the satire market. New writers will keep selling; I doubt AI can compete any time soon, and even once it can, why would I purchase a book written by an AI and published by a press if I can just ask the AI to tell me a story for less money? Either the publishers will have to prove value added, or they'll have curated AI authors, or they'll stick with humans who are not really that expensive until they hit it big.

    Yes, people paid to write social media copy, which tends toward the repetitive and isn't ever going to win an award, are in trouble. And given the lack of interest of many news organizations in actual reporting, reporters may be as well. But unless or until an AI writer can actually tell the difference between good and bad writing, I don't expect them to compete except in gimmicky ways.

    Again, we may see authors using AI assistants, especially for things like continuity, and we may see special cases. But if AI hasn't come close to replacing music composers, it's much further away from replacing authors.

    1. bouncing_b

      Agree, but in fact authors (even hugely popular ones) don't get a large fraction of the cost of a book. Those other costs will still be there, so you're right that literary authors are probably OK for the moment.

      But for copywriting (topic of this post) that may be much less true.

  12. jeffreycmcmahon

    Calling social media writing and corporate copywriting "non-repetitive" is inaccurate. It's about as repetitive as writing can be, intended to be bland, and as uncreative as possible. This writing is easy for computers to duplicate because it's already tedious and anodyne. On the other hand, "Then it will be the very best writers. Even poets and novelists will eventually be replaced" is just nonsense that I don't think Mr. Drum actually believes.

  13. D_Ohrk_E1

    Those that write marketing and social media content are in the first wave of people being replaced with tools like chatbots, which are seemingly able to produce plausible alternatives to their work.

    I can confirm this. However, it's not so simple as just dropping in a few basic prompts. I know someone whose entire department disappeared, and now he's the one doing all the marketing and social media. His GPT-4 prompts are paragraphs long. As you might read on the OpenAI/GPT Discord server, he too has basically written a series of variables into his prompts that can be adjusted with new prompts (a rough sketch of that kind of templated prompt follows below).

    It's safe to say that most of the big brand social media is actually being produced by GPT or some other LLM.
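    For anyone curious what "a series of variables written into the prompt" might look like in practice, here is a minimal, hypothetical sketch. The template text, the variable names, and the call_llm() helper are invented for illustration and are not taken from the workflow described above; the sketch only shows the general idea of keeping long instructions fixed while swapping a few fields per post.

    ```python
    # Hypothetical sketch of a prompt template with adjustable variables.
    # The long instructions stay fixed; only a few fields change per post.

    PROMPT_TEMPLATE = """You are the social media voice of {brand}.
    Write a {platform} post announcing {topic}.
    Tone: {tone}. Keep it under {max_words} words and end with {hashtag}."""


    def build_prompt(**variables) -> str:
        """Fill the fixed template with the per-post variables."""
        return PROMPT_TEMPLATE.format(**variables)


    def call_llm(prompt: str) -> str:
        """Placeholder for whatever chat-completion API is actually in use."""
        raise NotImplementedError("wire this up to your LLM provider")


    if __name__ == "__main__":
        prompt = build_prompt(
            brand="Acme Coffee",
            platform="Instagram",
            topic="our new oat-milk cold brew",
            tone="upbeat but not salesy",
            max_words=60,
            hashtag="#AcmeCoffee",
        )
        print(prompt)                # inspect the assembled prompt
        # print(call_llm(prompt))    # uncomment once call_llm() is implemented
    ```

    Swapping the brand, topic, or tone then only means changing the keyword arguments, which is roughly what "variables that can be adjusted with new prompts" amounts to.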

  14. Kalimac

    But don't AI writing bots generate their material from existing copy? Essentially they're rewriting or plagiarizing what human writers have already written. I can see their taking over stuff completely detached from any time-bound quality, but how are they going to write about new things - changing political situations, guidebooks reflecting changes in the places being visited - without new human-written material to be trained on?

  15. illilillili

    You still need people to run ChatGPT and craft the query for it, plus proofreading and light editing of the output.

    Here's an anecdote. My 83-year-old mother gets gigs translating from German to English. She has gathered a few pieces of software on her computer that do a lot of the heavy lifting. It's not that she brings mad skills to the table with in-depth knowledge of German and English; it's that she knows how to drive a process that gets a good translation.

    Lawyers aren't going to start pushing buttons on their computers to draft boilerplate documents; they are still going to have their assistants push the buttons. The assistants will just be more productive.

  16. kenalovell

    Where did all the farriers and coach drivers and stable hands etc go when horse-drawn transport was phased out? It's a puzzlement!

    It's not novel for entire occupations to be wiped out by change - I had to change careers 30 years ago because my then-profession simply ceased to exist. People learn new skills and take up alternative occupations. And there are hopeful signs that this time around, some of the benefits of technological change will go to workers in the form of increased leisure, resuming the historical trend that stalled in the 1970s.

  17. pjcamp1905

    " Even poets and novelists will eventually be replaced."

    Bullshit, not with ChatGPT. You have to know and understand the human experience to make anything other than a pastiche, and super autocomplete simply doesn't. There was a big article in Quanta a week ago about how LLMs cannot handle negation. The meaning of stop words is hard to infer from surrounding words. But if you can't understand "Charles is not a llama," you're not writing poetry.

    I put this in the same bin as your claim that ChatGPT will be doing PhD level research papers in 10 years. That was bullshit too. LLMs literally do not know anything except "these words often occur together." That's simply autocomplete on steroids and you cannot build intelligence on such a foundation.

    ChatGPT is not the way to the dystopia you fear.

  18. sonofthereturnofaptidude

    I am about to retire from f/t classroom teaching high school social studies. After watching what cell phone technology has done to the behavior of high school students, I dread what ChatGPT et al will accomplish. The question of who AI will throw out of work is interesting. I'm planning to work as a private tutor, partly online, partly in person. If students can use AI to get an honors grade in AP psychology, my tutoring skills and deep content knowledge in the area might prove superfluous. I do have difficulty believing that current technology can write a decent DBQ response in APUSH.

    What I'm good at would be hard for a machine to do well, however learned it might be. But AI could shake up the current meritocracy enough that my skills might be unneeded. That might be a good thing. The current meritocracy is not all that fair, nor is it very good at creating elites that lead society very well. But no one really knows. That's why well-informed people who are paying attention are getting pretty anxious.
