
Writing is over! (Someday)

Jane Rosenzweig writes in the LA Times today about the perils of AI:

Soon after ChatGPT was released, an artificial intelligence researcher from one of the big tech companies told me that I shouldn’t worry about how the technology would affect how students learn to write. In two years, she assured me, only aspiring professional writers would enroll in writing classes; no one else would need to write anymore.

I've got good news and bad news. The good news is that two years is a laughably brief timeline. The researcher in this anecdote is generally right, but it will take a good deal longer than that.

The bad news is that it's not just writing. In a decade or two, when AI and robotics become really good, there will be little motivation for humans to learn anything. Why bother when AI is everywhere and can be better than you at literally everything? I can see elementary school surviving so that we humans know the basics, which are handy to have quickly at hand, but five grueling years to get a PhD when any research you might do can be had in a few hours or days just by asking your friendly local AI? I don't doubt that there are some people so obsessed they'll do it anyway—with an AI as their advisor—but it's going to be a pretty small number.

Someday. That's really the only question. It's going to happen; we just don't know precisely when.

71 thoughts on “Writing is over! (Someday)”

  1. antiscience

    Kevin, you're predicting Artificial General Intelligence (enough to do original experimental research in hard science) in "a decade or two"? Really? Wow, I wouldn't be so sure of that.

    1. kahner

      me neither. but then again, i wouldn't have predicted the current state of large language models just a couple of years ago.

      1. Crissa

        They were getting pretty good, honestly. Sneaky, since they were only being used for bad customer service and trolling, but...

        1. kahner

          i'm sure they were, but i wasn't aware of the progress, and i'm assuming there's ongoing progress both in LLMs and other types of AI that i'm also unaware of. so i'm not as bullish on it as kevin, but i've updated my priors and won't be surprised by major near-term improvements.

  2. memyselfandi

    This is nonsense. Getting ChatGPT to stop lying and making up facts and quotes will require a complete paradigm shift in how the algorithm operates. People are impressed that it can pass the bar exam. But it will get the exact same result whether the framework is the bar exam or the actual framework in which lawyers work, i.e., open-ended time with access to reference databases. In that framework, lawyers are expected to be perfect, and ChatGPT's results are not even remotely close to acceptable.

    1. geordie

      I don't agree that Kevin's point is nonsense, but I do agree that the ability to integrate "research" will be important. ChatGPT as it exists now is an incredibly well-read generalist, which causes people to both under- and overestimate its capabilities. Toddlers are fabulists too, but as they learn more about the real world that declines. Later, as people specialize and learn more about a specific domain, their answers within that area become more and more accurate, and their ability to determine which new ideas are illusions and which are breakthroughs becomes stronger. I see no reason why that would not also be the case for LLMs. The idea vectors just need a bit of retroactive coercion towards topic-specific data sources. I don't see that as a paradigm shift but as an additional layer. The ability to create coherent, grammatical, internally consistent prose is not enough to be intelligent, but it was likely one of the harder problems that had to be solved on the road to general AI.

      1. realrobmac

        How do you imagine something like ChatGPT is going to do original PhD level research? At this point ChatGPT is fed by the work product of people. It's not doing original medical or astronomical research. You really think that is going to change?

        1. DButch

          I think it's worse than that. How does ChatGPT recognize well-verified and backed EXISTING research? It's very good at analyzing, say, a huge pile of text data and constructing a massive map that lets it predict what word should follow from the last N words, but how does it detect when the map goes "off"? How can it tell that a legal cite is made up? How do you prevent it from making up a legal cite? (A toy version of that map is sketched below.)

          I read about the recent case where a lawyer relied on it to do up his cites - and it was all faked. He's in a bit of trouble now - judges don't like that shit.
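
          That "massive map" can be made concrete with a toy bigram model. A minimal Python sketch, illustrative only (a real LLM learns a neural representation, not a literal lookup table; all names here are made up for the example):

          import random
          from collections import defaultdict, Counter

          def build_map(tokens, n=2):
              # Map each n-word context to counts of the words that followed it.
              following = defaultdict(Counter)
              for i in range(len(tokens) - n):
                  following[tuple(tokens[i:i + n])][tokens[i + n]] += 1
              return following

          def predict_next(following, context):
              # Sample the next word in proportion to how often it followed this context.
              counts = following.get(tuple(context))
              if counts is None:
                  return None  # the map has gone "off": this context was never seen
              words, weights = zip(*counts.items())
              return random.choices(words, weights=weights)[0]

          corpus = "the court held that the statute did not apply".split()
          model = build_map(corpus)
          print(predict_next(model, ["the", "court"]))   # 'held'
          print(predict_next(model, ["cited", "case"]))  # None: nothing to ground a cite

          The toy at least returns None when it has no data; a generative model interpolates something plausible-sounding instead, which is exactly how made-up cites happen.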

          1. MrPug

            ChatGPT isn't doing any analysis. It is doing pattern recognition, and then the inverse of that, pattern generation, based on no analysis at all.

            Given that the source of the training material is content on the internet, I keep asking what happens when more and more of that content was itself generated by ChatGPT-like LLMs. I think it will likely degenerate into gibberish.

    2. Salamander

      It's not that big a deal for a lawyer to ... stretch the facts, or make up stuff. It's almost part of the profession. But what about chatbot-designed bridges?

      1. aldoushickman

        "It's not that big a deal for a lawyer to ... stretch the facts, or make up stuff."

        I dunno, it's a pretty big deal for a lawyer to cite nonexistent laws, or base arguments on fictional cases. You get disbarred for that sort of thing. But that's what ChatGPT does.

    3. kahner

      looking through the comments, it's amazing to me how quickly the insane technological leap of chatGPT and other LLMs has been normalized and dismissed. just a few months after release people are already yawning and predicting "well, sure it can do that but it will NEVER be able to do XYZ". um, ok. but i wouldn't bet my life on it.

    4. rrhersh

      I wonder if ChatGPT's facility with bar exams is not because its training data included a lot about how to pass a bar exam. This is in much the same way that it can do a pretty good job rehashing an undergrad term paper assignment that has been given a gazillion times before. Try asking it something more obscure and it rapidly devolves into handwaving of the sort usually associated with the undergrad starting work on the paper the night before it is due.

      1. ScentOfViolets

        As I've said before, that ChatGPT is able to pass a bar exam is not evidence of its intelligence; it's evidence of how easily the bar exam (and others like it) can be gamed with the right training, i.e., prepping of the sort available only to the well-off.

    5. Crissa

      No, it doesn't, it just requires a fact-checking layer. If it can dissect written input - and it can - it can check for facts.

      1. DButch

        What are these "facts" you talk about, Crissa? /s

        How do you implement a fact-checking layer? Right off the bat, I think you need a structured and very well-curated body of fact, organized in a way that ChatGPT or other LLM implementations can use as a trusted artificial expert/consciousness to override a line of inquiry that's going wrong (a minimal sketch follows this comment).

        Of course, that gets us into the question of what "going wrong" means. Hmm - and what does an artificial expert/consciousness look like? And how do you get it to interface correctly with the LLM?

        And I think I just got into an area of post-doctoral research that I won't survive to complete...
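
        A minimal sketch of such a layer, assuming a hand-curated store and hypothetical names (CURATED_FACTS, check_claim are invented for the example); the hard part, building and maintaining the store, is exactly the research problem named above:

        CURATED_FACTS = {
            ("Roe v. Wade", "year decided"): "1973",
            ("Brown v. Board of Education", "year decided"): "1954",
        }

        def check_claim(subject, attribute, claimed_value):
            # Return a verdict plus the curated evidence, if any.
            known = CURATED_FACTS.get((subject, attribute))
            if known is None:
                return "unknown", None       # gap in curation: the layer can't help
            if known == claimed_value:
                return "supported", known
            return "contradicted", known     # here is where you'd override the LLM

        print(check_claim("Roe v. Wade", "year decided", "1973"))      # supported
        print(check_claim("Made-up v. Case", "year decided", "2019"))  # unknown

        The "unknown" branch is where the objections in the replies below bite: whoever fills and updates the store decides what counts as a fact.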

        1. KenSchulz

          ‘Well-curated’ by whom, or by what? If it is by humans, you have merely re-invented an ‘expert system’, a technology which is decades old and has not evolved into AGI. Implementing an artificial curator is an entirely different problem from building an LLM.

        2. Raven

          > “I think a structured and very well curated body of fact is needed”

          Lovely, as long as: (1) it includes *all* the facts that might possibly [need to] be cited; and (2) the actual real-world facts haven’t *changed* since the last curation.

          Now who’s going to fill-in/update that on a timely, complete, and accurate — or just reliably truthful — basis, given (for instance) Wikipedia’s slow and unsteady record even with the partial coverage it supplies, such that teachers won’t allow students to cite it? What if our editors turn out to include COVID-deniers, anti-vaxxers, bigots, or devotees of other delusions?

          I’m reminded of the early pages of Charles Fort’s ‘The Book of the Damned’, arguing that science is inherently incomplete, not knowing everything, yet with a history of denying whatever it did not know. Past ‘damned things’ included meteorites and the movement of continents. How could a ChatGPT have researched or reported those, when that time’s ‘curated body of fact’ excluded them? How could it research or report any new thing now, if our time’s ‘curated body of fact’ excludes it? Or includes errors?

          Ask ChatGPT now, ‘Who is Prigozhin?’, or ‘Has Donald Trump been indicted?’, and see how up-to-date its ‘body of fact’ is. To me it said September 2021. Try writing an assigned brief bio on either man without mentioning events since then.

          1. Raven

            Likewise ask ChatGPT:
            • Who is Ketanji Brown Jackson?
            • Who is Shinzo Abe?
            • Name Twitter’s CEO.
            • Name the monarch of the United Kingdom.
            • Has Ernest Shackleton’s ship Endurance been found?

            All out of date, with ‘knowledge base’ as of September 2021.

  3. kenalovell

    Kevin doesn't appear to understand what a PhD is. Typically, it "is awarded for a thesis (or a series of published papers), drafted under supervision, which makes an original, significant, and extensive contribution to knowledge and understanding in a field of study." By definition, a research higher degree generates knowledge which did not exist prior to the research project; it would therefore be impossible for any AI program to write a thesis based solely on existing information.

    For my professional doctorate, I had to conduct numerous lengthy interviews, design and carry out a questionnaire survey of hundreds of employees, collect enough corporate documents to fill one of Donald Trump's boxes, and analyse the lot to answer the research question I was investigating. I'm unable to envisage how any AI software could do that.

    On the other hand I can imagine how it could (a) review the available information about a topic of interest, (b) suggest useful research questions about aspects of the topic which remain unknown, and (c) propose the most appropriate research design. For example it could provide an overview of the known advantages and disadvantages of gender reassignment surgery, suggest fruitful avenues to improve our (or its!) understanding, and design projects to gather that information. But I can't see how it could actually carry out the research without a massive change in the relationship we have with technology.

    1. museumatt

      I was more struck by his assertion that it only takes 5 years to get a PhD. What's the average these days? 6-12? Mine was 9 and that was quicker than average in History.

    2. Joel

      My PhD was in the biological sciences (genetics). I've mentored or served on the dissertation committees of over 40 PhD students. I'm confident that ChatGPT could write a credible literature survey to serve as the introduction of a dissertation. But that literature forms the foundation of a bench research project that ChatGPT, not being an actual robot, can't perform or interpret. I guess some PhD students could skate by on their mentor's guidance, but most will have had to read that literature anyway to design their project and interpret their data.

      1. rrhersh

        I am currently writing a paper on an obscure point of church history, arguing that a certain guy who is well known (in the right circles) to be important in a different part of church history is important in this part as well. For grins and giggles I asked ChatGPT to take a stab at it. It produced a serviceable partial bio of the guy, using information that is widely available. Then, faced with establishing the thesis of the paper, it rapidly devolved into handwaving. This seems about right.

      1. cld

        But -- I did!

        It was the story posted just below that finally went after I put in the comment complaining about it.

        Coincidence??

  4. jdubs

    This has always been the case. We are forever in the midst of societal breakdown and people losing their drive or initiative due to whatever new thing has come into being. The thing is always here right now, signs of the coming collapse are present, but the real breakdown is always a few decades into the future...

  5. cld

    A couple hours ago, by a lucky chance, I tried out the online version of ChatGPT.

    I asked it,

    Write a one page story about the life of yesterday's Mouseketeer now bad girl Jennifer Souffle.

    Once upon a time, in the enchanting world of showbiz, there was a bright and promising young talent named Jennifer Souffle. As a Mouseketeer, she dazzled audiences with her infectious smile and boundless energy, capturing the hearts of fans worldwide. But little did anyone know that behind the scenes, Jennifer's life was about to take an unexpected turn.

    Born into a family of performers, Jennifer's destiny seemed written in the stars. From a young age, she possessed an undeniable magnetism and talent that drew people towards her. With her charismatic personality and undeniable charm, she quickly rose through the ranks and secured a coveted spot as a beloved Mouseketeer.

    During her time on the famous children's show, Jennifer thrived in the limelight. Her graceful dance moves and flawless singing voice made her an instant favorite among fans of all ages. Her dreamy blue eyes and golden locks were as enchanting as her sparkling performances.

    However, as Jennifer grew older, the pressure of fame began to weigh heavily on her shoulders. The transition from child star to young adult proved challenging, and Jennifer struggled to find her place in the ever-changing entertainment industry. The spotlight that once bathed her in adoration now cast a harsh and unforgiving glare.

    Seeking to break free from her squeaky-clean image, Jennifer embarked on a rebellious journey. She shed her innocent persona like a worn-out costume and embraced a darker, edgier side. With her newfound identity, she became the "bad girl" of Hollywood, stirring up controversy and leaving her Mouseketeer days far behind.

    Jennifer Souffle's transformation captured the attention of tabloids and gossip columns worldwide. Her rebellious antics and wild parties became the stuff of legends, fueling speculation and curiosity among her devoted fans. Paparazzi relentlessly pursued her, capturing every scandalous moment and turning her life into a never-ending spectacle.

    Behind the headlines, however, Jennifer yearned for something deeper. The fame and glamour had lost their luster, leaving her feeling empty and lost. She longed to rediscover the pure joy she had felt during her Mouseketeer days, before the pressures of fame had tainted her spirit.

    One fateful day, Jennifer stumbled upon an old video of herself as a Mouseketeer, filled with innocence and hope. Tears streamed down her face as she watched her younger self sing and dance with unbridled joy. In that moment, she realized that she had lost sight of who she truly was.

    With a newfound determination, Jennifer made a bold decision. She would reclaim her identity, not as a "bad girl" but as a woman embracing her authentic self. She embarked on a journey of self-discovery, seeking out mentors who would guide her towards a path of personal growth and fulfillment.

    Slowly but steadily, Jennifer reemerged into the public eye, not as a controversial figure, but as an inspiration. She used her platform to advocate for mental health awareness and promote positive self-image among young people struggling with fame and societal pressures. Through her honesty and vulnerability, she became a beacon of hope for those who had lost their way.

    And so, the tale of Jennifer Souffle, the Mouseketeer turned bad girl turned advocate, serves as a reminder that even in the face of adversity, it is never too late to rewrite our stories, reclaim our true selves, and inspire others along the way.

    This would have made a great movie of the week. Exactly the kind of thing they're looking for.

    1. Salamander

      Yeah, on the Hallmark channel. Maybe you can shop it to Disney ... although it's a little too saccharine and, let's face it, clichéd even for The Mouse these days.

    2. ScentOfViolets

      Jennifer Soufflé was a talented skater, destined to compete at the highest level in the Olympics. But then, tragedy struck ...

  6. cld

    from,

    ChemCrow: Augmenting large-language models with chemistry tools,

    https://arxiv.org/abs/2304.05376

    Over the last decades, excellent computational chemistry tools have been developed. Their full potential has not yet been reached as most are challenging to learn and exist in isolation. Recently, large-language models (LLMs) have shown strong performance in tasks across domains, but struggle with chemistry-related problems. Moreover, these models lack access to external knowledge sources, limiting their usefulness in scientific applications. In this study, we introduce ChemCrow, an LLM chemistry agent designed to accomplish tasks across organic synthesis, drug discovery, and materials design. By integrating 18 expert-designed tools, ChemCrow augments the LLM performance in chemistry, and new capabilities emerge. Our agent autonomously planned and executed the syntheses of an insect repellent, three organocatalysts, and guided the discovery of a novel chromophore. Our evaluation, including both LLM and expert assessments, demonstrates ChemCrow's effectiveness in automating a diverse set of chemical tasks. Surprisingly, we find that GPT-4 as an evaluator cannot distinguish between clearly wrong GPT-4 completions and ChemCrow's performance. There is a significant risk of misuse of tools like ChemCrow, and we discuss their potential harms. Employed responsibly, our work not only aids expert chemists and lowers barriers for non-experts, but also fosters scientific advancement by bridging the gap between experimental and computational chemistry. Publicly available code can be found at this https URL
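
    The agent pattern the abstract describes, an LLM that plans a step, picks an expert-designed tool, runs it, and folds the result back into its context, reduces to a short loop. A hypothetical Python sketch; the tool, the ACTION/FINAL protocol, and the scripted model are all stand-ins for illustration, not ChemCrow's actual code:

    def lookup_boiling_point(compound):
        # Stand-in for one of the expert-designed tools.
        return {"DEET": "about 290 C"}.get(compound, "not found")

    TOOLS = {"boiling_point": lookup_boiling_point}

    def agent(llm, task, max_steps=5):
        transcript = f"Task: {task}"
        for _ in range(max_steps):
            reply = llm(transcript)               # the model plans the next step
            if reply.startswith("FINAL:"):
                return reply[len("FINAL:"):].strip()
            if reply.startswith("ACTION:"):       # e.g. "ACTION: boiling_point DEET"
                _, name, arg = reply.split(maxsplit=2)
                result = TOOLS[name](arg)         # run the tool outside the model
                transcript += f"\n{reply}\nRESULT: {result}"
        return "gave up"

    def scripted_llm(transcript):
        # Stand-in for a real chat model: scripted replies just for the demo.
        if "RESULT:" in transcript:
            return "FINAL: DEET boils at about 290 C."
        return "ACTION: boiling_point DEET"

    print(agent(scripted_llm, "What is the boiling point of DEET?"))

    The "new capabilities emerge" claim comes from exactly this loop: the model never computes the chemistry itself, it orchestrates tools that do.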

  7. Leo1008

    I assume this is meant as some humorous trolling to generate discussion:

    “In a decade or two, when AI and robotics become really good, there will be little motivation for humans to learn anything.”

    In decades, and maybe even in centuries, our motivations will likely be just as complex as they are now. And we’ll continue learning because it’s an integral part of being alive.

    Even if we don’t need to learn something for a job, we will still keep learning for the meaning and purpose that learning provides in our lives.

    Why, after all, is Kevin still writing this blog even after it’s no longer part of his job @ Mother Jones? And I bet he’d still keep writing it himself even if he could ask an AI to write it for him.

    1. Bardi

      Excellent observation. Who determines "new facts", and will that change the POV presented in an AI's output?
      AIs can be used as a way to manage power. I am not saying that is how they will be used, just that it seems to be another tool for the donnies of the world to acquire power.

  8. frankwilhoit

    The question is not, what can AI do at its best? The question is, how do we retain and sharpen our ability to tell when it is at its plausible worst?

  9. Jasper_in_Boston

    when AI and robotics become really good, there will be little motivation for humans to learn anything.

    The study of foreign languages will atrophy massively. Why bother to spend years slaving over Mandarin or Arabic lessons when a real-time AI translator does it for you? Looks to my eyes like 70-80% of the necessary technology is already there. We just need to fill the gaps, perfect and polish a few rough edges, and integrate it all into an easily usable package.
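
    Most of that package is plumbing: chain speech recognition, machine translation, and speech synthesis. A toy Python sketch where every stage is a stand-in (the phrasebook and function names are invented; no real library's API is assumed):

    PHRASEBOOK = {("ni hao", "zh", "en"): "hello"}

    def transcribe(audio):
        return audio.decode()                 # stand-in for a speech-to-text model

    def translate(text, src, dst):
        return PHRASEBOOK.get((text, src, dst), f"[untranslated: {text}]")

    def synthesize(text):
        return text.encode()                  # stand-in for a text-to-speech model

    def interpret(audio, src="zh", dst="en"):
        # The whole "real-time translator": recognize, translate, speak.
        return synthesize(translate(transcribe(audio), src, dst))

    print(interpret("ni hao".encode()))       # b'hello'

    The individual stages already exist as real models; the comment's point is that the remaining work is mostly this kind of integration and polish.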

    1. Salamander

      It's roughly analogous to how writing -- you know, using a pencil or pen to make marks that look like letters? -- is no longer being taught because keyboards.

    2. kaleberg

      It depends on what you are translating. AI can handle a lot of stuff, but it's trickier if you are translating literature or poetry. It's a tough job turning a great work in one language and culture into a great work in another language and culture. Legal translation has similar problems. Just moving from the UK to the US requires understanding the precise legal use of "which" versus "that" and "will" versus "shall".

      Automatic language translation is a lot better than it was 20 years ago when it was laughably bad. It's actually useful now, and even better now that text recognition is better than wretched. If it is going to do professional grade work, it will need to be extensively trained in the target domain. If it's a legal document, I'd still recommend hiring a lawyer familiar with the language and law involved.

  10. Doctor Jay

    Well, let's consider the game of chess. Computers (known as "engines") are decidedly better than humans at chess these days. Has that meant most humans have stopped learning and playing chess? Not at all.

    Instead, those humans are using the engine to learn, to get better. There are "engine lines" of which it is avowed that "no human would play that," and there are also certain situations where the engine, for reasons that are mysterious to me, can't quite figure out that a move is good until after it has been played.

    Mind you, the combination of engines and online play has shifted chess toward speed chess, which makes it much harder to cheat by consulting a computer between moves.

    1. kaleberg

      When symbolic math programs started getting good in the late 1960s, people were predicting that no one would need to learn algebra or calculus. Nowadays, just about every serious professional math user has a subscription to Mathematica or some similar system, usually through an enterprise license. Despite this, they are still working with mathematics and still needed, at some point, to learn algebra and calculus.

      Why? The short answer is that if you are going to train a dog, you have to know more than the dog.

    2. aldoushickman

      "Computers . . . are decidedly better than humans at chess . . . Has that meant most humans have stopped learning and playing chess? Not at all."

      That is a very, very inapt analogy. Chess is a pastime--a *game* for chrissakes--something people do for *fun*.

      I suppose you could invent an AI that is better at taking a nap, eating a good meal, or enjoying a walk in the sunshine than I am, but that's not going to cause me to give up and stop doing those things.

      1. rrhersh

        The analogy is in response to "In a decade or two, when AI and robotics become really good, there will be little motivation for humans to learn anything." Should we take from your rejection of the analogy that you don't enjoy learning stuff? That is very sad.

        1. aldoushickman

          "Should we take from your rejection of the analogy that you don't enjoy learning stuff?"

          WTF would you think that? I never agreed with Kevin; I just think that saying "well, computers are better than humans at playing a fun game like chess, yet we still play chess!" is a foolish way of thinking about what happens if computers get better than humans at *work*.

          Unless you think that 100% of the motivation that every person has for learning things is the pure intellectual joy of it and in no part because it's useful for getting a job, then yeah, the fantastical strong AI Kevin imagines will probably affect human motivations for learning information.

        2. Doctor Jay

          Thanks, I couldn't have said it better. I'll make a bolder assertion now: Most of the people writing comments on this blog like writing and are interested in getting better at it because it's fun and they like it.

          I think we will find a place for the LLMs in the world, yes. I don't think humans will stop expressing themselves. Not in a million years. In fact, even now, humans are using AI to learn to be better at expressing themselves. I am constantly inundated by ads for Grammarly, which is pretty much AI applied to writing.

      2. KenSchulz

        See Leo1008 above, commenting on KD’s statement that “there will be little motivation for humans to learn anything.” If learning new things isn’t something you do simply for the enjoyment of it, I think you are unlike most of the commenters here. And in fact, unlike Kevin, though he seems to have lacked self-awareness in that moment.

      3. Narsham

        And nobody writes for fun?

        Magnus Carlsen was making over $1 million a year playing chess; most top grandmasters make half that, but still considerably more than I make in a year. Even if most people play chess for fun, there seems to be enough entertainment value in it that people can make big money playing it as a sport.

    3. Narsham

      Right now, most of the big claims about ChatGPT and similar "AI" come from people who stand to benefit personally from people believing it is fantastic. It's like asking tobacco executives whether cigarettes were safe or polluters whether lead is harmless: all their incentives work against their telling the truth.

      There was a clear point beyond which chess engines were producing analysis or new approaches to positions and those who study and teach chess found value in what they were producing and started using engines and engine lines in training people to play.

      Can anyone interested in the current brands of "AI" point to artists or writers, people who train poets or journalists, and show how they are reacting in similar ways to what happened when chess engines shifted from being able to beat Kasparov some of the time to being rated far higher than any chess master?

  11. realrobmac

    "but five grueling years to get a PhD when any research you might do can be had in a few hours or days just by asking your friendly local AI"

    This is the dumbest piece of AI hype I have ever read.

    1. Pittsburgh Mike

      Right now, LLM systems don't really have a model of the world. What they have is a model of language (hey, it's right in the name!) and enough smarts to grammatically generate sentences from the various texts that match a search.

      It's hard to see how this will replace anyone doing anything really creative or interesting. I wouldn't even trust it to write a will, something that's pretty cut and dried, since the reason you hire a lawyer is that they know how to combine what you want with case law where someone else was stymied in the past doing the same or similar things.

      Of course, with wills, you're dead, so it's only your kids who have to deal with the effects of an incompetent AI 🙂

      1. kaleberg

        My lawyer's parents built a concrete "mausoleum" to hold their ashes on their property in hopes of "keeping it in the family". He and his sister sold it, subject to removing that monstrosity. They had a great bonding experience involving rented jackhammers. Let's see an LLM do that.

    2. KenSchulz

      Agree. Kevin persistently ignores the difference between computation-limited and data-limited problems*. Gathering and analyzing data to shed light on significant problems is painstaking, time- and resource-consuming work, often requiring innovation in measurement and/or analysis.
      *An overly simplistic distinction, but you get the idea.

  12. different_name

    I know Kevin has seen naive extrapolations lead to absurd predictions before. The question is whether he is being trolly or actually has a blind spot for "AI".

    I think in this case it is a little of both. For instance he knows full well this is nonsense:

    I can see elementary school surviving so that we humans know the basics

    Come on. Unless AI puts everyone (except teachers?) out of work, school-as-babysitter is way too important to the economy for this to happen. We'd literally lock them in a dirt pen before allowing Mommy's missing a shift to affect the store.

    1. rrhersh

      He has a well-established blind spot for shiny tech in general. He was all in for fully self-driving cars being just around the corner. What we actually have is almost-but-not-quite fully self-driving cars in carefully geofenced regions that have been, and continue to be, expensively mapped out in glorious detail. This is great for people who never leave rich white neighborhoods, but it is not what we were pitched.

  13. Larry Jones

    The march to AI-replacement graduate degrees will be slowed somewhat when the companies that control the technology figure out that a real PhD costs ~$100K*, and apply a comparable tariff to an AI-assisted diploma. These days, you have to be smart, ambitious and wealthy to acquire an advanced degree. In the AI future, you'll just have to have the money. I mean, there wouldn't be "scholarships" to use AI, would there?

    * Just a guess. I ain't no PhD.

  14. cephalopod

    We better get AI working on new dementia drugs! With nothing for human brains to do, they'll atrophy that much faster. Or maybe we won't care if we spend a couple of decades staring vacantly into space - as long as the robots remember to change our diapers and shove food into our faces.

  15. kaleberg

    I think the (someday) says most of it.

    I remember when they predicted the end of writing when we got telephones, voice mail and cheap recorders. Instead we got texting, email and blogging.

  16. skeptonomist

    Have chatbots shown the ability to predict the future? This is a pretty hard thing to do and I doubt if machines will be able to do it for a very long time. Until then there will be jobs for humans who can predict the future, like Kevin.

  17. name99

    Science (and other) PhDs used to be all about "people so obsessed they'll do it anyway" until somewhere around the '60s, I guess, when they became some weird hybrid of social one-upmanship and job guarantee.

    I suspect it will be better for those who actually care about knowledge and its production that we go back to that pre-60s situation. So I'm not sure exactly what the problem here is.

    (But yeah, imagining that *CURRENT* LLMs are capable of genuine useful knowledge synthesis, let alone production?!?
    That tells you everything you need to know about our current credentialed class, and why losing them will be a substantial net plus.)

  18. Narsham

    Couldn't you have claimed the same thing about libraries, or the Internet? I can just ask a librarian and they will conduct the research and I don't need to know anything. I can just Google a question and the knowledge will be right there. Except that the reverse happened and it became more and more important, as more information became available, to have sufficient knowledge to error-check. Wikipedia is, I firmly believe, more reliable than old-school encyclopedias, but it can be subject to error and being able to identify when someone is giving you bad information becomes more and more crucial as more information becomes available to you.

    To pick a personal example: Kevin, you aren't trained as a doctor. You didn't attend medical school. You (or your insurer) are paying experts who spent decades studying to make decisions about your care. And yet, you not only conducted your own research into treatments for your specific brand of cancer, you ended up arguing with your doctor over treatment. Without doing that, you would not now be undergoing the treatment you're receiving. But the doctor knew far more than you.

    If instead of a doctor, it had been a medical AI, would you have opted out of doing the research yourself into treatment? Would you have opted out of arguing with the AI about the treatment you were receiving? No AI has been designed that is free from all error, and I don't expect that will ever be possible. And that assumes that this "final" AI will never be subject to undue influence, hacking, bias, or modification, and that AI will somehow be able to communicate with and convince human beings without in any way being influenced by humanity, which is itself a huge source of error, bias, and emotional denial.

    So far as I can see, current AI is tailored to fill customer service roles, handle complaints, make appointments, and perform writing tasks for executives who once relied on their secretaries to transform their barely literate scrawlings into something intelligible. There's no evidence yet that the LLM is anything more than an early step toward an actual artificial intelligence, perhaps a segment of such an AI's "mind" that would permit it to communicate with us.

  19. jeffreycmcmahon

    This just sounds like nonsense. Right now AI is very advanced autocomplete, and autocomplete can't actually _do things_.

  20. painedumonde

    It occurred to me that the hallucinations, the manufacturing of references, the lying and hand-waving are emblematic of a child... is it in the process of learning to be human, not striving for intelligence?

    Anyway, once Capital sees the revenue stream of higher education under threat, it will be protected.
