
Fake news is now an AI specialty

The Washington Post reports that AI has turned fake news into an industrial business:

Since May, websites hosting AI-created false articles have increased by more than 1,000 percent, ballooning from 49 sites to more than 600, according to NewsGuard, an organization that tracks misinformation.

I would take this with a grain of salt since NewsGuard is in the business of warning companies about fake news and protecting them from it. Still, even if there's a bit of exaggeration here the numbers are pretty startling—and fully expected. AI can automate a lot of things, and fake news propaganda is an obvious target.

One thing the article doesn't say is how extensive the readership is for these sites. However, if AI can create them, I imagine it can be trained to do SEO optimization too. Welcome to our brave new world.

15 thoughts on “Fake news is now an AI specialty”

  1. bouncing_b

Re “if AI can create them, I imagine it can be trained to do SEO optimization too”.

    That’s what I would have thought too, but apparently it’s not so easy.
    I assumed that the LLM developers would have the strongest interest in detecting and weeding out AI-generated text, so their own product would truly represent human language. (That was true of their training data, all of it pre-LLM; what would be the result of LLMs trained on an ever-larger fraction of output from other LLMs?)
    And they would also have the expertise to do it.
    But lots of work (e.g. on detecting AI-written student papers) suggests this isn’t really possible. At the least, it produces too many false positives to be useful or reliable.

    1. geordie

      The problem is that LLMs were trained on what already existed, so text that is referenced a lot can be hard to tell apart from what is generated. The most commonly cited example is the Christian Bible, which almost always gets flagged as AI-written. Although I guess that could just be evidence that Asimov's story "The Last Question" is correct. Perhaps Multivac had to create this universe in order to answer the last question asked in the prior universe.

      1. bouncing_b

        Yes, but consider the question buried in my comment above:

        what would be the result of LLMs based on an ever-larger fraction of other LLMs?

        Seems like it would get ever blander and more generic. All the edges getting more and more sanded down. Less and less real content. Maybe down to one final sentence of pure boringness.

        I’d worry about that if I was an LLM developer.
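        The sanding-down worry above can be sketched with a toy simulation. This is purely illustrative, not a claim about how real LLM training works: it just models each "generation" as a Gaussian fit to a finite sample drawn from the previous generation, where the finite-sample estimate of spread is biased low, so diversity tends to shrink over time.

        ```python
        # Toy "model collapse" sketch: each generation is trained (a Gaussian
        # fit) on a finite sample of the previous generation's output. The
        # spread estimate from a small sample is biased low, so the measured
        # diversity of the distribution tends to decay across generations.
        import random
        import statistics

        random.seed(0)  # fixed seed so the run is reproducible

        mean, stdev = 0.0, 1.0   # generation 0: the original "human" distribution
        n = 50                   # each generation trains on only n samples
        history = [stdev]

        for generation in range(200):
            samples = [random.gauss(mean, stdev) for _ in range(n)]
            mean = statistics.fmean(samples)
            stdev = statistics.pstdev(samples)  # biased-low spread estimate
            history.append(stdev)

        print(f"spread after {len(history) - 1} generations: {stdev:.4f}")
        ```

        Under these (admittedly cartoonish) assumptions, the spread drifts toward zero: the edges get sanded off a little each generation, and nothing ever adds them back.
        
        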

  2. Art Eclectic

    Someone at Buzzfeed is already writing an article about the top 20 most outrageous AI fake news headlines.

    To be followed in quick succession by these gems:

    AI Fake News Articles We Wish Were True

    AI Fake News Articles We Wish Weren't True

    How to Spot AI Fake News Articles

    How to Talk With Your Parents About AI Fake News Articles

  3. Austin

    Over 600 websites! On the entire internet! (Whatever happened to Kevin’s rule about how nothing really matters in a country of 330m until you get to the 10,000 or 100,000 plus range?)

  4. geordie

    The bigger issue I have found is not all the bad news articles. To a certain extent I can avoid those by getting my news from known good sites. The bigger issue over the last few months is all of the gibberish books in the Apple and Amazon book stores. I don't expect trash "from" such well-known brands, so my filters are lower. It turns out, though, that at the moment they have no real incentive to get rid of the rubbish that people "publish".

  5. cld

    If the AIs work by skimming the internet and consolidating what they find, then in the future, when everything they find has been written by other AIs, won't it all refine down to the same story?

    1. gs

      Isn't this:

      'skimming the internet and consolidating what they find'

      what news.google.com has been doing for some years now? The problem with any site Google manages is that Google's entire business model revolves around extracting click histories from people and monetizing them. The click history allows Google to push content to you on, for instance, news.google.com in order to keep you engaged, which skews the feed.

      1. Art Eclectic

        Google and others need a new business model.

        If I ask a question, I don't want 15 sites that might have my answer, I just want the answer. Google is going to have to sell the answer, which is a scary road because lots of people would like to manipulate that answer.

        1. jeffreycmcmahon

          Currently we live in a world in which companies are incentivized to sell products and services that are as bad as possible without crossing the line into being totally unusable.

        2. bouncing_b

          We already have that: Bing's ChatGPT.

          The big difference between that and a Google search is that instead of pages of answer choices (where you get to exercise judgment and experience to pick the ones you want to read), you get a few-paragraph, apparently complete and authoritative single answer. No judgment required (or available). It's very convenient.

          I suspect this is where all the search engines are going.

  6. gs

    My guess is that you are 100 times more likely to run into fake news content of any sort if you get your "news" on facebook or youtube or news.google.com or any other site that watches your click history and pushes content at you accordingly. If you go to a news provider that presents the same page to everyone then (hopefully) there's less fake content. If you go to, say, 10-15 varied sites of this sort and scan their "front page" then you probably have a reasonable idea what's going on.

    I have to admit that it's getting harder and more time consuming to remain an informed citizen.

  7. KinersKorner

    Can’t you just subscribe to a few newspapers and read a few magazines? I am relatively informed and do that. I also get local news like traffic and weather, and what dopey Adams is saying on the radio news.
