
Here’s how AI disinformation works in the real world

This is from the Stanford AI Index Report:

Slovakia’s 2023 election illustrates how AI-based disinformation can be used in a political context. Shortly before the election, a contentious audio clip emerged on Facebook purportedly capturing Michal Šimečka, the leader of the Progressive Slovakia party, and journalist Monika Tódová from the newspaper Denník N, discussing illicit election strategies, including acquiring voters from the Roma community.

The authenticity of the audio was immediately challenged by Šimečka and Denník N. An independent fact-checking team suggested that AI manipulation was likely at play. Because the clip was released during a pre-election quiet period, when media and politicians’ commentary is restricted, the clip’s dissemination was not easily contested.

The clip’s wide circulation was also aided by a significant gap in Meta’s content policy, which does not apply to audio manipulations. This episode of AI-enabled disinformation occurred against the backdrop of a close electoral contest. Ultimately, the affected party, Progressive Slovakia, lost by a slim margin to SMER, one of the opposition parties.

I had never heard of this. But if it can happen in Slovakia, it can happen here. There's also this:

In 2023, case studies emerged about how AI could be used to automate the entire generation and dissemination pipeline. A developer called Nea Paw set up CounterCloud as an experiment in creating a fully automated disinformation pipeline.

As the first step in the pipeline, an AI model (a) continuously scrapes the internet for articles and automatically decides which content to target with counter-articles. Next, another AI model is tasked with (b) writing a convincing counter-article, which can include images and audio summaries. The counter-article is (c) then attributed to a fake journalist and posted on the CounterCloud website. Next, (d) another AI system generates comments on the counter-article, creating the appearance of organic engagement. Finally, an AI searches X for relevant tweets, (e) posts the counter-article as a reply, and (f) comments as a user on these tweets. The entire setup for this authentic-looking misinformation system costs only around $400.

Hmmm. If you can do this for $400, just imagine what you can do for $4,000 or $400,000.

8 thoughts on “Here’s how AI disinformation works in the real world”

  1. zic

    Recently, a paper mill in a nearby town had a release of what's called 'black liquor' in the Kraft paper-making process; essentially the plant sugars mixed with the caustic pulping liquor. It caused brown snow which was caustic but not toxic.

    AI seems to have been unable to determine the difference between toxic black liquors, created by chlorine bleaching, and non-toxic black liquors. It caused a panic among people who searched to find out if they were in danger and should evacuate the area of the mill.

    Of note was that the local news reporters, while they didn't repeat the AI misinformation, also failed to correct it, which further inflamed the angst of residents around the mill.

    Having reported on the industry for many years, I knew the quick Google AI answer was incorrect, and so looked at the documents the AI was using to generate the answers. The facts were in the documents, but they were written so that it took some intelligence and discernment to parse the difference between the bleaching methods used to make the pulp.

    (For what it's worth, I do not believe any paper mill in the US uses chlorine bleaching any longer; if I trusted AI, I would ask it why — probably something in a Clean Air Act rule, but also because oxygen bleaching is cheaper.)

  2. Dana Decker

    We are rapidly approaching the point where no information can be trusted unless it comes from, er, trusted outlets. That is, nothing posted on social media, nothing from non-mainstream entities. Stifling though it might be, a return to the information dynamics of the previous century may be the only option in the short run. In the U.S. that means: only the big three networks & PBS/NPR, a revival of newsweeklies (like Time), and selected Big Name newspapers. The Establishment, in other words. Establishment media made big mistakes (e.g. mostly pro Iraq War) but bad as it was, it could be the best we can hope for right now.

    Maybe NewsGuard, Snopes, FactCheck, and others can be integrated into publications that aspire to be trusted (like a virus scan on downloaded software). Whatever solution emerges, it will no doubt be awkward and cost money.

    Of course, out in MAGA-land, any attempt to bring reality into their lives is doomed. They know the Truth, and that's that.

    Looking at the diagram, Social Media appears to be the source of our misfortune.

  3. name99

    "But if it can happen in Slovakia, it can happen here."

    Why? In particular we don't have idiotic pre-election quiet periods that will limit the debate. Oh, we might have technical versions of these relevant to TV or whatever, but nothing that matters on the US internet.

    I've no idea what happens on Meta, but what will happen on X is that within 24 hrs people of remarkably diverse skills and backgrounds will investigate the issue and provide some sort of analysis.
    Look at that supposed AI disinformation machine; it seems to think all the magic that matters is creating fake headlines (which can then be linked to on X). Uh, no-one who matters on X GIVES FSCK about headlines precisely because they're under the control of people (left, right, I don't care) who don't care about TRUTH.

    ULTIMATELY what's going on here is mainly a deep-seated unhappiness, among the population that USED TO be able to control elections (through newspapers and TV), at the loss of this power. They might not have directly controlled whether party A or party B got into power, but they did control the Overton Window and what got discussed. The whole Rotherham affair is a perfect example of the last gasps of this system.

    The new system allows everyone a voice.
    You may not like everyone having a voice, that's fine. But then STFU about how you're a democrat, admit what you really are.
    This is what everyone having a voice LOOKS LIKE, the "marketplace of ideas" that people claim they want, until they encounter ideas they don't like.

    At the end of the day, the people making these complaints are the exact same people who, twenty years ago, were crying endlessly about the importance of money in politics -- right up until the D's started raising more money than the R's at which point this became an uninteresting and unimportant issue. They've totally burned their credibility.

    For a different version of the same point check out
    https://x.com/i/bookmarks?post_id=1874952833818140678
    49 vs 77 since 2011...
    And yet Kevin was happy to spread his version of the disinformation just two days ago!

    Yeah, this is why no-one who cares about truth is especially interested in these endless complaints about "disinformation"...

    1. jeffreycmcmahon

      I can't think of a nice way to put this, but you're a fool and you're making the world a worse place for everyone by being stupid and unable to discern the stupidity you're spreading. You should be ashamed of yourself. Go sit at the kids' table and stop pretending to be a grownup.

    2. Larry Jones

      name99:
      Your straw man Democrat/liberal who wants to silence ideas breaks down when you consider that a.) the headline is all that matters to the vast majority of viewers, and b.) in your freewheeling "marketplace of ideas" it is the most popular posts that win the day, not the most accurate.

      Some social media mavens need to STFU.

  4. D_Ohrk_E1

    This may be the last opportunity for laws requiring software/AI to produce embedded digital watermarks, before all hell breaks loose and the oligarchs maintain a hold on countries, you know, in the post-post-capitalist world.
