
AI “incidents” are increasing

According to the AI Incident Database, the number of reported "harms or near harms" caused by the deployment of AI is increasing:

These incidents include such things as AI-generated nude images of Taylor Swift, unsafe autonomous cars, and privacy concerns over romantic chatbots. On the other hand, the database also includes an incident of high school cheating, a Google Maps error, and a legal case against TikTok, all of which are AI-related only by a hair. So take this with a grain of salt.

But don't ignore it completely. A lot of the incidents are genuinely disturbing, and it's pretty obvious that they're only going to increase over time.

21 thoughts on “AI “incidents” are increasing”

  1. Massive Gunk

    Shouldn’t the base rate be acknowledged? Isn’t the increase in incidents only meaningful when weighed against the increase in usage?

    1. azumbrunn

      It depends what you want to measure: If you want to find out how error-prone AI is then you measure number of incidents divided by amount of usage (however you measure that--a whole different question).
      If you however are interested in the danger to the population then you do what Kevin did.
      This is going to be a serious concern in fairly short order if the AI crowd achieves the growth they are predicting.

      1. Massive Gunk

        Thank you for explaining. I agree it's important. It would be interesting to know if the rising number of incidents reflects AI's widespread use and not necessarily an increase in the danger it poses. I.e., is AI scaling up reliably, or is it creating bigger dangers as it expands? Thanks,

  2. Bluto_Blutarski

    217 total incidents seems like... not a lot.

    I don't know much about online porn (honest I don't), but I would have thought that fake Taylor Swift nudes would account for at least 200 incidents just by themselves?

  3. golack

    Do they use AI to scour the web for incidents of AI misbehavior?

    Follow the link to the site--it's done in partnership with UL, not just for appliances anymore, and Northwestern (among others). People are not required to report incidents, so absolute numbers do not mean much--but it's a start.

  4. Salamander

    Well, I'm still creeped out by the image of Mr Drum unbuckling his seat belt, crawling across the front seats, opening the passenger side window, leaning out with his camera, and taking a few shots, probably after messing with aperture, shutter speed, focus, etc etc, all as his car sped, driverless, down the freeway.

  5. mcbrie

    The one "incident of high school cheating" is clearly a stand-in for ALL the AI cheating, middle school, high school, college, and grad school, which is on an epic scale. Cheating has obviously been around forever, but AI makes it much faster, easier, and harder to catch and prove. For those of us who teach, and who depend upon students writing papers outside of class as a way to develop their skills in researching, writing, and thinking, this is an existential threat. But term papers are just the tip of the iceberg. Online coursework can't be trusted at all anymore, for example. Lots of teachers, and even entire schools and disciplines, are basically surrendering to the robots out of fear of lawsuits from angry parents.

  6. James B. Shearer

    "... For those of us who teach, and who depend upon students writing papers outside of class as a way to develop their skills in researching, writing, and thinking, this is an existential threat. ..."

    If you can't tell the difference, these skills are now useless, like calligraphy.

      1. James B. Shearer

        "The problem is: You can recognize it but you can't prove it."

        Then mark it down as "mechanical" or "trite" or something. Do you worry about kids having other people do their homework for them, which has been around forever? And there are still in-class exams, right?

        1. mcbrie

          Azumbrunn is exactly right. It's not that hard to recognize once you know what to look for, but proving it is nearly impossible. If a student copies from Wikipedia, that's an easy case: side-by-side comparisons, flags on Turnitin, etc. But AI answers (a) aren't in the public domain, (b) adapt to any oddity you can put in the prompt, and (c) always come out different. I can run AI models of my questions before I grade an essay, but what I get back will never be exact enough for a disciplinary board or grade appeal. You can downgrade them ("trite"), but AI papers have gotten quite good. Soulless, but good. So maybe you give the kid a B. That's still a B for zero work, and possibly a better grade than that kid could have earned on their own. IF you catch them! In-class exams are still useful, but there are skills you get from writing a paper outside class that you can't get sitting for a timed in-class test.

          1. James B. Shearer

            "...but AI papers have gotten quite good. Soulless, but good. So maybe you give the kid a B. That's still a B for zero work and possibly a better grade than that kid could have done on their own. ..."

            So you can now get pretty good answers with no work, answers that are often better than a kid can produce on their own. Then that kid is just wasting their time taking the course. They should be taking a course on how best to use AI assistants.

    1. Joseph Harbin

      If "thinking" is one of the "skills ... now useless like calligraphy," that's a pretty damning indictment.

      On the other hand, that may be as good an explanation as any for the rise of Donald Trump.

  7. Bluto_Blutarski

    Interesting story from a teacher about a student who presented a paper stating that the Greek language was a mix of four other languages.

    She corrected it, and the student complained. He had found the answer online using AI.

    She explained that the AI was wrong. His response: "Am I going to believe you, a teacher earning minimum wage, or a tech that people have spent billions of dollars developing? Why don't you just admit you didn't know this, and give me an A?"

    This... is the future.

    1. Crissa

      That's uhh... interesting logic that young person has.

      Teachers don't get paid minimum wage, and are hired on their skills to determine when things are wrong or not. Sometimes they're wrong.

      The billions of dollars were spent to generate a trillion random answers, not specifically his answer, so his answer was worth even less than the teacher's.

      1. painedumonde

        A billion-dollar scam to mislead an entire generation into believing that the regurgitation of all of humanity's thoughts, whether deviant, enlightened, or altogether banal, is truth, or even palatable. That is the real problem: the scientific method, or any method for that matter (even cults have some sort of internal logic), has been discarded.

        Computer says noooo.

  8. name99

    "AI generated nude images of Taylor Swift"

    You don't need AI to do this. Back in my day, in the late 1990s, you could do it just fine with Photoshop.
    Calling this an AI issue is basically the same scam as the invention of the term Weapons of Mass Destruction -- it's a way to direct nuclear-level outrage at trivial chemical-weapons-level threats.

    And if you play along, then you don't get to complain when it's used in a way you don't like, as it certainly will be. GWB used a mendacious term that people like you accepted to justify the attack on Iraq. How you gonna feel when Vance uses the argument of "malicious AI" to justify an attack on, I don't know, South Africa, because their government used AI to create a bunch of memes on Twitter?
