
Instagram has a pedophile problem

The Wall Street Journal reports today that Instagram "helps connect and promote" a huge network of pedophile sites:

Pedophiles have long used the internet, but unlike the forums and file-transfer services that cater to people who have interest in illicit content, Instagram doesn’t merely host these activities. Its algorithms promote them. Instagram connects pedophiles and guides them to content sellers via recommendation systems that excel at linking those who share niche interests, the Journal and the academic researchers found.

A related report from Stanford's Internet Observatory notes that searches on Instagram for pedophile content sometimes throw up a warning—but, oddly, include an option to "see results anyway."

It's one thing to host content inadvertently. No social media platform can monitor and control 100% of its content. But the Achilles heel of these platforms—for illicit content but also more generally—is their recommendation algorithms, which have the capacity to actively promote and connect users to dangerous or addictive material. Instagram is especially culpable in this case since the Internet Observatory report notes that TikTok doesn't have a similar pedophilia problem. "TikTok appears to have stricter and more rapid content enforcement," the report says, and if TikTok can do it then so can Instagram—if they bother trying.
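To make that concrete, here is a minimal sketch of a co-engagement recommender of the kind the Journal describes (not Instagram's actual system, which isn't public; the account names and engagement log are invented). The point is that any recommender built to "link those who share niche interests" will connect a niche community automatically unless flagged accounts are explicitly excluded:

```python
from collections import defaultdict
from itertools import combinations

# Hypothetical engagement log: (user, account_they_engaged_with).
engagements = [
    ("user_a", "acct_1"), ("user_a", "acct_2"),
    ("user_b", "acct_1"), ("user_b", "acct_2"),
    ("user_c", "acct_2"), ("user_c", "acct_3"),
]

# Count how often two accounts are engaged with by the same user.
accounts_by_user = defaultdict(set)
for user, acct in engagements:
    accounts_by_user[user].add(acct)

co_engagement = defaultdict(int)
for accts in accounts_by_user.values():
    for a, b in combinations(sorted(accts), 2):
        co_engagement[(a, b)] += 1

def recommend(account, blocked=frozenset()):
    """Suggest the accounts most often co-engaged with `account`,
    skipping anything on an enforcement blocklist."""
    scores = defaultdict(int)
    for (a, b), n in co_engagement.items():
        if a == account and b not in blocked:
            scores[b] += n
        elif b == account and a not in blocked:
            scores[a] += n
    return sorted(scores, key=scores.get, reverse=True)

print(recommend("acct_1"))                      # ['acct_2']
print(recommend("acct_1", blocked={"acct_2"}))  # [] -- enforcement breaks the link
```

The enforcement question in the Journal's story is essentially whether that second call ever happens: whether the accounts the moderation side knows about actually end up on the blocklist the recommender consults.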

11 thoughts on “Instagram has a pedophile problem”

  1. Jasper_in_Boston

    Observatory report notes that TikTok doesn't have a similar pedophilia problem

    Surely it can't be possible that an evil Communist Chinese company doesn't have a pedophile problem but a virtuous American company does. I reckon Xi Jinping is messing with Instagram.

  2. Jim Carey

    Start with morality - concern about the effects that your decision might have on the lives of others - and you can't go too far wrong.

    Start with cynicism - the assumption that "no one cares so why should I" - and God knows what depths of depravity you will be able to achieve.

  3. cld

    Wouldn't law enforcement be able to just buy or subpoena the profile packages social media companies cultivate and sell to advertisers?

    1. weirdnoise

      As Meta would take pains to explain, they don't directly provide user data except on a limited basis for research (remember Cambridge Analytica, anyone?) or law enforcement. What they sell is a platform for ad placement in which advertisers buy a slice of audience that Meta delineates via its algorithms. Advertisers then produce ads which they contract with Meta to display to the purchased interest segments for the purchased time and/or repetitions.

      This is how they can claim that they "don't sell user data". Google does the same. It's analogous to long-standing practice in publishing, where advertisers buy ad spaces and then provide the "creatives" to place in those spaces. What Meta, Google (by far the biggest player in this game), and other ad brokers do is sell access to the eyeballs of users selected from literally tens of thousands of categories and subcategories.

      What's absolutely astonishing here is the apparent gap between their ad placement system, their content search (which would help feed the former, along with various interest-tracking inputs -- likes, comments, etc.), and their content moderation and anti-abuse team, which is likely where their anti-CSAM efforts would live. It's a case of the left hand not knowing what its right hand is doing, given that they are a major player in the mutual anti-CSAM effort that a number of online companies participate in.
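A rough sketch of the ad model weirdnoise describes above (simplified and hypothetical, not Meta's or Google's actual systems; every name and segment is made up): the advertiser only ever names interest segments and supplies a creative, and the platform resolves those segments to users on its own side.

```python
# Hypothetical sketch of segment-based ad delivery: the advertiser buys
# audience segments and supplies a creative; only the platform maps
# segments to user IDs, so no user data changes hands.

# Platform-internal profile data, built from likes, searches, follows, etc.
user_segments = {
    "user_123": {"hiking", "camera_gear"},
    "user_456": {"camera_gear", "baking"},
    "user_789": {"baking"},
}

# What the advertiser actually submits: segments, a creative, a budget.
campaign = {
    "advertiser": "ExampleLensCo",
    "targeted_segments": {"camera_gear"},
    "creative": "ad_banner_42.png",
    "max_impressions": 1000,
}

def show_ad(user_id, creative):
    print(f"[platform-internal] rendering {creative} for {user_id}")

def deliver(campaign, user_segments):
    """Platform-side matching; the advertiser gets counts back, not identities."""
    shown = 0
    for user_id, segments in user_segments.items():
        if shown >= campaign["max_impressions"]:
            break
        if segments & campaign["targeted_segments"]:
            show_ad(user_id, campaign["creative"])
            shown += 1
    return {"impressions": shown}

print(deliver(campaign, user_segments))  # {'impressions': 2}
```

This is also the answer to the question below about how targeted ads find you: the advertiser never learns who you are, because the platform does the lookup entirely on its side and reports back only aggregates.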

      1. cld

        They may not do it in practice, but I don't understand how they wouldn't be able to identify that slice of the audience that is looking at this material as a group in response to a subpoena or law enforcement investigation.

        Also I don't understand how advertisers are then able to find me with targeted ads if they don't know who I am.

    1. weirdnoise

      That's the thing that amazes me. Content search would seem to be able to detect possible CSAM but the anti-abuse side of the company doesn't seem to be able to act on that data.
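As a sketch of the wiring weirdnoise is pointing at (hypothetical; real deployments use perceptual hashes such as PhotoDNA-style fingerprints against an industry-shared hash list, and the names here are placeholders): the same ingestion step that feeds the search index can check content against known-abuse hashes and route matches to the abuse team instead.

```python
import hashlib

# Hypothetical sketch: detection and enforcement sharing one signal.
# Real systems use perceptual hashes (PhotoDNA-style), not SHA-256,
# and a hash list maintained through industry sharing programs.

KNOWN_ABUSE_HASHES = {"<digest supplied by a hash-sharing program>"}
abuse_review_queue = []
search_index = []

def ingest(item_id: str, content: bytes) -> None:
    """Index new content for search, unless it matches a known-abuse hash."""
    digest = hashlib.sha256(content).hexdigest()
    if digest in KNOWN_ABUSE_HASHES:
        # Don't index or recommend; escalate to the anti-abuse queue instead.
        abuse_review_queue.append(item_id)
        return
    search_index.append(item_id)

ingest("post_001", b"ordinary photo bytes")
print(search_index, abuse_review_queue)  # ['post_001'] []
```

The gap the commenters describe is when the scanning branch exists but its output never reaches the queue that enforcement actually works from.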

  4. iamr4man

    “Cat and Dog torture videos litter Twitter”

    “Clemens said that she clicked on the suggested search term and a gruesome video of what appeared to be a kitten being killed inside of a blender appeared instantly. For users who have not manually turned off autoplay, the video will begin rolling instantly. NBC News was able to replicate the same process to surface the video on Wednesday.”
    https://www.nbcnews.com/tech/tech-news/cat-dog-torture-videos-litter-twitter-adding-concerns-moderation-rcna84190

  5. Justin

    Most people would think that a bar hosting a Saturday night party for people to share kid pics should be shut down even if they also have just regular customers the other nights. Apparently this logic doesn’t apply to Instagram. Zuckerberg and his management team are, I think, criminals. For that matter, Instagram users are too.

  6. Special Newb

    They say "view it anyway" because the flagging is automated. They don't actually know what's behind the flag, and legitimate content the algorithms flagged would otherwise get blocked.

    Like how a discussion of human rights abuses can get flagged as torture porn.
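A minimal sketch of the trade-off Special Newb is describing (the thresholds and scores are invented): an automated classifier only yields a confidence score, so platforms tier their responses rather than hard-blocking everything the model flags.

```python
# Hypothetical tiered moderation: hard-block only near-certain violations,
# show an interstitial ("see results anyway") for uncertain ones.

BLOCK_THRESHOLD = 0.97  # near-certain: remove/block outright
WARN_THRESHOLD = 0.60   # uncertain: warn, but allow click-through

def moderation_action(classifier_score: float) -> str:
    if classifier_score >= BLOCK_THRESHOLD:
        return "block"
    if classifier_score >= WARN_THRESHOLD:
        return "warn_with_click_through"
    return "show"

# A human-rights documentation post mis-scored at 0.70 gets a warning
# rather than removal; lowering BLOCK_THRESHOLD trades more false
# positives for fewer missed violations.
print(moderation_action(0.70))  # warn_with_click_through
print(moderation_action(0.99))  # block
```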

Comments are closed.