
Here’s how Facebook responded to toxic content

I'm not sure how the consortium assigns different Facebook topics to different newspapers, but apparently the Washington Post won the topic of how Facebook's news feed algorithm has been tweaked over the years. This is finally something interesting to a numbers nerd like me!

The core of the story is Facebook's decision in 2017 to give users more options for responding to a post than a simple Like. The new options were emojis for “love,” “haha,” “wow,” “sad” and “angry,” and they were weighted much more strongly than Likes. This was all part of Facebook's effort to weight heavily any post that generated interactions, since those produced ongoing conversations that kept people engaged on the site.
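
To make the mechanics concrete, here's a minimal sketch of what reaction-weighted ranking might look like. The weights and names below are purely illustrative assumptions on my part, not Facebook's actual internal values, but the basic idea reported by the Post is the same: every reaction adds to a post's engagement score, and emoji reactions count for more than a plain Like.

```python
# Illustrative sketch only: the weights and names here are assumptions,
# not Facebook's actual internal values.
REACTION_WEIGHTS = {
    "like": 1.0,
    "love": 5.0,
    "haha": 5.0,
    "wow": 5.0,
    "sad": 5.0,
    "angry": 5.0,   # later downgraded, and eventually zeroed out
}

def engagement_score(reaction_counts: dict) -> float:
    """Sum each reaction count multiplied by its (hypothetical) weight."""
    return sum(
        REACTION_WEIGHTS.get(reaction, 0.0) * count
        for reaction, count in reaction_counts.items()
    )

# A post with a few hundred angry reactions can outrank a post with far
# more likes as long as "angry" carries the heavier weight...
print(engagement_score({"like": 1000}))               # 1000.0
print(engagement_score({"like": 200, "angry": 300}))  # 1700.0

# ...but not once the angry weight is set to zero.
REACTION_WEIGHTS["angry"] = 0.0
print(engagement_score({"like": 200, "angry": 300}))  # 200.0
```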

Needless to say, it's the "angry" emoji that caused all the trouble. But the story here is pretty interesting. First, there's this:

It was apparent that not all emotional reactions were the same. Anger was the least used of the six emoji reactions at 429 million clicks per week, compared with 63 billion likes and 11 billion “love” reactions, according to a 2020 document.

Yowza! The "angry" emoji is little more than background noise. Still, Facebook engineers recognized a problem:

Facebook’s data scientists found that angry reactions were “much more frequent” on problematic posts: “civic low quality news, civic misinfo, civic toxicity, health misinfo, and health antivax content,” according to a document from 2019. Its research that year showed the angry reaction was “being weaponized” by political figures.

In 2018 Facebook downgraded the importance of the "angry" emoji. In 2019 they tweaked the algorithm to demote content that was receiving an excessive number of angry responses. In 2020, as evidence continued to flood in, the "angry" emoji was downgraded again, along with a couple of other emojis. Finally, a few months ago, it was downgraded to zero. In addition to the political weaponization of the "angry" emoji, Facebook discovered that users didn’t like it when their posts received angry reactions. And in the end, it didn't cost Facebook anything:

When Facebook finally set the weight on the angry reaction to zero, users began to get less misinformation, less “disturbing” content and less “graphic violence,” company data scientists found. As it turned out, after years of advocacy and pushback, there wasn’t a trade-off after all. According to one of the documents, users’ level of activity on Facebook was unaffected.

So what do we take from this? Giving users additional ways to respond to posts isn't a bad thing. Experimenting with the news feed algorithm isn't a bad thing. Trying to promote conversation isn't a bad thing. Responding to the ill effects of an emoji isn't a bad thing. And eventually killing it altogether isn't a bad thing.

But. It seems as if the thing that finally caught Facebook's attention wasn't the effect of the "angry" emoji on disinformation or conspiracy theory swill. It was the fact that users didn't like it and it didn't seem to be working anyway. If that hadn't been the case, would Facebook have done the same thing just for the sake of being a good corporate citizen? It seems unlikely.

POSTSCRIPT: Is Facebook less toxic now that the "angry" emoji isn't used to weight posts? We need research! And this kind of research seems like it could be done even without Facebook's cooperation. Let's get cracking, grad students.

17 thoughts on “Here’s how Facebook responded to toxic content”

  1. Krowe

    Interesting. I usually use the "angry" emoji when someone posts a story documenting the atrocities of the modern Republican party. The friend/poster and I both know I'm angry about the story, not at the post or the poster. But on posts with large numbers of respondents, who can tell what an emoji intends?

  2. Doctor Jay

    I think one of the fundamental problems with social media and "the algorithm" in general is that there is no way to signal "Don't show me more of this!"

    If you clicked the "angry" emoji, before they set it to zero, that meant you'd get MORE content that was similar, not less.

    No wonder it was a problem for the users. And not just the users whose content got marked "angry".

    The business plan dictates that you want more engagement at all times. Therefore there is no way to tell them "stop showing me this crap!" other than ignoring it and hoping it will go away.

    That happens to me in all kinds of places, but intermixed in a feed that's supposed to be my "friends", where it is intimate and personal? That's intolerable, and one reason I avoid Facebook.

    1. Austin

      There are three dots in the upper right corner of every post, ad, whatever on Facebook. The options include Hide Post/Ad, Snooze the person who posted it, Unfollow the person who posted it, and Report Post/Ad for whatever turned you off about it. Allegedly, if you use those options, you get fewer posts/ads from those people/companies, or none at all. I tend to use the three dots and I have never seen posts/ads from those people/companies again... so it seems to work.

      A problem is that users are generally lazier than me in their social media consumption and don't use the three dots anywhere near as frequently as I do... so they're at the mercy of the algorithms. Which really isn't any different from anything else in capitalist society: if you don't ignore the salespeople clamoring for your attention and the people with clipboards at the mall who "just want to talk a minute," you'll quickly be awash in unsolicited verbiage flowing mostly from them to you... and your likelihood of making poor decisions goes up.

  3. Matt Ball

    Good discussion by the Pod Save America guys on today's podcast about how Facebook will serve terrible information to fake accounts just as a matter of course.

    1. OverclockedApe

      Thank you, I was trying to remember the details, and this is just the most recent case of FB killing access to researchers.

        1. OverclockedApe

          I'm not sure he's at that point yet; I kinda think he's on the autism spectrum more than the narcissist/psychopath one. That said, I can see him going Putin down the road depending on how things pan out.

  4. D_Ohrk_E1

    You need more research, really?

    How is it that Facebook can immediately self-censor when countries like Vietnam demand it, but when it comes to hate posts, it is suddenly facing a complicated issue that can only be resolved by constantly reviewing its processes and waiting for leadership from Congress?

    Facebook could stop the spread of hate if it were incentivized to, full stop.

    1. Justin

      Can it?

      By Facebook’s own admission, it removed nearly 5.4 million pieces of content related to child sexual abuse in the fourth quarter of 2020. On Instagram, the number was 800,000.

      1. D_Ohrk_E1

        You're referring to Section 230 things. IOW, FB is not incentivized to effectively block such things as they can't be held responsible.

        When a country tells them they have to censor X, they suddenly have the capacity to enforce a total block.

        Existential risk does not exist so long as Section 230 remains as written on the books.

  5. Justin

    It should be obvious to any casual observer that Facebook etc. are both toxic and a threat.

    American-based social media companies have become active players in digital war, both by accident of design and a subsequent failure to address the threat due to concerns over profits. Discussions about the negative role of social media in society generally address the myriad problems wrought by social media, including electoral manipulation, foreign disinformation, trolling, and deepfakes, as unfortunate side effects of a democratizing technology. This article argues that the design of social media fosters information warfare. With its current composition and lack of regulation, social media platforms such as Facebook and Twitter are active agents of disinformation, their destructive force in society outweighing their contributions to democracy. While this is not by deliberate design, the twin forces of capitalism and a lack of regulation of the world’s largest social media platforms have led to a situation in which social media are a key component of information war around the globe. This means that scholarly discussions should shift away from questions of ethics or actions (or lack thereof) on the part of social media companies to a frank focus on the security risk posed to democracy by social media.

    https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7212244/
