I'm not sure how the consortium assigns different Facebook topics to different newspapers, but apparently the Washington Post won the topic of how Facebook's news feed algorithm has been tweaked over the years. This is finally something interesting to a numbers nerd like me!
The core of the story is Facebook's decision to give users more options for responding to a post than a simple Like. The new options, rolled out in 2016, were emojis for “love,” “haha,” “wow,” “sad” and “angry,” and starting in 2017 they were weighted much more strongly than Likes. This was all part of Facebook's effort to favor posts that generated interactions of any kind, since that produced ongoing conversations that kept people engaged on the site.
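To see how that plays out, here's a minimal sketch of reaction-weighted ranking. The weights are hypothetical: emoji reactions reportedly counted about five times as much as a Like at first, but Facebook's actual scoring function is far more complicated and isn't public.

```python
# Hypothetical weights for a reaction-based ranking score. Emoji reactions
# reportedly counted about five times as much as a Like at first; Facebook's
# real scoring function isn't public, so treat these values as a sketch.
REACTION_WEIGHTS = {
    "like": 1.0,
    "love": 5.0,
    "haha": 5.0,
    "wow": 5.0,
    "sad": 5.0,
    "angry": 5.0,
}

def engagement_score(reactions: dict[str, int]) -> float:
    """Sum each reaction count times its weight to get a ranking score."""
    return sum(REACTION_WEIGHTS.get(kind, 0.0) * n
               for kind, n in reactions.items())

# A post that provokes a few angry clicks can outrank one that quietly
# collects far more Likes:
print(engagement_score({"like": 100}))              # 100.0
print(engagement_score({"like": 10, "angry": 25}))  # 135.0
```

Under any scheme like this, outrage is cheap fuel: a handful of angry reactions is worth more to a post's ranking than a much larger pile of Likes.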
Needless to say, it's the "angry" emoji that caused all the trouble. But the story here is pretty interesting. First, there's this:
It was apparent that not all emotional reactions were the same. Anger was the least used of the six emoji reactions at 429 million clicks per week, compared with 63 billion likes and 11 billion “love” reactions, according to a 2020 document.
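A quick back-of-the-envelope calculation, using only the figures in that quote, shows just how small a slice that is:

```python
# Weekly reaction counts from the 2020 document quoted above.
angry = 429e6   # 429 million angry reactions
likes = 63e9    # 63 billion Likes
loves = 11e9    # 11 billion "love" reactions

# Angry as a share of just these three types. The counts for "haha," "wow,"
# and "sad" weren't given, so the true share is a bit smaller still.
print(f"{angry / (angry + likes + loves):.2%}")  # 0.58%
```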
Yowza! The "angry" emoji is little more than background noise. Still, Facebook engineers recognized a problem:
Facebook’s data scientists found that angry reactions were “much more frequent” on problematic posts: “civic low quality news, civic misinfo, civic toxicity, health misinfo, and health antivax content,” according to a document from 2019. Its research that year showed the angry reaction was “being weaponized” by political figures.
In 2018 Facebook downgraded the importance of the "angry" emoji. In 2019 it tweaked the algorithm to demote content that was drawing an excessive number of angry reactions. In 2020, as evidence continued to flood in, the "angry" emoji was downgraded again, along with a couple of other emojis. Finally, a few months ago, its weight was cut to zero. In addition to the political weaponization of the "angry" emoji, Facebook discovered that users didn’t like it when their posts received angry reactions. And in the end, it didn't cost Facebook anything:
When Facebook finally set the weight on the angry reaction to zero, users began to get less misinformation, less “disturbing” content and less “graphic violence,” company data scientists found. As it turned out, after years of advocacy and pushback, there wasn’t a trade-off after all. According to one of the documents, users’ level of activity on Facebook was unaffected.
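In code terms, against the earlier sketch, the 2019 fix is easy to picture. The threshold and penalty below are invented for illustration; Facebook hasn't published the actual demotion rule.

```python
# Hypothetical demotion rule in the spirit of the 2019 change. The 50%
# threshold and 4x cut are invented for illustration; Facebook hasn't
# published the actual rule.
def demote_if_angry(score: float, reactions: dict[str, int],
                    threshold: float = 0.5, penalty: float = 0.25) -> float:
    """Cut a post's ranking score when angry reactions dominate."""
    total = sum(reactions.values())
    if total > 0 and reactions.get("angry", 0) / total > threshold:
        return score * penalty
    return score

# A post whose responses are mostly angry clicks gets knocked down the feed:
print(demote_if_angry(135.0, {"like": 10, "angry": 25}))  # 33.75
```

And the final fix is, in this framing, a one-line change: set the "angry" entry in the weight table to zero, so those clicks simply stop counting.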
So what do we take from this? Giving users additional ways to respond to posts isn't a bad thing. Experimenting with the news feed algorithm isn't a bad thing. Trying to promote conversation isn't a bad thing. Responding to the ill effects of an emoji isn't a bad thing. And eventually killing it altogether isn't a bad thing.
But. It seems as if the thing that finally caught Facebook's attention wasn't the effect of the "angry" emoji on disinformation or conspiracy theory swill. It was the fact that users didn't like it and it didn't seem to be working anyway. If that hadn't been the case, would Facebook have done the same thing just for the sake of being a good corporate citizen? It seems unlikely.
POSTSCRIPT: Is Facebook less toxic now that the "angry" emoji isn't used to weight posts? We need research! And this kind of research seems like it could be done even without Facebook's cooperation. Let's get cracking, grad students.