Leave Section 230 alone

Today the Supreme Court heard arguments in Gonzalez vs. Google. It's really Gonzalez vs. YouTube, but since Google owns YouTube it gets pride of place in the lawsuit.

Long story short, the question is whether YouTube should be held liable for hosting content from Islamic State—i.e., ISIS. The Gonzalez family says that YouTube's algorithms promoted ISIS content, and that the company is therefore partially responsible for the death of an exchange student at the hands of ISIS.

So far Google has won in lower courts thanks to Section 230, a piece of federal law that prohibits online platforms from being sued over user content they host—comments, blog posts, videos, etc. Critically, Section 230 holds online platforms harmless even if they moderate this content, as YouTube does algorithmically.

Thanks to the general dislike of large online platforms these days—Democrats don't like them because they're monopolies, Republicans don't like them because of a delusion they're anti-conservative—there's a surprising, if vague, bipartisan movement to kill or modify Section 230. But how would that work out?

Big online platforms play host to millions or billions of user comments every day. It is literally impossible to moderate all of them by hand for all possible abuses. An algorithm can try, but algorithms are imperfect and will never catch everything. If you remove protection from any platform that uses an algorithm—as the Gonzalez family is asking—you would implicitly be opening up these platforms to a massive barrage of lawsuits.

Alternatively, online platforms could stop moderating user content altogether. That would leave them protected by Section 230, but no one would ever use them again. An unmoderated platform would be so disgusting that it would drive every decent person away.

And it would be illegal anyway because there are a few subjects that online platforms are responsible for regardless of Section 230. The best known of these is child porn, which platforms are required to take down to the best of their ability.

Section 230, or something very similar, is the lifeblood of social media. Just about every country in the world has some version of Section 230, and algorithms are clearly the only way to effectively moderate the torrent of content on modern platforms. Like it or not, we're stuck with them. There's really no practical way around this, and they need the same protection as human moderators.

The Supreme Court should leave Section 230 alone and leave it to Congress to make changes if it desires. Stomping around in Section 230 like a bull in a china shop is the last thing we need from the court.

38 thoughts on “Leave Section 230 alone”

  1. different_name

    For the serious revanchist culture warrior, killing them is a win.

    If you let normal people talk amongst themselves without what they consider appropriate supervision, people come to surprisingly normal conclusions. Like, rich people don't deserve *all* the things, routinely beating black people to death at traffic stops is wrong, that sort of thing.

    So if they can't be Truth Social, then they need to go away.

    So the question is, how confident do 5 of 9 feel today that the sheep will just keep grazing?

  2. dspcole

    But wait, isn’t the Kevin Drum blog protected by 230? Is this a conflict of interest? Can I sue Kevin if Eve’s plan doesn’t work out and doesn’t make me $12000 a month?
    I have no idea what I am talking about.

  3. painedumonde

    Very much like what will happen to the Russian Federation after Ukraine is finished with their derelict military, we (the State, the legal eagles, the public at large) are treading into territory where we have no idea what result will materialize. Collapse of significant social media? Construction of a surveillance state that would make an NSA neckbeard blush? The final straw on Zuck's and Bezos' backs, prompting them to allow their networks to absorb us all?

    But hey that's what America is all about.

  4. sonofthereturnofaptidude

    230 puts it all on the users and reduces the hosts to hand-wringing bystanders. What of the algorithms that amplify calls for violence and make huge profits for Google?

    The problem isn't hosting such content and not removing it; the problem is that people who favor rage-filled diatribes against out-groups are fed a steady stream of it in order to maximize revenue. Any algorithm that encourages violence for profit should be illegal.

    1. different_name

      You are completely ignoring the demand side of the equation. If you actually want to try to put 'civil' back into civil society, your efforts are doomed until you recognize that.

      Among other things, taking your recommendation at face value just sets you up for endless whack-a-mole, hair-splitting and fights over the 1st Amendment. And what if I run my hate-emporium out of some failed country?

      1. sonofthereturnofaptidude

        Regulating business is always endless whack-a-mole. In this case, the SCOTUS might decide that an algorithm is not moderation, since it operates automatically, pushing some content and pulling other content based on previous use, consumer preferences, the profile that Google has built up of the user, etc.

        YouTube is not a publisher of content. Users upload their content and the company promotes what is uploaded to its users. Some of those users get pushed ISIS snuff videos; others get otter videos. Google is treating those things as though they were effectively the same. They're not.

        Google monetizes content by selling user information to advertisers and by selling advertising. It doesn't have to be that way, and Congress has the authority to regulate that commerce. See https://www.coindesk.com/markets/2020/06/10/radical-indifference-how-surveillance-capitalism-conquered-our-lives/

  5. Solar

    I don't know the full details of the lawsuit nor the specific language used in Section 230, but at least to me, this:

    "The Gonzales family says that YouTube's algorithms promoted ISIS content"

    And this:
    "It is literally impossible to moderate all of them for all possible abuses."

    Are two very different things.

    The latter is basically a filter to remove garbage and no filter will be perfect, but the former is actually an endorsement that intentionally offers you garbage.

    Providers shouldn't be held liable for the content users create, and as long as they make an honest effort to moderate things they should be protected from lawsuits.

    However, once the algorithm is actually promoting specific content, that is no longer just on the users since the company is actively suggesting you watch certain content. They are no longer a passive host of content.

    1. chaboard

      I don't know the details either but I'm guessing the plaintiffs don't know the algorithms either and are using 'promote' in the vague sense of 'these are the videos suggested to the user'.

      Which, if you assume more than one video made it through the garbage filter, means that a user 'liking' one will cause the algorithm to show him more - not because the algorithm is 'promoting' ISIS, but because the algorithm shows you things like what you like, and you unfortunately liked some garbage that made it through the filter...

    2. kennethalmquist

      Social media companies are not passive hosts of content. When you log into a social media site, the site won't show you every bit of content on the site--that would be impossible. And it won't show a random selection of the content. It will show you content that the site's algorithms select to maximize engagement.

      There is no universally agreed upon definition of “garbage.” Consider the use of profanity. A site could ban all posts containing profanity. Alternatively, it could say to users, “sure, you can use profanity, but if you do that, your posts won't be seen by users who prefer a profanity-free experience.” I think that the latter might be a better approach. But because filters are imperfect, material being promoted as profanity-free might actually contain profanity. More likely, it will be profanity-free, but will be objectionable in other ways. For example, it might be ISIS recruitment material. When that happens, that doesn't mean that the company “intentionally offers you garbage,” it just means that their filters are imperfect.

      In short, social media companies use algorithms both to identify posts to ban from the platform completely and to identify posts to present to particular users. If you modify section 230 to protect the former but not the latter, that's going to affect how social media companies operate. Before doing this, it's important to figure out how social media companies will change in response, and decide whether that would be an improvement.
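
      To make that distinction concrete, here is a minimal sketch, with entirely made-up term lists and a toy classifier, of the two jobs described above: removing content from the platform outright versus deciding which of the remaining posts a particular user is shown. It is an illustration only, not a description of any real platform's code.

```python
# Hypothetical sketch of the two roles described above; the term lists and
# function names are invented for illustration, not drawn from any real platform.

BANNED_TERMS = {"recruitment_video"}   # stand-in for content removed from the site entirely
PROFANITY = {"damn", "hell"}           # stand-in for an imperfect soft classifier

def is_banned(post: str) -> bool:
    """Hard removal: content no user should ever be shown."""
    return any(term in post.lower() for term in BANNED_TERMS)

def contains_profanity(post: str) -> bool:
    """Soft classification: imperfect, so some profanity will slip through."""
    return any(word in PROFANITY for word in post.lower().split())

def feed_for(prefers_clean: bool, posts: list[str]) -> list[str]:
    """Everything not banned stays on the site; what each user sees differs."""
    visible = [p for p in posts if not is_banned(p)]
    if prefers_clean:
        visible = [p for p in visible if not contains_profanity(p)]
    return visible

posts = ["otter video", "damn fine otter video", "recruitment_video clip"]
print(feed_for(True, posts))   # ['otter video']
print(feed_for(False, posts))  # ['otter video', 'damn fine otter video']
```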

  6. Austin

    "Section 230, or something very similar, is the lifeblood of social media. Just about every country in the world has some version of Section 230..."

    Most countries in the world do NOT have a version of Section 230 codified into law. This is really obvious in the countries that do not have any general freedom of speech (China, the countries of the Middle East, etc.) but it's also obvious in efforts of the EU to promote privacy rights online.

    Below is a short article I found in a quick Google search stating that basically the entire world's social media usage is dictated by Section 230 because most of the social media giants themselves are headquartered in the US and/or they store the content that other countries wouldn't want posted on US-based servers. And then other social media giants that are headquartered outside the US have to compete with the US allowing basically everything and anything online with no liability for the companies hosting the content... and lobby their own governments to allow them to host similar content and/or avoid liability for content too... or lobby their own governments to block the US social media companies (in effect creating a captive user base unable to switch social media provider easily or legally). Feel free to do your own research on the topic, Kevin.

    But make no mistake about it, it's not that the other countries "chose" to implement a version of Section 230 so much as it's just that the US got to the internet first and set up content-hosting rules that foreign social media competitors have had to figure out a way to compete with.

    https://www.codastory.com/authoritarian-tech/global-consequences-section-230/

  7. Dana Decker

    Not sure about "It is literally impossible to moderate all of them for all possible abuses"

    A platform could hire one moderator for every 300 users. That would probably reduce harmful content to near-zero. It would be hugely expensive, but it's not "literally impossible".

      1. RiChard

        Quadrupled, by the time you figure in leave, training, etc. Also I come up with ~3.3m moderators for every billion users, just sayin'.
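
        For what it's worth, the arithmetic above checks out; a quick back-of-the-envelope check (the one-billion-user figure is purely illustrative):

```python
# Back-of-the-envelope check of the ratio above; the user count is illustrative.
users = 1_000_000_000        # one billion users
moderators = users // 300    # one moderator per 300 users
print(moderators)            # 3333333 -> roughly 3.3 million
print(moderators * 4)        # 13333332 -> quadrupled for leave, training, etc.
```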

  8. Brett

    It depends on the platform. Twitter without Section 230 would basically have to eliminate both recommendations and the "For You" feed, and then essentially leave curation up to individuals using "mute and block" tools and the reverse-chronological feed. The individual person's Twitter experience in that situation could be okay, but the overall site would be a sewer of legal spam and abusive stuff that doesn't cross the line into illegal content.
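
    A rough sketch of that individually curated model - no ranking or recommendations, just user-managed block/mute lists and a reverse-chronological feed. The data shapes here are invented for illustration and are not drawn from Twitter's actual systems:

```python
# Hypothetical illustration of a purely reverse-chronological, user-curated feed;
# the Post type and field names are invented for this sketch.
from dataclasses import dataclass

@dataclass
class Post:
    author: str
    timestamp: int   # e.g. seconds since the epoch
    text: str

def timeline(posts: list[Post], blocked: set[str], muted: set[str]) -> list[Post]:
    """Drop posts from blocked or muted accounts, then show newest first; no ranking."""
    hidden = blocked | muted
    visible = [p for p in posts if p.author not in hidden]
    return sorted(visible, key=lambda p: p.timestamp, reverse=True)
```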

  9. Ogemaniac

    Without Section 230, this comment would not exist. Nor would your reply, nor any small comment board like this. Kevin could not take the risk of being sued for your comment, or mine.

  10. shapeofsociety

    Fortunately, it looks like you are likely to get your wish. The Justices were clearly very skeptical of the Gonzalez family lawyers' arguments and seem overwhelmingly inclined to keep Google immune unless Congress wants to change the law and spell out some more ways that platforms can be held liable.

  11. kahner

    i haven't dug into the details of 230 language or related case law, but if it doesn't already exist, it seems some sort of reasonableness standard could and should be implemented. if you can prove a company didn't make a reasonable effort to moderate and remove certain types of content, or knowingly allowed it then the company could be held liable for damages. a blanket amnesty for liability on any user generated content on a company's platform is not necessary for the internet to function.

  12. Justin

    Well, it’s too late anyway. The damage is done. Still, I’d love to get revenge on the likes of Alex Jones and all the rest of the degenerates in social media. They are happy to profit from mass murder. I’d be happy if they were on the receiving end of ISIS style justice.

  13. kenalovell

    Do Supreme Court justices use Twitter and the Goggle? They might need a crash course in 'Social Media: Its origins and evolution in contemporary society' before they can understand the issues before them.

    Perhaps courts should be able to require website owners to disclose all the information they have about a user's identity if someone can show probable cause to sue them. The level of online discussion might improve remarkably if people posting defamatory tripe knew they were at risk of losing their anonymity.

  14. D_Ohrk_E1

    "An unmoderated platform would be so disgusting that it would drive every decent person away."

    And yet, you and most people are still on birdsite. How's that going?

    1. Rattus Norvegicus

      It's a hellhole, but the network effects are strong. At least Dinesh D'Souza and Charlie Kirk aren't showing up as much in my feed as they were a few weeks ago.

  15. J. Frank Parnell

    Letting nine people whose knowledge of the internet and algorithms is minimal make the law? That makes about as much sense as letting nine people with a minimal knowledge of mathematics or statistics decide whether a redistricting plan is gerrymandering. Or nine people with a minimal knowledge of physics or biology decide whether extending the detectable range of emitted radiation beyond the normal limit of the human eye constitutes an invasive search. We really could use some justices with a background in science and tech in addition to those who received humanities or social science degrees before they attended Harvard/Yale Law School.

  16. gooner78

    For a glimpse of what our digital future will be, one need look no further than the writing of Neal Stephenson. From uncanny predictions of the emergence of massive multiplayer online gaming (Snow Crash - 1992) to the evolving shift back toward symbol-based writing (aka emojis) (Anathem - 2008) to the dissolution of social media (and social fabric) (Fall, or Dodge in Hell - 2019), he has consistently and accurately previewed where we are going. Not always pretty, but also not without solutions.

  17. Jasper_in_Boston

    "The Supreme Court should leave Section 230 alone and leave it to Congress to make changes if it desires. Stomping around in Section 230 like a bull in a china shop is the last thing we need from the court."

    This 1000%. That goes for about 99% of the time Congress has the opportunity to wade into complex policy debates. They're just not expert enough. Let the legislative branch do this sorta thing.

    Having said that, if Congress does want to take a look at Section 230, while I'd urge caution and mostly inaction, I do think there's one sub-issue that perhaps deserves a look: virality. It is this quality, in the main, that people are talking about when they discuss the dangers and the harm of social media.

    If a member of a right wing militia urges via social media that the houses of liberals be firebombed, it's not necessarily a big problem. We're a large country, and we're inevitably going to see examples of extremists and cranks. But it may well become a problem if that person's Tweet or video or Facebook post gets retweeted or forwarded three million times.

    Virality is really the challenge.

    So, the change I would like to see is: Section 230 remains intact except for algorithmically induced virality. As a general rule, Facebook and TikTok and Twitter and ISPs and the Washington Post comments section shouldn't be held liable for the content placed on those platforms by users. Agreed! Holding them liable for everything would destroy internet usability and social media as we know them.

    But once the platforms decide to chase profits by making user content go viral, they should lose the liability shield. This doesn't mean they'd have to give up trying to induce virality; they'd simply have to get serious about policing this one aspect of their businesses (or, if they want to take a laissez-faire approach, be prepared to lawyer up from time to time).

    I believe this one tweak would preserve the vast bulk of what people enjoy and find useful online, while significantly reducing the harm.

    1. Jasper_in_Boston

      "That goes for about 99% of the time Congress has the opportunity to wade into complex policy debates."

      erm, that should've been "...the Supreme Court has the opportunity..." I have no problem with Congressional formulation of policy detail. That's their job!

  18. Crissa

    Recommendations really shouldn't be covered under 230 unless the users created the tags it's sorting by.

    Google made the tags, Google should be responsible for them. Why would they suddenly be not responsible when they're basically playing ransom note with videos?
