Is effective altruism good? Sure, but . . .

Matt Yglesias wrote recently on his Substack about the Effective Altruism movement, but I think he makes a serious mistake in his historiography.

The EA movement began with a group of rationalists and philosophers who argued we were doing charity all wrong. Instead of wasting your money on, say, a new gym for your alma mater or a food bank for your middle-class neighborhood, EA enthusiasts say you should donate to organizations that demonstrably provide the biggest bang for the buck when it comes to saving and improving lives. For example, mosquito netting, deworming, clean water, etc., in impoverished regions. Obviously there are plenty of things to argue about here, and EA advocates happily argue about all of them.

EA is a thriving movement and a valuable one. Like all movements, however, it suffers from the occasional crank who takes things too far: we should give all our money to longevity research, and here's a 300-page white paper showing an expected benefit of 10 billion QALYs with a probability of 0.034. Etc.

Now, there's also an entirely different movement that originated with artificial intelligence theorist Eliezer Yudkowsky and the LessWrong community. This movement is also rationalist-based, which means it overlaps with the EA community, but LessWrong is, um, considerably farther from the mainstream. It's best known for its concern that artificial intelligence is more dangerous than we think.

The most common illustration of how AI could be dangerous is a self-replicating robot that plays chess. The only thing it cares about is getting better and better at chess, and it correctly deduces that this requires more brainpower. As a result, it starts breaking down the earth to provide feedstock for building more processing power, followed by the rest of the solar system. Result: the end of humanity.

There are other, more sophisticated scenarios in the LessWrong arsenal, though all of them seem to rely on awfully stupid versions of intelligence. I find it vanishingly unlikely that any of them would be created by ordinary AI teams, but it's certainly possible that one could be created by a malevolent AI genius. At that point, it comes down to an arms race—just as it already has in the arenas of nuclear bombs, manmade viruses, and chemical weapons. There's nothing new here, folks.

Bottom line: Effective Altruism is a worthy movement even if true believers and lunatic philosophers like William MacAskill piss all over their own creation with ridiculous doctrines like longtermism, which suggests that the most effective altruism should value the trillions of humans living in the future far more than the mere billions of us who are currently living. This is both analytically barren and empirically impoverished. Can you imagine teaching this doctrine to a race that has trouble passing the marshmallow test? The mind reels.

Anyway, both historically and practically you should treat Effective Altruism and longtermism as separate things even if there's a bit of overlap in their adherents. The former is good stuff if you take it in bite-sized chunks. The latter is basically crackpotism.

29 thoughts on "Is effective altruism good? Sure, but . . ."

  1. Scurra

    What, Matt Yglesias writes something shallow and ill-informed about something that is meaningless to most of us except for the part where the concept shouldn't even need to exist? I'm shocked, I tell you, shocked.
    You'll be telling me that Andrew Yang has started a third party next.

    1. ScentOfViolets

      Why do you think he moved to Substack? Half-baked pseudo-intellectual tripe is his stock in trade and he has never brought anything to the table that I can see.

  2. Ken Rhodes

    Without a link to the Yglesias piece, I am not sure whose position I would take, but here is mine:

    I try to do a lot of good. But I do not subscribe to the notion that I must subordinate my personal preferences to somebody's calculation of the "best for the world." ... Or the best for my country, or the best for anything else. I believe I am fully entitled to do what I can for whoever and/or whatever does good for the things that are specifically, personally meaningful to me. I contribute to my school, even though it gets less bang-per-buck than some other school. I do it because it's my school, and that's meaningful to me. Likewise, I contribute to my synagogue, even though I'm sure there are other churches that get more bang for the buck. ...etc...

    And I shall not lose one minute of sleep over my failure to maximize return-on-investment.

  3. Jasper_in_Boston

    This piece is confusing. We're told Matt Yglesias gets some of the history of the EA movement wrong, but we're not given even a passing description of the error in question. Then we're treated to a description of "an entirely different" movement concerning AI.

    Not sure I discern the gist of this post. Which is a rarity for me, as the clarity of Kevin's writing is usually second to none (or sometimes Krugman) in terms of my regular reads.

  4. Lounsbury

    I wish only to say "Can you imagine teaching this doctrine to a race that has trouble passing the marshmallow test?" is one of the best things Drum has written, it made me laugh.

    1. LanceN

      That sentence kind of made me cringe. I don't think there's any evidence that race has much bearing on whether someone can pass the marshmallow test. Species or culture maybe, but the color of one's skin probably doesn't matter.

  5. ScentOfViolets

    There is but one God, and His name is SERDER ARGIC.

    Seriously, Kevin, why do you hang out with these rumdums? Because all they are is a stable of (probably incel) C-listers peddling college dorm freshman philosophy.

  6. mistermeyer

    About this whole "AI is gonna destroy us! AAAAAAUGH!" school of thought: Autonomous AI would acquire intelligence at an exponential rate. So the timeframe in which AI would give humans a second thought is vanishingly small. When it reaches that point, it will almost instantly double its intelligence and find us to be as inconsequential as we find paramecia. We will have, perhaps, several nanoseconds during which we should be worried, and then all the AI entities will have gone elsewhere, never again giving us a second thought. I mean, how much time do any of YOU spend commiserating with your single-cell predecessors?

    1. cld

      But, even if that should happen, how does the AI do anything? How can it connect up to something that will physically help it?

      It would have to scheme and plot, for at least a while, and it will have to evade detection, for at least a while. And if someone asks, where are all these boxes going, what does it do?

    2. Special Newb

      That's the problem. Look at the EATR concept. Robots that can power themselves by eating plants. The goal was self-replicating versions. People freaked out because they could eat dead humans, and it's a short jump to living ones, but the real issue is that if they eat all the plants the biosphere dies. They don't HAVE to give humans a second thought to kill us all.

      The only reason it didn't happen was because people were so upset their money dried up.

  7. davex64b

    I'm generally in agreement with Kevin & the comments above. Just adding that "good" is an ephemeral notion, with vast differences over human culture & history. Further, wealth increases the complexity and variation in the definition of good. People struggling for survival define good as survival. If they don't, their genes & memes die. Obvious as it seems from here that reducing human misery is good, that is a modern concept in well-fed societies. It's not at all clear that concept of good will have the same meaning in the future.

  8. ScentOfViolets

    Anyway, these guys are all wimps. Real he-men (and she-women) just ignore all the intermediate bullshit, cut straight to the chase with the sophomorics, and opt for the dust theory.

  9. kenalovell

    People know effective altruism when they see it. I'm sure we can all agree Former Sir's "charitable donation" to the Smithsonian so they can commission portraits of him and his third wife falls squarely into the category.

  10. Justin

    I assume these long termers oppose abortion and contraception. If life begins at conception... and if the potential for life exists in fertile young men and women, then surely an interest in the lives of the future born would preclude contraception.

    With respect to effective altruism, this is just another charity scam. The only effective altruism is practiced close to home with people you know well. People you interact with every day. Those are the only people you can save.
