Matt Yglesias wrote recently on his Substack about the Effective Altruism movement, but I think he makes a serious mistake in his historiography.
The EA movement began with a group of rationalists and philosophers who argued we were doing charity all wrong. Instead of wasting your money on, say, a new gym for your alma mater or a food bank for your middle-class neighborhood, they say you should donate to organizations that demonstrably provide the biggest bang for the buck when it comes to saving and improving lives: mosquito netting, deworming treatments, and clean water in impoverished regions, for example. Obviously there are plenty of things to argue about here, and EA advocates happily argue about all of them.
EA is a thriving movement and a valuable one. Like all movements, however, it suffers from the occasional crank who takes things too far: we should give all our money to longevity research, and here's a 300-page white paper showing an expected gain of 10 billion quality-adjusted life years (QALYs) with a probability of 0.034. Etc.
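In case the mock precision isn't obvious, the arithmetic behind that kind of claim is just expected value: multiply the payoff by the probability of getting it. Here's a minimal sketch in Python, using the parody's numbers rather than anyone's real estimates:

```python
# Expected-value arithmetic of the hypothetical white paper above.
# These figures come from the parody, not from any real analysis.
qalys_if_successful = 10_000_000_000   # claimed payoff: 10 billion QALYs
probability_of_success = 0.034         # claimed probability

expected_qalys = qalys_if_successful * probability_of_success
print(f"{expected_qalys:,.0f}")        # 340,000,000 expected QALYs
```

Three hundred pages to dress up one multiplication.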
Now, there's also an entirely different movement that originated with artificial intelligence theorist Eliezer Yudkowsky and the LessWrong community. This movement is also rationalist-based, which means it overlaps with the EA community, but LessWrong is, um, considerably farther from the mainstream. It's best known for its concern that artificial intelligence is more dangerous than we think.
The most common illustration of how AI could be dangerous is a self-replicating robot that plays chess. The only thing it cares about is getting better and better at chess, and it correctly deduces that this requires more computing power. As a result, it starts breaking down the Earth to provide feedstock for building more processors, then moves on to the rest of the solar system. Result: the end of humanity.
There are other, more sophisticated scenarios in the LessWrong arsenal, though all of them seem to rely on awfully stupid versions of intelligence. I find it vanishingly unlikely that any of them would be created by ordinary AI teams, but it's certainly possible that one could be created by a malevolent AI genius. At that point it comes down to an arms race, just as it already has in the arenas of nuclear bombs, man-made viruses, and chemical weapons. There's nothing new here, folks.
Bottom line: Effective Altruism is a worthy movement even if true believers and lunatic philosophers like William MacAskill piss all over their own creation with ridiculous doctrines like longtermism. That doctrine says that truly effective altruism should value the trillions of humans who might live in the future far more than the mere billions of us who are alive today. This is both analytically barren and empirically impoverished. Can you imagine teaching this doctrine to a race that has trouble passing the marshmallow test? The mind reels.
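For what it's worth, the expected-value logic that makes longtermism seem compelling on paper is easy to reproduce. Here's a back-of-the-envelope sketch; the future headcount and the probability are illustrative assumptions on my part, not figures from MacAskill:

```python
# Back-of-the-envelope longtermist arithmetic. The future population and
# the probability are illustrative assumptions, not real estimates.
present_people = 8_000_000_000          # roughly everyone alive today
potential_future_people = 10**13        # "trillions" of possible future humans

# If future people count equally, even a 0.1% chance of helping them
# outweighs a guaranteed benefit to everyone currently living.
p_help_future = 0.001
expected_future_beneficiaries = p_help_future * potential_future_people

print(expected_future_beneficiaries > present_people)  # True: 10 billion > 8 billion
```

Every input on the future side of that ledger is unfalsifiable, which is roughly what "empirically impoverished" means.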
Anyway, both historically and practically you should treat Effective Altruism and longtermism as separate things even if there's a bit of overlap in their adherents. The former is good stuff if you take it in bite-sized chunks. The latter is basically crackpotism.