Matt Yglesias wrote recently on his Substack about the Effective Altruism movement, but I think he makes a serious mistake in his historiography.
The EA movement began with a group of rationalists and philosophers who argued we were doing charity all wrong. Instead of wasting your money on, say, a new gym for your alma mater or a food bank for your middle-class neighborhood, EA enthusiasts say you should donate to organizations that demonstrably provide the biggest bang for the buck when it comes to saving and improving lives. For example, mosquito netting, deworming, clean water, etc., in impoverished regions. Obviously there are plenty of things to argue about here, and EA advocates happily argue about all of them.
EA is a thriving movement and a valuable one. Like all movements, however, it suffers from the occasional crank who takes things too far. We should give all our money to longevity research, and here's a 300-page white paper showing expected efficacy of 10 billion QALYs with a probability of 0.034. Etc.
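(For the curious, the crank's pitch runs on simple expected-value arithmetic. Here's a minimal Python sketch using the made-up figures from the hypothetical white paper above; nothing in it is a real estimate.)

```python
# A minimal sketch of the expected-value arithmetic behind the
# hypothetical 300-page white paper above. Both inputs are the
# made-up figures from the example, not real estimates.
qalys_if_it_works = 10_000_000_000  # claimed payoff: 10 billion QALYs
p_success = 0.034                   # claimed probability of success

expected_qalys = p_success * qalys_if_it_works
print(f"{expected_qalys:,.0f}")  # 340,000,000

# The trick: a small probability times an astronomical payoff still
# yields a number that "beats" any bed-net program on paper.
```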
Now, there's also an entirely different movement that originated with artificial intelligence theorist Eliezer Yudkowsky and the LessWrong community. This movement is also rationalist-based, which means it overlaps with the EA community, but LessWrong is, um, considerably farther from the mainstream. It's best known for its concern that artificial intelligence is more dangerous than we think.
The most common illustration of how AI could be dangerous is a self-replicating robot that plays chess. The only thing it cares about is getting better and better at chess, and it correctly deduces that this requires more brainpower. As a result, it starts breaking down the earth to provide feedstock for building more processing power, followed by the rest of the solar system. Result: the end of humanity.
There are other, more sophisticated scenarios in the LessWrong arsenal, though all of them seem to rely on awfully stupid versions of intelligence. I find it vanishingly unlikely that any of them would be created by ordinary AI teams, but it's certainly possible that one could be created by a malevolent AI genius. At that point, it comes down to an arms race—just as it already has in the arenas of nuclear bombs, manmade viruses, and chemical weapons. There's nothing new here, folks.
Bottom line: Effective Altruism is a worthy movement even if true believers and lunatic philosophers like William MacAskill piss all over their own creation with ridiculous doctrines like longtermism, which suggests that the most effective altruism should value the trillions of humans living in the future far more than the mere billions of us who are currently living. This is both analytically barren and empirically impoverished. Can you imagine teaching this doctrine to a race that has trouble passing the marshmallow test? The mind reels.
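(To see why this is analytically barren, here's a rough sketch of the arithmetic longtermism leans on. The future-population figure is an illustrative assumption; longtermist writers use numbers anywhere from trillions on up, and none of these values come from MacAskill directly.)

```python
# Rough sketch of the longtermist expected-value argument mocked
# above. The future-population figure is an assumption chosen for
# illustration only.
current_humans = 8_000_000_000       # roughly today's population
future_humans = 10_000_000_000_000   # assumed: 10 trillion potential future people

# If every life counts equally and the future isn't discounted,
# shaving even 0.1% off extinction risk "saves" more lives in
# expectation than anything you could do for people alive today.
expected_lives_saved = 0.001 * future_humans
print(f"{expected_lives_saved:,.0f}")         # 10,000,000,000
print(expected_lives_saved > current_humans)  # True
```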
Anyway, both historically and practically you should treat Effective Altruism and longtermism as separate things even if there's a bit of overlap in their adherents. The former is good stuff if you take it in bite-sized chunks. The latter is basically crackpotism.
What, Matt Yglesias writes something shallow and ill-informed about something that is meaningless to most of us except for the part where the concept shouldn't even need to exist? I'm shocked, I tell you, shocked.
You'll be telling me that Andrew Yang has started a third party next.
Why do you think he moved to Substack? Half-baked pseudo-intellectual tripe is his stock in trade and he has never brought anything to the table that I can see.
*chef's kiss*
Without a link to the Yglesias piece, I am not sure whose position I would take, but here is mine:
I try to do a lot of good. But I do not subscribe to the notion that I must subordinate my personal preferences to somebody's calculation of the "best for the world." ... Or the best for my country, or the best for anything else. I believe I am fully entitled to do what I can for whoever and/or whatever does good for the things that are specifically, personally meaningful to me. I contribute to my school, even though it gets less bang-per-buck than some other school. I do it because it's my school, and that's meaningful to me. Likewise, I contribute to my synagogue, even though I'm sure there are other churches that get more bang for the buck. ...etc...
And I shall not lose one minute of sleep over my failure to maximize return-on-investment.
This piece is confusing. We're told Matt Yglesias gets some of the history of the EA movement wrong, but we're not given even a passing description of the error in question. Then we're treated to a description of "an entirely different" movement concerning AI.
Not sure I discern the gist of this post. Which is a rarity for me, as the clarity of Kevin's writing is usually second to none (or sometimes Krugman) in terms of my regular reads.
I've not read Matt's piece, but did come across another one recently. Kevin's take-home message is the last paragraph--"effective altruism" is different from "longtermism".
https://www.salon.com/2022/08/20/understanding-longtermism-why-this-suddenly-influential-philosophy-is-so/
Not sure why it's getting write-ups now, but it is. In the Salon article, they talk about digital people living in computer simulations...and it gets weird from there.
Thanks Kevin. Here is my take on Longtermism, which did not get a lot of love from the EA community:
https://forum.effectivealtruism.org/posts/Cuu4Jjmp7QqL4a5Ls/against-longtermism-i-welcome-our-robot-overlords-and-you
I wish only to say "Can you imagine teaching this doctrine to a race that has trouble passing the marshmallow test?" is one of the best things Drum has written, it made me laugh.
That sentence kind of made me cringe. I don't think there's any evidence that race has much bearing on whether someone can pass the marshmallow test. Species or culture maybe, but the color of one's skin probably doesn't matter.
There is but one God, and His name is SERDAR ARGIC.
Seriously Kevin, why do you hang out with these rumdums? Because all they are is a stable of (probably incel) C-listers peddling college-dorm freshman philosophy.
Yglesias has kids. Some woman thought he was worth it.
Yes, she did, once.
This incel FUCKS.
About this whole "AI is gonna destroy us! AAAAAAUGH!" school of thought: Autonomous AI would acquire intelligence at an exponential rate. So the timeframe in which AI would give humans a second thought is vanishingly small. When it reaches that point, it will almost instantly double its intelligence and find us to be as inconsequential as we find paramecia. We will have, perhaps, several nanoseconds during which we should be worried, and then all the AI entities will have gone elsewhere, never again giving us a second thought. I mean, how much time do any of YOU spend commiserating with your single-cell predecessors?
Guaranteed to double your intelligence or no money back!
But, even if that should happen, how does the AI do anything? How can it connect up to something that will physically help it?
It would have to scheme and plot, for at least a while, and it would have to evade detection, for at least a while. And if someone asks where all these boxes are going, what does it do?
Call a lawyer.
That's the problem. Look at the EATR concept: robots that can power themselves by eating plants. The goal was self-replicating versions. People freaked out because they could eat dead humans, and it's a short jump from there to living ones, but the real issue is that if they eat all the plants, the biosphere dies. They don't HAVE to give humans a second thought to kill us all.
The only reason it didn't happen was that people were so upset the funding dried up.
Where can I buy one of them thar 'self-replicating' robots?
Soylent Green!!!
I'm generally in agreement with Kevin & the comments above. Just adding that "good" is an ephemeral notion, with vast differences across human culture & history. Further, wealth increases the complexity and variation in the definition of good. People struggling for survival define good as survival. If they don't, their genes & memes die. Obvious as it seems from here that reducing human misery is good, that is a modern concept in well-fed societies. It's not at all clear that the concept of good will have the same meaning in the future.
Anyway, these guys are all wimps. Real he-men (and she-women) just ignore all the intermediate bullshit, cut straight to the chase with the sophomorics, and opt for the dust theory.
Matthew Yglesias deserves the award for Most Asperger'sy Person Alive.
Over Matt Bruenig!?
People know effective altruism when they see it. I'm sure we can all agree Former Sir's "charitable donation" to the Smithsonian so they can commission portraits of him and his third wife falls squarely into the category.
You are, of course, referring to cuttlefish.
https://arstechnica.com/science/2021/03/cuttlefish-can-pass-the-marshmallow-test/
The marshmallow test is likely to be bullshit.
I assume these longtermers oppose abortion and contraception. If life begins at conception... and if the potential for life exists in fertile young men and women, then surely an interest in the lives of the future born would preclude contraception.
With respect to effective altruism, this is just another charity scam. The only effective altruism is practiced close to home with people you know well. People you interact with every day. Those are the only people you can save.
Great description of how others received six years for taking home and insecurely storing far fewer files, with lower classification, than Trump has evidently done:
https://www.emptywheel.net/2022/08/14/18-usc-793e-in-the-time-of-shadow-brokers-and-donald-trump/
Digging themselves in deeper as if they were clever:
https://www.rawstory.com/trump-nara/