In other words, the answer is yes to both questions. People are concerned about their actual conditions and social media is making it worse.
But that's wrong! The answer is no to both questions. The evidence is overwhelming that actual conditions are pretty good and that, generally speaking, people know it.
But they are being overwhelmed by reports telling them how bad things are. And while it's fashionable these days to blame everything like this on social media, Facebook isn't at fault here. Fox News is, along with the rest of the conservative noise machine. You can't get through a day of Fox News without hearing at least a dozen times that Joe Biden has wrecked the economy and produced ruinous inflation.
Why does it take so long to hammer this into people's heads? We know people aren't very upset about their personal situations because we've asked them. And we know that Fox News and the rest of the gang are infinitely more influential than Twitter or TikTok, especially among anyone over the age of 20. Come on, folks.
It's the Holy Grail! The price of a Big Mac isn't just rising more slowly; it's actually gone down.
In Bucks County, Pennsylvania, a hard-right school board filled with Moms for Liberty members was swept out of office in November. But at the last minute they decided in their final board meeting to show their appreciation for the school superintendent who had backed their agenda:
A Pennsylvania school board that banned books, Pride flags and transgender athletes slipped a last-minute item into their final meeting before leaving office, hastily awarding a $700,000 exit package to the superintendent who supported their agenda.
But wait! That's not all:
The package also includes a puzzling ban on any district investigations of his tenure and an agreement that he can keep his district-issued laptop as long as he wipes it of school records. U.S. District Judge Timothy Savage nixed that last provision on Friday when he ordered Lucabaugh, a defendant in middle school teacher Andrew Burgess’s retaliation suit against the district, to preserve documents that may become evidence in the case.
This sure seems like a huge red siren that says "Investigate me!" It's literally an order to destroy evidence. If I were a nearby US Attorney I'd be on the next train to Doylestown.
Do you care about the Dutch elections? No? Fine. I'll keep this short.
All the news is about Geert Wilders, the anti-Muslim right-wing nationalist who gained 20 seats in this week's elections. This is a big deal, no question. It was driven partly by immigration fears and partly by the fact that the current centrist coalition has been in power for a dozen years.
Still, even after big gains, the Dutch right-wing parties as a whole ended up with only about 79 seats.¹ If every one of them entered a coalition, that would be enough for a majority of the 150-seat parliament. But the second-biggest party on the right, the CDA, won 20 seats and has firmly ruled out cooperation with Wilders. At most, then, Wilders could put together a coalition of about 59 seats.
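The seat math here is simple enough to spell out. A quick sketch using the post's approximate numbers (the 79-seat right-wing total is a rough count, not an official tally):

```python
# Coalition arithmetic for the 150-seat Dutch parliament, using the
# post's approximate figures (79 right-wing seats is a rough count).
total_seats = 150
majority = total_seats // 2 + 1          # 76 seats needed for a majority

right_wing_total = 79                    # all right-wing parties combined
cda = 20                                 # second-biggest right party, won't join Wilders

print(majority)                          # 76
print(right_wing_total >= majority)      # True: a united right could govern
print(right_wing_total - cda)            # 59: Wilders's bloc without the CDA falls short
```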
Except that the old ruling party of the center-right, the VVD, has hinted that it might cooperate with Wilders as long as he's not prime minister. That would put his party back into contention.
Alternatively, the center and leftish parties plus CDA could put together a majority and shut out Wilders completely. So it's possible that nothing will change at all. We'll probably know in a year or so, given how long it takes the Dutch to put together coalitions these days.
¹"About" because the Netherlands has a lot of parties and some of them are tricky to categorize.
Why did the board of OpenAI—apparently out of the blue—fire CEO Sam Altman last week? No one has yet provided a definitive account of what happened, but the leading guess is that it was related to the possible development of super-intelligent AI. The board felt that Altman was barreling ahead toward this goal without giving sufficient thought to safety issues and was unwilling to accept their calls to slow things down. Eventually, they felt they had no option left but to get rid of him.
[Two sources say that] several staff researchers sent the board of directors a letter warning of a powerful artificial intelligence discovery that they said could threaten humanity.
....According to one of the sources, long-time executive Mira Murati mentioned the project, called Q*, to employees on Wednesday and said that a letter was sent to the board prior to this weekend's events.... Given vast computing resources, the new model was able to solve certain mathematical problems, the person said on condition of anonymity because they were not authorized to speak on behalf of the company. Though only performing math on the level of grade-school students, acing such tests made researchers very optimistic about Q*’s future success, the source said.
This doesn't sound like a civilization-ending breakthrough, but I guess at least a few people thought it might be.
That might be hard to understand unless you're familiar with the cultish Silicon Valley fear that AI could eventually destroy us all. This fear mostly centers on the possibility of "misalignment," that is, a super AI that isn't aligned with human goals. The problem—or so the story goes—is that such an AI could develop goals of its own and feverishly set about implementing them, destroying everything in its path.
I've never thought this was very plausible because it presupposes human-like drives: goal seeking, obsession, dominance, and so forth. But those drives are the result of millions of years of evolution. There's no reason to think that an AI will develop them in the absence of evolution.
It also presupposes that this supposedly super-intelligent AI is pretty dumb. Surely something that's super-intelligent would have the sense to recognize reasonable limits on its goals? And would also recognize how competing goals affect each other.
The weirdest part of this is that there's no need for such outré fears. The real problem with a super-intelligent AI is that it might be perfectly aligned with human goals—but with the wrong human. Would you like a bioweapon that can destroy humanity? Yes sir, Mr. Terrorist, here's the recipe. There are at least dozens of wildly dangerous scenarios based on the simple—and plausible—notion that bad actors with super AI at their command are the real problem.
In any case, there will also be lots of competing AIs, not just one. So if one terrorist can create a deadly virus, the good guys can presumably create a cure. This is the truly likely future: humans acting the way humans have always acted, but with super AIs on all sides helping them out. And we'll probably survive: it takes a lot to literally kill everyone on earth.
UPDATE: Some Guy on Twitter™ has this to say:
With Q*, OpenAI have likely solved planning/agentic behavior for small models
Scale this up to a very large model and you can start planning for increasingly abstract goals
National Review points me today to a new study that examines birth rates before and after the Supreme Court's Dobbs decision. The study uses a "pre-registered synthetic difference-in-differences design applied to newly released provisional natality data for the first half of 2023." Here's the result:
The authors note that through 2022, states that now have abortion bans had very similar birth rates to states without bans. But in 2023 the relative rate spiked by 2.3%.
The whole "pre-registered synthetic difference-in-differences design" sounds far too complicated for me. So instead I just looked at births over the past three years:
I dunno. If states with abortion bans had higher fertility rates while other states stayed the same, you'd expect higher overall fertility. Instead it's lower. So it's unclear what effect Dobbs actually had overall.
But I'll say a couple of things. First, I don't really need a study to believe that births went up in states that banned abortion. It would be fairly stunning if that didn't happen, at least a little bit. Nevertheless, the evidence suggests that the total number of abortions didn't change after Dobbs.
Second, this is all meaningless anyway. There are far too few data points in this noisy series to draw any conclusions: the study covers only the first six months of 2023, compared against the first six months of earlier years. What's more, we're coming off a pandemic, and who knows how that affected differential birth rates at the state level? Any comparison of 2023 with 2022 is all but impossible.
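For what it's worth, the arithmetic at the core of a difference-in-differences comparison is simpler than the jargon suggests: take the change in the ban states and subtract the change in the non-ban states. A minimal sketch with made-up numbers (these are illustrative only, chosen to mirror the study's 2.3% headline figure, not the study's actual data):

```python
# Difference-in-differences, stripped to its core. All numbers are
# hypothetical, picked only to produce a 2.3% effect for illustration.

# Average monthly births before and after Dobbs (made up):
ban_before, ban_after = 10_000, 10_230      # states with abortion bans
ctrl_before, ctrl_after = 10_000, 10_000    # states without bans

# Percent change within each group...
ban_change = (ban_after - ban_before) / ban_before
ctrl_change = (ctrl_after - ctrl_before) / ctrl_before

# ...and the difference between those changes is the estimated effect.
effect = ban_change - ctrl_change
print(f"{effect:.1%}")                      # 2.3%
```

The point of subtracting the control group's change is to strip out whatever was happening to birth rates everywhere (pandemic recovery, economic shifts) and leave only the part specific to the ban states.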
I went up to Palomar Mountain last night to do a bit of astrophotography testing, but when I got there I encountered gale-force winds even though the weather report—as I was standing there watching the trees bend—told me the wind was blowing at 7 mph. Stupid weather forecasters.
Plus the moon was up, which made things worse. In any case, my deep sky pictures were a mess. The wind prevented the guider from working, so I mostly got lots of blobby and streaky photos.
However, I did get a nice picture with my regular camera, so that's what you get instead. The lights in the distance are from Oceanside.
Tom Edsall interviews a bunch of mental health folks today about Donald Trump's recent behavior. Here's one of them:
“Trump is an aging malignant narcissist,” Aaron L. Pincus, a professor of psychology at Penn State, wrote in an email. “As he ages, he appears to be losing impulse control and is slipping cognitively. So we are seeing a more unfiltered version of his pathology. Quite dangerous.”
In addition, Pincus continued, “Trump seems increasingly paranoid, which can also be a reflection of his aging brain and mental decline.”
This seems about right. Hardly anyone seems willing to say it, though, even though it's become more and more obvious over the past year.
I just read another review of Ridley Scott's Napoleon, and it made me wonder yet again about biopics. They are always "based on," which is a nice way of saying that they routinely lie about whoever's life they're selling. I gather that Napoleon is especially egregious on this front.
But why? Popular biographies in book form don't do this and are still big sellers. Why do movies have to do it?
I'm not talking about the need to invent dialog where no record exists. As long as that dialog stays faithful to what's known, it's fine. I'm not even talking about compressing real events. A two-hour movie has limits on how long a scene can run.
But what's the point of putting people where they never were? Or having things happen at the wrong place and time? Or deliberately inventing dialog that was never even remotely said? Or making supporting characters into people they never were? Or inventing motivations that never existed?
Is it really impossible to make an entertaining biopic that's 99% faithful to the truth? Maybe it is. It's not like I've ever tried. But I still wonder.
Why does so much OpenAI drama happen late at night?
We have reached an agreement in principle for Sam Altman to return to OpenAI as CEO with a new initial board of Bret Taylor (Chair), Larry Summers, and Adam D'Angelo.
We are collaborating to figure out the details. Thank you so much for your patience through this.