
Do you care about the Dutch elections? No? Fine. I'll keep this short.

All the news is about Geert Wilders, the anti-Muslim right-wing nationalist who gained 20 seats in this week's elections. This is a big deal, no question. It was driven partly by immigration fears and partly by the fact that the current centrist coalition has been in power for a dozen years.

Still, even after big gains, the Dutch right-wing parties as a whole ended up with only about 79 seats.¹ If every one of them entered a coalition, that would be enough for a majority of the 150-seat parliament. But the second-biggest party on the right, NSC, won 20 seats and has firmly ruled out cooperation with Wilders. At most, then, he could put together a coalition of about 59 seats.

Except that the old ruling party on the center-something, the VVD, has hinted that it might cooperate with Wilders as long as he's not prime minister. That would put his party back into contention.

Alternatively, the center and leftish parties plus NSC could put together a majority and shut out Wilders completely. So it's possible that nothing will change at all. We'll probably know in a year or so, given how long it takes the Dutch to put together coalitions these days.

¹"About" because the Netherlands has a lot of parties and some of them are tricky to categorize.

Why did the board of OpenAI—apparently out of the blue—fire CEO Sam Altman last week? No one has yet provided a definitive account of what happened, but the leading guess is that it was related to the possible development of super-intelligent AI. The board felt that Altman was barreling ahead toward this goal without giving sufficient thought to safety issues and was unwilling to accept their calls to slow things down. Eventually, they felt they had no option left but to get rid of him.

Maybe. But was OpenAI really anywhere near the creation of super AI? A Reuters dispatch says yes:

[Two sources say that] several staff researchers sent the board of directors a letter warning of a powerful artificial intelligence discovery that they said could threaten humanity.

....According to one of the sources, long-time executive Mira Murati mentioned the project, called Q*, to employees on Wednesday and said that a letter was sent to the board prior to this weekend's events.... Given vast computing resources, the new model was able to solve certain mathematical problems, the person said on condition of anonymity because they were not authorized to speak on behalf of the company. Though only performing math on the level of grade-school students, acing such tests made researchers very optimistic about Q*’s future success, the source said.

This doesn't sound like a civilization-ending breakthrough, but I guess at least a few people thought it might be.

That might be hard to understand unless you're familiar with the cultish Silicon Valley fear that AI could eventually destroy us all. This fear mostly centers on the possibility of "misalignment," that is, a super AI that isn't aligned with human goals. The problem—or so the story goes—is that such an AI could develop goals of its own and feverishly set about implementing them, destroying everything in its path.

I've never thought this was very plausible because it presupposes human-like emotions: goal seeking, obsession, dominance, and so forth. But those emotions are the result of millions of years of evolution. There's no reason to think that an AI will develop them in the absence of evolution.

It also presupposes that this supposedly super-intelligent AI is pretty dumb. Surely something that's super-intelligent would have the sense to recognize reasonable limits on its goals? And to recognize how competing goals affect each other.

The weirdest part of this is that there's no need for such outré fears. The real problem with a super-intelligent AI is that it might be perfectly aligned with human goals—just the goals of the wrong human. Would you like a bioweapon that can destroy humanity? Yes sir, Mr. Terrorist, here's the recipe. There are dozens of wildly dangerous scenarios based on the simple—and plausible—notion that bad actors with super AI at their command are the real problem.

In any case, there will also be lots of competing AIs, not just one. So if one terrorist can create a deadly virus, the good guys can presumably create a cure. This is the likely future: humans acting the way humans have always acted, but with super AIs on all sides helping them out. We'll probably survive. It takes a lot to literally kill everyone on earth.

UPDATE: Some Guy on Twitter™ has this to say:

I don't know anything about this, but it seemed plausible enough to pass along.

National Review points me today to a new study that examines birth rates before and after the Supreme Court's Dobbs decision. The study uses a "pre-registered synthetic difference-in-differences design applied to newly released provisional natality data for the first half of 2023." Here's the result:

The authors note that up through 2022, states that now have abortion bans had very similar birth rates to states that don't. But in the first half of 2023, the birth rate in ban states spiked by 2.3% relative to the others.
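For what it's worth, here's a minimal sketch of what the plain-vanilla version of that comparison boils down to, leaving aside the "pre-registered" and "synthetic" parts: how much did birth rates change in ban states, over and above the change in non-ban states? The function and variable names below are just illustrative, and none of the numbers you might feed into it come from the study itself.

```python
# A bare-bones difference-in-differences estimate. The study's "synthetic"
# version replaces the simple control-group average with a weighted composite
# of control states, which is where most of the extra complexity comes in.

def diff_in_diff(treated_pre, treated_post, control_pre, control_post):
    """All four inputs are average birth rates (e.g., births per 1,000 women).

    treated_pre / treated_post: ban states, before and after Dobbs
    control_pre / control_post: non-ban states, before and after Dobbs
    """
    treated_change = treated_post - treated_pre   # change in ban states
    control_change = control_post - control_pre   # change in non-ban states
    return treated_change - control_change        # "extra" change in ban states
```

If the two groups really did track each other through 2022, that leftover difference is what the authors attribute to the bans.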

The whole "pre-registered synthetic difference-in-differences design" sounds far too complicated for me. So instead I just looked at births over the past three years:

I dunno. If states with abortion bans had higher fertility rates while other states stayed the same, you'd expect higher overall fertility. Instead it's lower. So it's unclear what effect Dobbs actually had overall.

But I'll say a couple of things. First, I don't really need a study to believe that births went up in states that banned abortion. It would be fairly stunning if that didn't happen, at least a little bit. Nevertheless, the evidence suggests that the total number of abortions didn't change after Dobbs.

Second, this is all meaningless anyway. There are far too few data points in this noisy series to draw any conclusions. The study compares only the first six months of 2023 to the first six months of earlier years, and that's nowhere near enough. What's more, we're coming off a pandemic, and who knows how that affected differential birth rates at the state level? Any comparison of 2023 with 2022 is all but impossible.

I went up to Palomar Mountain last night to do a bit of astrophotography testing, but when I got there I encountered gale-force winds even though the weather report—as I was standing there watching the trees bend—told me the wind was blowing at 7 mph. Stupid weather forecasters.

Plus the moon was up, which made things worse. In any case, my deep-sky pictures were a mess. The wind prevented the guider from working, so I mostly got blobby and streaky photos.

However, I did get a nice picture with my regular camera, so that's what you get instead. The lights in the distance are from Oceanside.

November 22, 2023 — Palomar Mountain, California

Tom Edsall interviews a bunch of mental health folks today about Donald Trump's recent behavior. Here's one of them:

“Trump is an aging malignant narcissist,” Aaron L. Pincus, a professor of psychology at Penn State, wrote in an email. “As he ages, he appears to be losing impulse control and is slipping cognitively. So we are seeing a more unfiltered version of his pathology. Quite dangerous.”

In addition, Pincus continued, “Trump seems increasingly paranoid, which can also be a reflection of his aging brain and mental decline.”

This seems about right. Hardly anyone is willing to say it, though, even as it's become more and more obvious over the past year.

I just read yet another review of Ridley Scott's Napoleon, and it made me wonder yet again about biopics. They are always "based on," which is a nice way of saying that they routinely lie about whoever's life they're selling. I gather that Napoleon is especially egregious on this front.

But why? Popular biographies in book form don't do this and are still big sellers. Why do movies have to do it?

I'm not talking about the need to create dialog where no record exists. As long as the invented dialog stays faithful to what's known, that's fine. I'm not even talking about compressing real events. A two-hour movie can only fit so much.

But what's the point of putting people where they never were? Or having things happen at the wrong place and time? Or deliberately inventing dialog that was never even remotely said? Or making supporting characters into people they never were? Or inventing motivations that never existed?

Is it really impossible to make an entertaining biopic that's 99% faithful to the truth? Maybe it is. It's not like I've ever tried. But I still wonder.

Why does so much OpenAI drama happen late at night?

I wish I had something insightful to say about this, but all I can do is shake my head. Larry Summers?

Last week Media Matters published a piece showing advertisements on Twitter being served up next to antisemitic posts. Elon Musk said the claim was wrong and filed a lawsuit.

As far as I'm concerned, this is exactly what anti-SLAPP laws are for, and they don't get used nearly enough. Musk's suit is nothing more than an attempt by a guy with bottomless riches to bankrupt someone he doesn't like.¹

But at least Musk and Twitter are private actors who have no special responsibility to the public. That can't be said for Ken Paxton, the attorney general of Texas and an elected officer. Nonetheless, he decided to stick his oar in:

Paxton hasn't actually done anything yet, and he probably won't. But this is still an egregious abuse of power by a public official. Every conservative organization concerned about free speech should be denouncing this. Instead, crickets.

¹Also stupid, since it will keep the whole thing in the public eye and probably alienate even more advertisers.

The more time that goes by, the sorrier I am about the failure of the 2000 Camp David Summit. It was the last shot at even a minimal chance of Middle East peace and a Palestinian state. Instead we got another intifada on one side and a reinvigorated dismantling of the West Bank on the other.

It's a damn shame, and everything happening today is a result of it. That is all.