Why did the board of OpenAI—apparently out of the blue—fire CEO Sam Altman last week? No one has yet provided a definitive account of what happened, but the leading guess is that it was related to the possible development of super-intelligent AI. The board felt that Altman was barreling ahead toward this goal without giving sufficient thought to safety issues and was unwilling to accept their calls to slow things down. Eventually, they felt they had no option left but to get rid of him.
Maybe. But was OpenAI really anywhere near the creation of super AI? A Reuters dispatch says yes:
[Two sources say that] several staff researchers sent the board of directors a letter warning of a powerful artificial intelligence discovery that they said could threaten humanity.
....According to one of the sources, long-time executive Mira Murati mentioned the project, called Q*, to employees on Wednesday and said that a letter was sent to the board prior to this weekend's events.... Given vast computing resources, the new model was able to solve certain mathematical problems, the person said on condition of anonymity because they were not authorized to speak on behalf of the company. Though only performing math on the level of grade-school students, acing such tests made researchers very optimistic about Q*’s future success, the source said.
This doesn't sound like a civilization-ending breakthrough, but I guess at least a few people thought it might be.
That might be hard to understand unless you're familiar with the cultish Silicon Valley fear that AI could eventually destroy us all. This fear mostly centers on the possibility of "misalignment," that is, a super AI that isn't aligned with human goals. The problem—or so the story goes—is that such an AI could develop goals of its own and feverishly set about implementing them, destroying everything in its path.
I've never thought this was very plausible because it presupposes human-like emotions: goal seeking, obsession, dominance, and so forth. But those emotions are the result of millions of years of evolution. There's no reason to think that an AI will develop them in the absence of evolution.
It also presupposes that this supposedly super-intelligent AI is pretty dumb. Surely something that's genuinely super-intelligent would have the sense to recognize reasonable limits on its goals, and to recognize how competing goals affect one another?
The weirdest part of this is that there's no need for such outré fears. The real problem with a super-intelligent AI is that it might be perfectly aligned with human goals, just the goals of the wrong human. Would you like a bioweapon that can destroy humanity? Yes sir, Mr. Terrorist, here's the recipe. There are dozens of wildly dangerous scenarios based on the simple and plausible notion that bad actors with super AI at their command are the real problem.
In any case, there will be lots of competing AIs, not just one. So if one terrorist can create a deadly virus, the good guys can presumably create a cure. This is the truly likely future: humans acting the way humans have always acted, but with super AIs on all sides helping them out. Even so, we'll probably survive. It takes a lot to literally kill everyone on earth.
UPDATE: Some Guy on Twitter™ has this to say:
I don't know anything about this, but it seemed plausible enough to pass along.