The response of the punditocracy to ChatGPT has entranced me. I mean, here we have a tool that, judged by ordinary standards, is absolutely remarkable. It's not playing chess or Go or Jeopardy! It's a computer program that produces high-school-level text on pretty much any subject you throw at it, and it will likely produce college-level and then PhD-level text in a few more years.
That's incredible. And yet, many people take a look at ChatGPT and claim to be underwhelmed. They stroke their chins and explain to us that Large Language Models are nothing like the human brain and are merely algorithms that predict text based on some previous text. Nothing to be impressed by.
Really? Think about what that implies: a crude text-prediction algorithm can produce remarkably human-like essays. What does this say about human brains and the algorithms we use?
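To make "crude" concrete, here's a toy bigram predictor in Python. This is my own illustration, not anything resembling the neural networks behind ChatGPT: it just remembers which word followed which in its training text and samples accordingly, which is next-word prediction in its most primitive form.

```python
import random
from collections import defaultdict

# Toy bigram model: record which words follow each word in the
# training text, then generate by repeatedly sampling a successor.
# Real LLMs are vastly more sophisticated; this is text prediction
# at its crudest, purely for illustration.

def train(text):
    model = defaultdict(list)
    words = text.split()
    for current, nxt in zip(words, words[1:]):
        model[current].append(nxt)
    return model

def generate(model, seed, length=15):
    word, output = seed, [seed]
    for _ in range(length):
        followers = model.get(word)
        if not followers:
            break  # dead end: this word never appeared mid-text
        word = random.choice(followers)
        output.append(word)
    return " ".join(output)

# Hypothetical miniature "corpus", just to show the mechanics.
corpus = ("the brain is a machine that predicts the next word "
          "and the next word predicts the brain")
print(generate(train(corpus), "the"))
```

Even something this primitive produces locally plausible word sequences. Scale the idea up by many orders of magnitude and the output starts to look like essays.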
This gets to the core of my take on artificial intelligence. One of the reasons I'm convinced that it's coming soon is that, apparently, I have a much less generous view of the modern human mind than most people do. The unappetizing fact is that our intelligence is built primarily on simpleminded algorithms that rely on crude pattern matching and unreliable induction, all resting on the foundation of our ancient lizard brain. We seldom produce anything truly original and, what's worse, modern research has made it plain that we often have no idea why we do the things we do. We think we know, but we don't. Our self-awareness is extremely unreliable.
But mine is obviously not a universal view. Today, for example, Noam Chomsky and two other researchers say this about machine learning models like ChatGPT:
We know from the science of linguistics and the philosophy of knowledge that they differ profoundly from how humans reason and use language. These differences place significant limitations on what these programs can do, encoding them with ineradicable defects.
This almost makes me weep. What do these guys think about the human brain? Isn't it clear that it too has significant limitations and ineradicable defects? There's hardly any other conclusion you could possibly draw from 10,000 years of human civilization.
Then there's this about machine learning programs:
Their deepest flaw is the absence of the most critical capacity of any intelligence: to say not only what is the case, what was the case and what will be the case — that’s description and prediction — but also what is not the case and what could and could not be the case. Those are the ingredients of explanation, the mark of true intelligence.
The authors go on to talk about theories of gravity, and it's true that ChatGPT has not independently recreated Newtonian dynamics or general relativity. (And it won't anytime soon, since, oddly, one of ChatGPT's current weak spots is arithmetic.)
But I don't understand why the authors think that causal explanation, as opposed to simple description, is flatly impossible not just for ChatGPT, but for the entire universe of similar computer models. There's an implicit assumption here that the only way to think in sophisticated terms is to do it the way we humans do. But that's not right. In fact, we humans think very poorly, which is hardly surprising since our brains were built by blind forces of natural selection that eventually produced a machine that was pretty good at things like gossip and hunting in groups but not much else. We have since figured out how to use this machine for solving differential equations and writing sonnets—but only barely. No one should be surprised if we build AIs that work entirely differently and can think far better and more efficiently than we do. When we want to fly somewhere, after all, we don't build airplanes that flap their wings to take off.
Moral of the story: our brains really aren't that great. They're a couple of notches better than a chimpanzee's brain, and this allows us to produce some remarkable stuff. But this brain also requires massive training to read simple text, do simple arithmetic, overcome its desire to kill anything coded as a threat, and just generally get through life with even modest levels of rationality. Can we produce something better than this? I sure as hell hope so.