I don't subscribe to Bloomberg so I can't read all of Tyler Cowen's column this week about artificial intelligence. But here's an excerpt from his blog. The topic is ChatGPT, an example of a Large Language Model:
I’ve started dividing the people I know into three camps: those who are not yet aware of LLMs; those who complain about their current LLMs; and those who have some inkling of the startling future before us. The intriguing thing about LLMs is that they do not follow smooth, continuous rules of development. Rather they are like a larva due to sprout into a butterfly.
I don't agree with Tyler about everything, but I sure do about this. And I suspect there are going to be some more converts when v4.0 of ChatGPT is released.
I always get lots of pushback when I write about how AI is on a swift upward path. The most sophisticated criticism focuses on the underlying technology: Moore's Law is dead. Deep learning has scalability issues. Machine learning in general has fundamental limits. LLMs merely mimic human speech using correlations and pattern matching. Etc.
But who cares? Even if I stipulate that all of this is true, it just means that AI researchers are constantly inventing new kinds of models when the older ones hit a wall. How else did you suppose that advances in AI would happen?
In the case of ChatGPT I reject the criticisms anyway. It's not yet capable of college-level speech and perception, but neither was the Model T as good as a Corvette. It's going to get better very quickly. And the criticism that it "mimics" human speech without true understanding is laughable. That's what most humans do too. And in any case, who cares if it has "true" understanding or consciousness? If it starts cranking out sonnets better than Shakespeare's or designing better moon rockets than NASA, then it's as useful as a human being regardless of what's going on inside. Consciousness is overrated anyway.
The really interesting thing about LLMs is what they say about which jobs are going to be on the AI chopping block first. Most people have assumed that AI would take low-level jobs first, and then, as it got smarter, would start taking away higher-income jobs.
But that may not be the case. One of the hardest things for AI to do, for example, is to interact with the real world and move around in it. This means that AI is more likely to become a great lawyer than a great police officer. In fact, I wouldn't be surprised if it takes only a few years for AI to put lawyers almost entirely out of business, except for the 10% or so who are courtroom lawyers. And even that 10% will go away shortly afterward, since their interaction with the real world is fairly constrained and formalized.
Driverless cars, in contrast, are hard because the controlling software has to deal with a vast and complicated slice of the real world. And even so they're making good progress if you can rein in your contempt for their (obvious and expected) limitations at this point in their development. AI will have similar difficulties with ditch digging, short order cooking, plumbing, primary education, etc.
It will have much less difficulty with jobs that require a lot of knowledge but allow it to interact mostly with the digital world. This includes law, diagnostic medicine, university teaching, writing of all kinds, and so forth.
The bottom line, whether you personally choose to believe it or not, is that AI remains on an exponential growth path—and that's true of both hardware and software. In ten or fifteen years its capabilities will be nearly a thousand times greater than they are today. Considering where we are now, that should either scare the hell out of you or strike you with awe at how human existence is about to change.
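The "nearly a thousand times" figure is just doubling arithmetic. As a minimal sketch, assuming a capability doubling time somewhere between one and one and a half years (a hypothetical Moore's-Law-style rate, not a measured one):

```python
def growth_factor(years, doubling_time_years=1.0):
    """Cumulative capability multiplier after `years`,
    given an assumed doubling time in years."""
    return 2 ** (years / doubling_time_years)

# Doubling every year: ten years of growth
print(growth_factor(10))        # 1024.0 -- roughly a thousandfold

# Doubling every 1.5 years: fifteen years of growth
print(growth_factor(15, 1.5))   # 1024.0 -- same thousandfold, slower pace
```

Either assumed pace lands on a factor of about a thousand over the ten-to-fifteen-year window, which is all the claim in the paragraph above requires.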