AI is still not here. But it’s getting damn close.

I don't subscribe to Bloomberg, so I can't read all of Tyler Cowen's column this week about artificial intelligence. But here's an excerpt from his blog. The topic is ChatGPT, an example of a large language model (LLM):

I’ve started dividing the people I know into three camps: those who are not yet aware of LLMs; those who complain about their current LLMs; and those who have some inkling of the startling future before us. The intriguing thing about LLMs is that they do not follow smooth, continuous rules of development. Rather they are like a larva due to sprout into a butterfly.

I don't agree with Tyler about everything, but I sure do about this. And I suspect there are going to be some more converts when v4.0 of ChatGPT is released.

I always get lots of pushback when I write about how AI is on a swift upward path. The most sophisticated criticism focuses on the underlying technology: Moore's Law is dead. Deep learning has scalability issues. Machine learning in general has fundamental limits. LLMs merely mimic human speech using correlations and pattern matching. Etc.

But who cares? Even if I stipulate that all of this is true, it just means that AI researchers constantly invent new kinds of models when the older ones hit a wall. How else did you suppose that advances in AI would happen?

In the case of ChatGPT I reject the criticisms anyway. Its speech and perception aren't yet at a college level, but neither was the Model T as good as a Corvette. It's going to get better very quickly. And the criticism that it "mimics" human speech without true understanding is laughable. That's what most humans do too. And in any case, who cares if it has "true" understanding or consciousness? If it starts cranking out sonnets better than Shakespeare's or designing better moon rockets than NASA, then it's as useful as a human being regardless of what's going on inside. Consciousness is overrated anyway.

The really interesting thing about LLMs is what they say about which jobs will be on the AI chopping block first. Most people have assumed that AI would take low-level jobs first and then, as it got smarter, would start taking away higher-income jobs.

But that may not be the case. One of the hardest things for AI to do, for example, is to interact with the real world and move around in it. This means that AI is more likely to become a great lawyer than a great police officer. In fact, I wouldn't be surprised if it takes only a few years for AI to put lawyers almost entirely out of business unless they're among the 10% (or so) of courtroom lawyers. And even that 10% will go away shortly afterward, since their interaction with the real world is fairly constrained and formalized.

Driverless cars, in contrast, are hard because the controlling software has to deal with a vast and complicated slice of the real world. And even so, they're making good progress if you can rein in your contempt for their (obvious and expected) limitations at this point in their development. AI will have similar difficulties with ditch digging, short-order cooking, plumbing, primary education, etc.

It will have much less difficulty with jobs that require a lot of knowledge but allow it to interact mostly with the digital world. This includes law, diagnostic medicine, university teaching, writing of all kinds, and so forth.

The bottom line, whether you personally choose to believe it or not, is that AI remains on an exponential growth path—and that's true of both hardware and software. In ten or fifteen years its capabilities will be nearly a thousand times greater than they are today. Considering where we are now, that should either scare the hell out of you or strike you with awe at how human existence is about to change.
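
For a sense of what that exponential claim implies, here's a quick back-of-the-envelope sketch (my own arithmetic, not anything from the column): a thousandfold increase is about ten doublings, so reaching it in ten to fifteen years means capabilities doubling roughly every twelve to eighteen months.

    import math

    # Illustrative arithmetic only: what doubling time does "nearly a
    # thousand times greater in ten or fifteen years" imply?
    target = 1000                  # claimed growth factor
    doublings = math.log2(target)  # ~9.97 doublings to reach 1000x
    for years in (10, 15):
        print(f"{target}x in {years} years implies doubling roughly every "
              f"{years / doublings:.1f} years")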

70 thoughts on “AI is still not here. But it’s getting damn close.”

  1. Dana Decker

    KD: "I wouldn't be surprised if it takes only a few years for AI to put lawyers almost entirely out of business unless they're among the 10% (or so) of courtroom lawyers."

    Probably so. That's because lawyers are mostly using their *memories* to find the right precedent to cite, not starting from first principles and clearly written law to construct a logical argument for their case. Aren't you amazed that the recent Trump-related rulings and pleadings are chock full of precedent? From the plaintiff, the defendant, and the judge. Why is that? Apparently because laws are not written precisely and expansively, so how they're applied is up to - get this - court cases hither and yon. Trump's lawyers cite cases that are of no relevance, but because precedent is so much of the game, enormous time is wasted weeding them out of the process. And even the weeding out is subjective, so a final ruling sits on a massive pile of Jell-O.

    1. rrhersh

      The stuff you describe is how only a small fraction of practicing lawyers spend their day. True story: I have a friend who is a criminal defense lawyer. She gets a call one day about a guy who was in jail for drunk driving. Her first task is to get him out of jail. But he was from a wealthy family. A different faction of the family rejected the idea of his being represented by a lawyer like her, and so they called her off and hired an old, respected white-shoe firm. That type of lawyer, it turns out, doesn't know shit about how to get a guy out of jail. They quickly resorted to dropping names of judges, with the predictable response of being stonewalled by the cops. After about two weeks, the family brings my friend back into the case. She makes a few phone calls to people she has established personal relationships with, and the guy is back on the street in a couple of hours. I don't think she has to worry about AI taking her job.

          1. cld

            Were they cops?

            And if the white-shoe lawyers got crosswise with the cops simply by offending them, that doesn't sound like something an AI would try.

            1. different_name

              I can't even tell what you're trying to argue, but whatever it is, it is based on what you guess a hypothetical future AI might do.

              Seems legit.

              1. cld

                Well, I can't tell what you're trying to argue, either.

                The process of releasing a prisoner from jail is almost entirely clerical, exactly the function a working AI may be best suited to fill. So how is the lawyer supposed to beat that by calling somebody up and schmoozing a client out of the slammer?
