
I got some pushback on Twitter about my comment yesterday that ChatGPT would replace most lawyers in a few years' time. My reply: "I wonder how long it will be until a computer is able to pass the bar exam?"

Today I sort of get an answer:

How well can AI models write law school exams without human assistance? To find out, we used the widely publicized AI model ChatGPT to generate answers on four real exams at the University of Minnesota Law School. We then blindly graded these exams as part of our regular grading processes for each class. Over 95 multiple choice questions and 12 essay questions, ChatGPT performed on average at the level of a C+ student, achieving a low but passing grade in all four courses.

So ChatGPT is performing at a C+ level on graduate-level work. In a year that will be a B+ and in another year an A+. And remember, ChatGPT responds to criticism, so it can make changes based on conversations with a supervising lawyer.

Anyway, my guess is that GPT v5.0 or v6.0 (we're currently at v3.5) will be able to take over the business of writing briefs and so forth with only minimal supervision. After that, it only takes one firm to figure out that all the partners can get even richer if they turn over most of the work of associates to a computer. Soon everyone will follow. Then the price of legal advice will plummet, too, at all but the very highest levels.

I might be totally wrong about this. But for some reason lots of people assume that software like ChatGPT will get better at regurgitating facts but will never demonstrate "real judgment." This is dangerously wrong, partly because we humans narcissistically assume that our own judgment is well nigh irreplaceable. Anyone in law school today should think long and hard about this.

Elon Musk took out $13 billion in bridge loans when he bought Twitter—and bridge loans are just that: a short-term bridge until you get permanent financing. Bridge loans are also expensive, so Musk is highly motivated to replace his with lower-cost financing. The Wall Street Journal says he's trying hard to do that:

The state of the fundraising talks couldn’t be learned. In mid-December, Mr. Musk’s team reached out to new and existing backers about raising new equity capital at the original Twitter takeover price. Mr. Musk’s advisers had hoped to reach a deal to raise cash at the initial takeover price by the end of 2022.

Say what? Musk and his bankers were trying to sell shares of Twitter at the $54 price Musk paid for them? Even though they were valued at less than 40 bucks on the public market after Musk announced his deal? Who would be insane enough to overpay as much as Musk did? And why would Musk think anyone would do it?

Musk needs to get his ass back into the real world, and he needs to do it fast.

Today was GDP day, and it turns out that GDP growth in the last quarter of 2022 was just like the wee bear's porridge: not too hot and not too cold:

There are no surprises in the details. Personal expenditures were up, private investment was up (except for houses), and government expenditures were up. Growth was very evenly spread.

With any luck, the Q4 number was high enough to indicate that the economy is still doing OK, but not so high that the Fed wigs out and decides to crush future growth with more interest rate hikes. Cross your fingers.

One of the rotating quotes at the top of the blog is, "Republicans are evil; Democrats are idiots." The Washington Post presents the latest evidence:

President Biden is facing blowback from some members of his own party over his mishandling of sensitive documents as his allies express growing concern that the case could get in the way of Democrats’ momentum coming out of the midterm elections.

I don't think party members need to robotically repeat talking points from their leadership, but the difference here between Republicans and Democrats is astonishing. Donald Trump, who isn't even president any longer, received nothing but hardcore support for both his refusal to admit that he had classified documents in his possession and his persistent unwillingness to cooperate after it turned out he was lying. In fact, far from criticizing him, Republicans and Fox News turned the whole thing around and added it to their bonfire of complaints about the FBI targeting innocent Republicans.

But Joe Biden, who is president, is getting only tepid support from his own party even though his error is minuscule at most. This is an almost 100% repeat of the halfhearted support Hillary Clinton got from her fellow Democrats while Republicans (and the FBI!) were gutting her campaign during the whole email affair.

This happens over and over. When things like this crop up, Democrats seem frozen in fear that someone, somewhere, might eventually find some genuinely damaging evidence and then they'd be—

What? A little embarrassed that they'd supported a guy who turned out to be guilty?

It's so aggravating. I wouldn't even want Democrats to act like Republicans, but could they at least act like a normal party? It's pretty obvious that the Biden document affair is a nothingburger, no matter what Fox News says. Nobody needs to protect themselves against the possibility that it might turn out to be a real scandal. And if, against all odds, it does anyway? There will be plenty of time to load your howitzers then.

In the meantime, please stop being idiots. Please.

You may be familiar with something called the Flynn Effect. It's named after James Flynn, a researcher who discovered that IQs had been rising about three points per decade for most of the 20th century.

The evidence for this has been replicated numerous times and is now accepted as pretty rock solid. I've always had a hard time with it, though, since it suggests that, relative to today, the average IQ of people in the 1920s was about 70. This is roughly a 6th grade level, so it means that the Jazz Age was mostly populated by a bunch of 6th graders.

That doesn't sound right, does it? Then again, I suppose F. Scott Fitzgerald didn't hang around with ordinary people very much, so maybe our view of the '20s is distorted.

In any case, there's now a controversy over whether this IQ increase has continued into the 21st century. I'll spare you all the gruesome details about g-loading and fluid vs. crystallized intelligence and just show you the numbers:

The Flynn Effect, according to some researchers, has slowed down a bit but is still going strong. A few others, mostly relying on administrative data from conscripts in Scandinavian countries, say that it slowed down in the 1960s and then reversed. Extended to the present, those trajectories mean that Team Flynn estimates an increase of about 14 IQ points since 1960 while Team Reversal estimates a decline of about 3 points.

That's a big difference! And you'd think someone would try to research this. One obvious way is to get a copy of, say, a Stanford-Binet test from 1960 and administer it to a random group of a few hundred people. Would their average score be 100, as it is on a modern IQ test, or would it be around 115, suggesting that we're so smart these days that we score like geniuses on older tests?
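If you want to see where that 115 comes from, here's a quick back-of-the-envelope sketch. The helper function and the gains plugged into it are just illustrations of the rough figures quoted above, not estimates from any particular study:

```python
# Back-of-the-envelope arithmetic for the proposed renorming study.
# The gains below are the rough figures quoted above (illustrative,
# not taken from any particular study).

def score_on_1960_test(gain_since_1960: float) -> float:
    """If the population has gained `gain_since_1960` IQ points since the
    test was normed, someone who scores 100 on a modern test should score
    roughly 100 + gain on the 1960 version."""
    return 100 + gain_since_1960

team_flynn_gain = 14      # gains slowed but continued after 1960
team_reversal_gain = -3   # gains stalled in the 1960s, then reversed

print(f"Team Flynn:    ~{score_on_1960_test(team_flynn_gain):.0f}")
print(f"Team Reversal: ~{score_on_1960_test(team_reversal_gain):.0f}")

# For context, the same arithmetic puts the 1920s at roughly
# 100 - 3 * 10 = 70 relative to today's norms, the figure mentioned earlier.
```

A random sample averaging around 114-115 on the old test would back Team Flynn; an average near 97-100 would back Team Reversal.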

This would be neither a difficult nor an expensive study. Why has no one done it?

The Women's Bureau of the Department of Labor issued a report yesterday about the cost of child care. It uses data from their newly launched National Database of Childcare Prices, and I was interested to discover that the most expensive place in the country for child care is California:

The main takeaway from the report was their conclusion that "childcare prices are untenable for families across all care types, age groups, and county population sizes." I suppose I don't doubt that, but I was disappointed that they didn't show the price of child care over time. As you may recall, "compared to what?" is the key question to ask about nearly any sociological claim.

So I did it myself, which turned out to be a far bigger pain in the butt than I anticipated. Here it is for a random collection of big and small counties plus a population-weighted national average:¹

The database only goes back to 2008, so this is all we have. Nationally, the price of preschool went up 7% between 2008 and 2018 after adjustments for inflation, most of which was due to price rises in big population centers. Boston (Suffolk County) was up 18%, for example, and Los Angeles was up 15%. By contrast, smaller counties were generally up by no more than a few percent.
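For the curious, the inflation adjustment itself is nothing fancy. Here's a minimal sketch of the calculation; the preschool price is made up and the CPI-U averages are approximate, standing in for the real county-level data from the National Database of Childcare Prices:

```python
# Minimal sketch of inflation-adjusting a childcare price and computing the
# real change. The price is made up; the CPI-U annual averages are
# approximate. The actual analysis uses county-level data from the
# National Database of Childcare Prices.

def real_change_pct(price_start, price_end, cpi_start, cpi_end):
    """Percent change after restating the start-year price in end-year dollars."""
    start_in_end_dollars = price_start * (cpi_end / cpi_start)
    return 100 * (price_end / start_in_end_dollars - 1)

# Hypothetical annual preschool price for one county, 2008 vs. 2018
change = real_change_pct(price_start=8000, price_end=10200,
                         cpi_start=215.3, cpi_end=251.1)   # ~CPI-U averages
print(f"Real change, 2008-2018: {change:+.1f}%")
```

The population-weighted national average is just the same calculation applied county by county and weighted by population.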

None of this is to say that families don't have a hard time paying for child care. Obviously they do. Both this report and long-term inflation data suggest that the cost of child care has been rising steadily for decades by a little less than half a percent per year.

My interpretation of this is that child care may not be in a sudden crisis right now, but at the very least it's a long-term time bomb. Unfortunately, it's all but impossible to get Republicans to care about it. I wonder how sure they are that their constituents are OK with this?

¹This is for preschool. The report and the database also cover infant care and toddler care.

Yet more astronomy today!

I'm a believer in multitasking, so when I go out to the desert I bring both my telescope and my camera. While the telescope is doing its thing, the camera might as well be doing something too.

Lately I've been playing with star trails, figuring out the best settings and exposure time. Here are a few tips:

  • Pick a very dark sky. A good star trail requires at least a four-hour exposure, and that means the sky in the gaps between the stars needs to stay really dark or it will wash out the trails.
  • I've never had much use for my camera's long-exposure noise reduction feature because it doesn't really seem to do much. But it's the best solution for keeping noise down on a super-long exposure.
  • Unfortunately, it requires as much time for the camera to calculate and remove the noise as it takes for the main exposure itself. So a four-hour exposure becomes eight hours, and no camera battery will last that long. The solution for me is a little battery pack that I plug into the camera's power socket. Between the camera battery and the battery pack I can keep the camera going for ten or twelve hours. (The timing arithmetic is spelled out in the sketch after this list.)
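Here's that timing arithmetic as a quick sketch. The helper function and the numbers are just illustrations; plug in your own exposure length and battery estimates:

```python
# Quick session-planning arithmetic for a star-trail shot. The numbers are
# illustrative; substitute your own exposure length and battery estimates.

def session_hours(exposure_hours: float, long_exposure_nr: bool) -> float:
    """With in-camera long-exposure noise reduction on, the camera spends
    roughly as long again removing noise as it did exposing."""
    return exposure_hours * (2 if long_exposure_nr else 1)

exposure = 4          # hours of actual exposure
available_power = 12  # rough hours from camera battery + external pack

needed = session_hours(exposure, long_exposure_nr=True)
print(f"Total camera time: ~{needed:.0f} hours")
print("Enough power" if available_power >= needed else "Bring more batteries")
```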

The picture below turned out pretty well. The tree has a reddish cast, probably because my car's tail lights were on for part of the time. The road at the bottom is Highway 177, which forms the eastern boundary of Joshua Tree National Park. It's pretty deserted late at night, which becomes obvious when you realize that the headlight streaks in the photo are all I got during a four-hour exposure.

January 22, 2023 — Desert Center, California

I don't subscribe to Bloomberg so I can't read all of Tyler Cowen's column this week about artificial intelligence. But here's an excerpt from his blog. The topic is ChatGPT, an example of a Large Language Model:

I’ve started dividing the people I know into three camps: those who are not yet aware of LLMs; those who complain about their current LLMs; and those who have some inkling of the startling future before us. The intriguing thing about LLMs is that they do not follow smooth, continuous rules of development. Rather they are like a larva due to sprout into a butterfly.

I don't agree with Tyler about everything, but I sure do about this. And I suspect there are going to be some more converts when v4.0 of ChatGPT is released.

I always get lots of pushback when I write about how AI is on a swift upward path. The most sophisticated criticism focuses on the underlying technology: Moore's Law is dead. Deep learning has scalability issues. Machine learning in general has fundamental limits. LLMs merely mimic human speech using correlations and pattern matching. Etc.

But who cares? Even if I stipulate that all of this is true, it just means that AI researchers are inventing new kinds of models constantly when the older ones hit a wall. How else did you suppose that advances in AI would happen?

In the case of ChatGPT I reject the criticisms anyway. It doesn't yet match college-level speech and perception, but neither did the Model T match a Corvette. It's going to get better very quickly. And the criticism that it "mimics" human speech without true understanding is laughable. That's what most humans do too. And in any case, who cares if it has "true" understanding or consciousness? If it starts cranking out sonnets better than Shakespeare's or designing better moon rockets than NASA, then it's as useful as a human being regardless of what's going on inside. Consciousness is overrated anyway.

The really interesting thing about LLMs is what they say about which jobs are going to be on the AI chopping block first. Most people have generally thought that AI would take low-level jobs first, and then, as it got smarter, would start taking away higher-income jobs.

But that may not be the case. One of the hardest things for AI to do, for example, is to interact with the real world and move around in it. This means that AI is more likely to become a great lawyer than a great police officer. In fact, I wouldn't be surprised if it takes only a few years for AI to put lawyers almost entirely out of business, except for the 10% (or so) who are courtroom lawyers. And even that 10% will go away shortly afterward, since their interaction with the real world is fairly constrained and formalized.

Driverless cars, in contrast, are hard because the controlling software has to deal with a vast and complicated slice of the real world. And even so they're making good progress if you can rein in your contempt for their (obvious and expected) limitations at this point in their development. AI will have similar difficulties with ditch digging, short order cooking, plumbing, primary education, etc.

It will have much less difficulty with jobs that require a lot of knowledge but allow it to interact mostly with the digital world. This includes law, diagnostic medicine, university teaching, writing of all kinds, and so forth.

The bottom line, whether you personally choose to believe it or not, is that AI remains on an exponential growth path—and that's true of both hardware and software. In ten or fifteen years its capabilities will be nearly a thousand times greater than they are today. Considering where we are now, that should either scare the hell out of you or strike you with awe at how human existence is about to change.
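For what it's worth, the arithmetic behind that "thousand times" figure is just compounding. Here's a quick sketch; the doubling times are my own illustrative assumptions, but anything in the range of a doubling every 12-18 months gets you roughly a thousandfold somewhere in that 10-15 year window:

```python
# Compounding arithmetic behind the "nearly a thousand times" figure.
# Capability after t years is 2 ** (t / doubling_time); the doubling times
# below are illustrative assumptions, not measurements.

for doubling_years in (1.0, 1.5):
    for horizon in (10, 15):
        factor = 2 ** (horizon / doubling_years)
        print(f"Doubling every {doubling_years:g} yr, {horizon} yr out: ~{factor:,.0f}x")
```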

Partly this is because we all get fooled by the Mercator projection used on most maps, which stretches the northern parts of Canada into something about the size of Russia. But it's also because nearly all of Canada is completely empty. About 99% of the population lives in the narrow red strip on the map below. That's the real Canada.

The Wall Street Journal reports on the status of remote work these days:

Remote jobs made up 13.2% of postings advertised on LinkedIn last month—down from 20.6% in March. Other job sites such as Indeed.com and ZipRecruiter also report declines in remote listings.

....Companies such as Walt Disney Co. and Starbucks Corp., meanwhile, are stepping up the days that hybrid employees are required to come into the office. Ally Financial Inc., based in Detroit, stepped up its return-to-office policy in September, shifting from asking workers to come in at least part-time to expecting it. How many days depends on the job and department.

Lots of workers might like having remote jobs, but as near as I can tell bosses almost universally hate it. For that reason remote work is nearly certain to continue shrinking, and if the economy goes into recession later this year it's likely to plummet.¹ By the end of 2024, I predict that the share of workers who are remote will be only slightly higher than a pre-pandemic trendline would have suggested. Call it 9% or so.

¹Because workers will be more desperate for jobs, which will give employers the leverage to hire only people willing to work out of an office.