ChatGPT goes to law school

I got some pushback on Twitter about my comment yesterday that ChatGPT would replace most lawyers in a few years' time. My reply: "I wonder how long it will be until a computer is able to pass the bar exam?"

Today I sort of get an answer:

How well can AI models write law school exams without human assistance? To find out, we used the widely publicized AI model ChatGPT to generate answers on four real exams at the University of Minnesota Law School. We then blindly graded these exams as part of our regular grading processes for each class. Over 95 multiple choice questions and 12 essay questions, ChatGPT performed on average at the level of a C+ student, achieving a low but passing grade in all four courses.

So ChatGPT is performing at a C+ level on graduate level work. In a year that will be a B+ and in another year an A+. And remember, ChatGPT responds to criticism, so it can make changes based on conversation with a supervising lawyer.

Anyway, my guess is that GPT v5.0 or v6.0 (we're currently at v3.5) will be able to take over the business of writing briefs and so forth with only minimal supervision. After that, it only takes one firm to figure out that all the partners can get even richer if they turn over most of the work of associates to a computer. Soon everyone will follow. Then the price of legal advice will plummet, too, at all but the very highest levels.

I might be totally wrong about this. But for some reason lots of people assume that software like ChatGPT will get better at regurgitating facts but will never demonstrate "real judgment." This is dangerously wrong, partly because we humans narcissistically assume that our own judgment is well nigh irreplaceable. Anyone in law school today should think long and hard about this.

57 thoughts on “ChatGPT goes to law school”

  1. bharshaw

    I think maybe the 80-20 rule applies here, perhaps as in self-driving cars. It takes 20 percent of the time to achieve satisfactory performance on 80 percent of the cases, but the law of diminishing returns kicks in.

    Practically speaking it's a quibble--given time and money performance will approach 100 percent.

  2. K

    I had ChatGPT write me an API to look at the top 100 Amazon items selling for under $25. It did it in 5 seconds. The entire knowledge economy will be replaced in 10 years. Think about it. Computers can do everything better, and cheaper. College professors? Programmers? Teachers?

    AI makes all of that irrelevant, and it's going to be really interesting politically when upper-middle-class white people start losing their livelihoods to AIs.

    That said, when you think about it, why couldn't an AI teach a class? Throw a few AIs into an inner-city school that can't recruit a teacher. Pair an AI that can teach with a psychological component that can analyze each kid, apply the appropriate behavior models, and start being able to help that kid cope with his or her or their environment in a healthy way. Why would you even want a human teacher, with their biases and tendency to jump ship? Why not have an AI companion that lives on someone's phone, that can learn from the person and be a constant companion/friend?

    The future is weird, and it's going to be crazy seeing where this goes.

    1. dilbert dogbert

      The BBC had an article about a teacher who went the distance with teaching kids in the London hood. She made a difference but did not change the trajectory of the kids. Sad. You have to take the kids out of the environment. Short tourist trips outside the hood don't do it.

  3. rick_jones

    If two people ask this ChatGPT the same question, do they get the same answer? If it takes the same bar examination twice in a row does it get the same score?

  4. Joseph Harbin

    Isn't taking tests like the bar exactly what computers/AI models should excel at? I'm sure computers can ace their SATs. Bar exams are an incrementally harder challenge, with a variety of essay questions, but I'm not surprised ChatGPT is now able to pass the test. I'd still rather have a human lawyer represent me in court if needed. At least until ChatGPT and its peers get to sit on juries.

    That said, the legal industry seems a ripe target for some disruption. Got a quote of $1300 for making a minor change to a will that wouldn't take a lawyer 10 minutes to complete, even if they spent nine of those minutes writing an invoice for their services.

    Nature and Science have new editorials on ChatGPT and LLMs regarding ground rules and authorship:
    https://twitter.com/EricTopol/status/1617956855317426178

    1. rick_jones

      Quibble, but these weren’t bar exams, but law school exams. And Kevin’s quoted excerpt does not include which year of law school.

      1. Joseph Harbin

        You're right. I misread that.

        Still, I would guess the tests require students to analyze a situation, know applicable law, apply logic, cite examples, etc. It's all within a rules-based environment that is exactly what computers are designed to excel at.

        1. emh1969

          Here are more details:

          "While this was enough for a pass, the bot was near the bottom of the class in most subjects and "bombed" at multiple-choice questions involving mathematics."

          "In writing essays, ChatGPT displayed a strong grasp of basic legal rules and had consistently solid organization and composition," the authors wrote.

          But the bot "often struggled to spot issues when given an open-ended prompt, a core skill on law school exams".

          https://thepeninsulaqatar.com/article/25/01/2023/chatgpt-bot-passes-us-law-school-exam

      2. aldoushickman

        More importantly, of course a goddamn AI will do passably well on a law school exam. Law school exams are all about issue spotting. The exam question is a fact pattern* designed to elicit a certain list of legal issues that the student then tries to identify in an essay response.** Since any decent law student quickly learns to go into a law school exam with a checklist (mental or physical) of all the issues taught in that particular class and then uses that to see if the issues are represented in the question before writing a response, it'd be surprising to me if an AI couldn't do the exact same thing.

        Such performance is not "graduate level work," it's the garbagey way that law students are evaluated in school based on centuries of tradition and little else.

        ____________
        *E.g.: "Albert is driving his car down a 35mph-limited street on a dark, rainy day at 35mph, when Bonnie, who had had two glasses of wine, stepped onto the road to restrain Carl's prize show dog who had escaped from Carl's back yard through an unrepaired section of fence. Albert swerves to miss Bonnie, and hits a delivery truck left idling by Dorris while Dorris was taking a lunch break Dorris's employer didn't know about. Does Dorris have any claims for damages, and if so, against whom?"

        ** "Dorris may have a workers' comp claim, because she was on the clock, if she suffered an injury. . . . Albert may have been driving negligently, because although it was the speed limit, driving conditions were bad, so Dorris or her employer may have a tort claim against Albert for damage to the delivery vehicle. Bonnie may be contributorily negligent . . . joint and severable liability . . .contract claim against fence contractor . . . HOA rules on fence construction . . . " etc.

  5. cmayo

    I'm not buying it.

    It's not linear.

    It will continually improve, but without actual AI (not just an LLM), it will never be good enough for legal briefs/memos/work. It might be able to do a lot of the legwork, but it can't replace an actual lawyer. (Not to mention the liability issues of giving legal advice.)

    I'm not sure Kevin has ever had to use a lawyer or participate in a civil suit at all. I've been involved in several suits against public agencies and a developer for illegal construction practices over the last 5 years. ChatGPT might be able to find the law, but it can't compile the 2,000 facts and make the argument using those facts. So sure, it could be helpful in terms of getting some bones together, but it can't create a complete memo, filing, or argument. It might continually get closer and closer, but it won't ever get there with current technology.

  6. NeilWilson

    I play Go.
    One way I used to describe the game is that I could teach you the rules in 30 minutes and you could beat the best computer in the world by the end of the year.

    In 2015, that all changed, and AlphaGo became the best Go player ever. It showed so much creativity that everyone was shocked. Then AlphaZero was created, which didn't suffer from the problem of learning from the best human players. It came up with moves that we didn't understand. AlphaZero has never lost, and never will lose, a game to a human.

    It won't be long before AI can write legal briefs better than any human. They probably will soon figure out how to write for a specific judge.

    AlphaGo/AlphaZero didn't take long to be the best.
    AI won't take long to be the best in many intellectual fields. It will be sooner than we think.

    1. Joseph Harbin

      "It showed so much creativity..."

      I think we could have a long discussion about the meaning of "creativity."

      Dr. Fill is a computer program built to solve American crossword puzzles. The best human solvers beat it for years. In 2021 (I think), Dr. Fill bested the times of the best humans in a competition, but it still made errors. I'm sure further improvements will come. But some crosswords are genuinely creative, in that they use tricks never seen before. Those are the ones that humans are good at figuring out but that stymie computer programs like Dr. Fill. Humor and wordplay, using intentional deception, also provide the biggest challenges for computers.

      I don't think we know enough about human creativity to say whether computers can match it. When I hear AI enthusiasts describe human creativity and capabilities, often in reductive ways, akin to slow-thinking logic machines, I have doubts that they should be listened to.

      We're seeing some exciting new technology, but like anything involving human beings, it's hard to predict where it's all going.

  7. golack

    A gentleman's "C" ???

    Semi-coherent sentences might just be enough.

    That's not to say things won't get better with the chat bots.

  8. rick_jones

    Follow the links.
    Always follow the links:

    We used ChatGPT to produce answers to four separate final exams for law school courses at the University of Minnesota. The courses were Constitutional Law: Federalism and Separation of Powers, Employee Benefits, Taxation, and Torts.

    Each exam slightly differed in format and context. Constitutional Law and Torts are both required 1L courses; Employee Benefits and Taxation are upper-level elective courses subject to a slightly more relaxed curve. The Constitutional Law and Torts exams included both multiple choice and essay questions. The Employee Benefits exam only included essay questions (both short and long), and the Taxation exam only included multiple choice questions. The Constitutional Law and Torts exams had word limits, while the Employee Benefits exam did not. Only the Constitutional Law exam required sources to be cited in essays. Final grades in all four classes were based principally on the final exam.

    There is more there to see. Including:

    Each of the four courses in this study were curved to approximately a B+ average

  9. raoul

    Subject to disciplinary action, all attorneys represent to the court that all facts are based on truth and all legal arguments are submitted in good faith. This usually requires immersion and study, and just regurgitating black-letter law that is irrelevant to a case may get an attorney in trouble. So while a computer may spew out a legal document that’s 90% accurate and valid (which is probably the same for a practitioner, since most documents are boilerplate), what matters is the last 10%. So how do we disbar ChatGPT when the moment comes?

  10. Murc

    I guess my question is "who gets held accountable here?"

    When a regular lawyer fucks up, they can be sanctioned, disbarred, it's grounds for appeal in a criminal case, grounds for any number of things in a civil case up to and including just "you lose," etc.

    So what happens if someone needs legal representation, approaches a law firm, they farm out 90% of the work to an AI, and the AI fucks up? Can ChatGPT be barred from practicing? And who is fulfilling the current legal requirement that an actual PERSON attest to the accuracy of what's being presented?

    Furthermore, it seems to me that on any case of real importance, people will quickly realize "wait a minute, these AIs are just zombies. They have no real understanding of what they're doing. Can't we simply take them apart if we get, you know, a REAL lawyer to comb over their filings looking for errors and then cram it right back down their throats in a counter-motion?"

    And furthermore to THAT, it might be advisable to just, you know... make it straight-up illegal for AIs to do this kind of legal work. That's a thing that's possible and might be desirable.

  11. ath7161

    Kevin, "writing briefs and so forth" is a small part of what most lawyers actually do. In order to really replace a lawyer, you would need an AI that can interact with people on a conversational level, full Turing Test. That's more than a few years away.

  12. jeffreycmcmahon

    How about this: I strongly doubt that most clients would want to "hire" an AI over an actual human. It's the equivalent of trying to call customer service and only reaching a robot voice when you just want to speak to a human being. Maybe it can flawlessly go through an established, "normal" procedure, but life is messy and complicated and I don't see AI ever, ever being able to handle that without full sentience. I can imagine hiring a law firm that uses AI to do "research" but to actually supervise and handle a case? Speaking personally, no way. It seems like once we let such a basic function of society go that route, we might as well pack it in.

    1. MindGame

      But isn't the fundamentally social nature of much legal work actually an unusually high bar for assessing the applicability and potential popularity of AI? By contrast, consider another highly paid profession, but one which almost exclusively depends upon smarts and physical adeptness: a surgeon. Wouldn't you want to hire an AI surgeon over a human one if it had immediate access to all the latest research and historical knowledge of prior cases, could instantly process multiple data streams from the various machines and sensors monitoring bodily functions, all while deftly handling the surgical tools with utmost precision and speed -- and always without the slightest risk of fatigue?

      I don't know, but that sounds to me like something which would be pretty damned popular.

      1. jdubs

        if we assume that the machine will be able to synthesize all the data and make superior decisions....then we have assumed the answer. how the machine will be taught and trained to make those best decisions is unclear.
        magic perhaps.

        If we assume that choice A is better, faster and cheaper than choice B....then yeah, it's an easy choice.

      2. kaleberg

        I'd trust a human surgeon with an AI based assistant or even just an electro-mechanical gizmo, but I wouldn't trust an AI surgeon. Have you ever dissected a frog? Figuring out what you're looking at is worse than finding the can opener in the miscellaneous kitchen stuff drawer, and computers are still crap at that. Surgery is a field with a low tolerance for mediocrity.

        I know enough surgeons to have heard all too many surprise stories. You can practice and practice, but you need to be able to apply a model and reason from it. Modern AI is not model-driven save in extremely minor ways. For example, image recognition uses spatial convolution filtering because objects in images are size- and position-invariant. That's not something an AI system can learn from experience, so it gets programmed into every last one of them.
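
        For concreteness, here's a toy numpy sketch of what "programmed in" means: a convolution slides one shared kernel over every position, so shifting the object shifts the response by construction, not by learning. (Everything below is invented for illustration.)

            import numpy as np

            def conv2d(image, kernel):
                """Slide one shared kernel over every position; the weight
                sharing is what hard-wires translation equivariance."""
                kh, kw = kernel.shape
                H, W = image.shape
                out = np.zeros((H - kh + 1, W - kw + 1))
                for i in range(out.shape[0]):
                    for j in range(out.shape[1]):
                        out[i, j] = np.sum(image[i:i+kh, j:j+kw] * kernel)
                return out

            rng = np.random.default_rng(0)
            kernel = rng.standard_normal((3, 3))

            img = np.zeros((8, 8))
            img[1:4, 1:4] = 1.0                           # small square, top-left
            shifted = np.roll(img, (3, 3), axis=(0, 1))   # same square, moved

            a, b = conv2d(img, kernel), conv2d(shifted, kernel)
            assert np.allclose(a[:3, :3], b[3:6, 3:6])    # shifted input => shifted output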

  13. jdubs

    being able to perfectly regurgitate facts is a useful skill, but not that revolutionary in most fields of work and leisure.

    being able to perfectly regurgitate bias, opinions and fiction without recognizing them as such is not valuable in any field.

    we can always assume this won't be a problem because Technology is like magic.....but, it's still a big problem that won't solve itself.

  14. Doctor Jay

    I remember reading an account, I think it was from the nineties, of how a head resident at a hospital tried to get his physicians to use a statistical model for diagnosis of some common ailment(s), and they resisted at first.

    "Ok," he said, "we're going to have a bake-off". They did both in parallel for a period of time, and the statistical model (think of it as a very, very rudimentary AI) did better.

    1. kaleberg

      That's fair. People are terrible at statistics. Odds are it was simple Bayesian statistics. If this was the 1990s, the odds are even better: that's when they first started teaching Bayesian statistics in medical schools. Most of the doctors I speak with these days are well versed in Bayesian reasoning, though sometimes they'll use a calculator, especially when the numbers are close.
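
      For anyone who hasn't seen it, this is the kind of calculation at stake: Bayes' rule for a diagnostic test, sketched in Python with invented numbers (1% prevalence, 90% sensitivity, 95% specificity).

          prevalence  = 0.01
          sensitivity = 0.90            # P(positive test | disease)
          specificity = 0.95            # P(negative test | no disease)

          # Total rate of positive tests: true positives plus false positives.
          p_positive = (sensitivity * prevalence
                        + (1 - specificity) * (1 - prevalence))

          # Bayes' rule: most positives come from the much larger healthy group.
          p_disease = sensitivity * prevalence / p_positive
          print(f"P(disease | positive) = {p_disease:.1%}")   # about 15.4%

      Counterintuitive answers like that 15% are exactly why the calculator comes out when the numbers are close.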

  15. MJFrog

    All of this misses the point that legal work product must above all be persuasive. Whether in court in person or in writing, the lawyer has to persuade a judge and/or jury to agree to the lawyer's position on an issue or issues. When negotiating hard points in a contract, the lawyer has to persuade another lawyer or a business person, etc. that the lawyer's proposed language should be adopted. I don't know how that works with AI, until the day comes (and maybe it will but I'm not . . . ahem . . . persuaded) when judges and lawyers surrender their own judgment to that of legal AI.

    This is not to say there aren't a lot of standardized commodity legal documents that could and possibly should be made available without needing expensive human intervention. Basic wills, basic residential leases, etc. are all subject to well-defined legal rules and don't need much variation, so fine. (Mortgage documentation is very long, looks very complicated, and is very standardized and rule-bound and nobody ever reads it, nor can the average person negotiate it. It's all printouts already.) There are valid opportunities.

    Many years ago I was engaged by a startup out of an academic medical center which was trying to develop and implement an "online doctor" which could diagnose and recommend treatments to consumers. One question was, does this program "practice medicine" so that it needs to be licensed? Literally the only answer I could come up with was, there's no procedure for doing so, and that's the wrong question. The right question was, what's the company's liability if it supplies wrong information as an educational and possibly entertainment service? The application was duly deployed, the company acquired, and you may well have used it yourself if you had questions about medical problems. I have, and it's just fine at that level, but it didn't "disrupt medical practice" as promised.

    1. kaleberg

      One time when I bought a house, I had to sign two copies of every document. My realtor caught the blooper. She said there was no way I should sign two promissory notes. Legal documents have lives of their own.

  16. Jim Carey

    The ultimate purpose of the law is to convince human beings to trust their own social system. A society isn't a society until there's a collective sense of trust in its justice system.

    AI is going to be a more and more useful tool, but does anyone think that the "people don't trust the justice system" problem goes away when AI machines take over from people asking people for their trust?

  17. Justin

    I figured this out. AI has been writing this blog for the last year. How else to explain this endless promotion of bad science, meaningless charts, and conservative crankery?

    Take your AI and do some good with it. Go kill Russians.

  18. Chondrite23

    A grade of C sounds about right. I asked it a couple of questions in my area of science and technology (x-ray microanalysis using EDS detectors and electron microscopes). It came back with some generalities about the technology, then it made some major mistakes, promoting some very fringe techniques and omitting mainstream ones.

    It sounded to me like a bullshit artist. Someone who is put on the spot and has access to a handful of facts then tries to weave together a plausible narrative.

    My brother’s friend in college was good at this. We’d sit in the bar and he’d tell us some story. We’d have to listen really carefully to catch the lie. Loads of fun.

    I don’t doubt that this is good for some problems. Given the right training these can sift through mountains of data looking for the few valuable kernels.

    In my field we use Principal Component Analysis to sift through vast numbers of spectra to find trace amounts of some material in a sample. Something no human has the time or patience to do. However, it has no understanding of what it is doing. It runs lots of correlations looking for things that are different.
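
    To give a flavor of it, here's a toy scikit-learn sketch of the idea (every number below is made up): a thousand synthetic spectra, a faint extra peak hidden in about 1% of them, and PCA surfacing the direction that makes those few different.

        import numpy as np
        from sklearn.decomposition import PCA

        rng = np.random.default_rng(42)
        energy = np.linspace(0, 10, 500)              # pretend keV axis

        def peak(center, width=0.15):
            return np.exp(-((energy - center) ** 2) / (2 * width ** 2))

        # 1,000 spectra of the same matrix: scale jitter plus channel noise.
        matrix = peak(2.0) + 0.5 * peak(6.5)
        spectra = np.array([matrix * rng.normal(1.0, 0.02)
                            + rng.normal(0.0, 0.01, energy.size)
                            for _ in range(1000)])

        # Hide a faint trace peak at 4.3 "keV" in roughly 1% of them.
        has_trace = rng.random(1000) < 0.01
        spectra[has_trace] += 0.05 * peak(4.3)

        scores = PCA(n_components=3).fit_transform(spectra)
        # The rare spectra stand out along a leading component. PCA found
        # "something different," not "a trace element" -- no understanding needed.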

    I’m skeptical that things like this will get vastly better. We’ll see. We don’t understand how we understand so we can’t easily make a machine that understands. Once we understand ourselves better then our AI will get better.

    In all the movies about robots becoming sentient there is always this obscure point where “a miracle happens” then the AI device becomes sentient (number 5, her, Ava, etc.). It takes a miracle because we don’t know the problem we are trying to solve.

    1. pjcamp1905

      "We don’t understand how we understand so we can’t easily make a machine that understands."

      I made that exact argument to Kevin several years ago and he rejected it for reasons that aren't any better now than they were then.

    2. rrhersh

      Bullshit artist: Very much so. I did the same thing with my area of expertise: early baseball history. I am intimately familiar with both the primary sources and secondary literature. The AI's output was flagrant bullshit, in the Frankfurtian sense of the word, down to making up a fake newspaper article published on a Sunday in a newspaper that didn't have a Sunday edition, presented matter-of-factly. The only difference between it and a human bullshitter is the AI would promptly back down when directly challenged, circling around to a different line of bullshit. In my experience, human bullshitters are ready to double down on this sort of thing.

      1. ScentOfViolets

        As I've already pointed out, the real story here is that all it takes to pass a law school exam is to be a bullshit artist. Meritocracy, my ass.

    3. kaleberg

      As mathematicians like to say, nothing happens when an android proves a theorem. Proving theorems may be mechanical, but when a machine finishes proving a theorem, it just stops. It hasn't reached a new level of enlightenment. It lacks teleology.

  19. thefxc1616

    I'm certain Kevin 'self-driving vehicles will disrupt the entire world economy by 2023' Drum is correct in this assessment

  20. pjcamp1905

    "In a year that will be a B+ and in another year an A+."

    And your evidence is what, exactly? You made a similar prediction about self driving cars, just before their development hit a wall.

    I don't find it credible that anyone or anything can write at that level about the law or anything else in the total absence of knowledge of the field. That's the objection you've been getting, at least from me, and it has nothing to do with "real judgment." That is a mischaracterization that allows you to avoid the actual claim.

    You keep making these grandiose claims about AI with nothing to back them up. You're not an expert in the field. Neither am I. But I at least try to find the experts and listen to what they say, and they are quite a bit less optimistic than you.

    Your argument consists entirely of "Look what it can do today. Obviously it will be able to do more tomorrow!" But that isn't obvious at all. It is not a claim that AI experts make. In fact, they usually denigrate claims like that.

    I have interacted with a lot of Einstein deniers in my many years as a physicist. They all think that, in a few evenings of thought based on some pop science books they read, they have discovered a simple flaw that over a century of very smart people spending all their waking hours thinking about it have overlooked. And that simple flaw destroys all of relativity, brings back absolute space and time, allows faster-than-light travel, and proves conclusively that the spacetime of general relativity is "just math" with no bearing on reality.

    Well, they haven't. All they have done is expose (a) the severe limitations on their knowledge and argument by analogy; and (b) their immense hubris.

    I think the same of this argument on AI. It is not based in data. It is not based in how the technology works. It is not based on what actual experts actually say. It is based entirely on "Gee whiz! Things sure got good fast. Obviously they will get gooder faster." You might be right, but it would be a total accident, since your argument, to the extent there is one, is logically invalid. And pretty much every AI expert I've been able to find agrees that the AI apocalypse is not even close to being imminent.

    1. kaleberg

      If you do want a reasonable opinion on various AI topics and the actual state of the art, I'll recommend Rodney Brooks' site: http://rodneybrooks.com/

      He does an annual update on his AI predictions. He was one of the co-founders of iRobot, the automatic vacuum cleaner company, so he isn't a knee jerk AI skeptic, but he knows hype when he sees it.

  21. jamesqueue@outlook.com

    I have no data to back this up but: I'm very skeptical of AI. They just seem so dumb..... and have always been.

    I have never been truly impressed.

  22. D_Ohrk_E1

    This is such an unfortunate post. I think you ought to dive into chatGPT before speculating about AI's future.

    Me: Explain to me why the Heller decision created uncertainty about the 2nd Amendment.

    chatGPT: The Heller decision, a landmark case in 2008, was a ruling by the United States Supreme Court that the Second Amendment of the United States Constitution protects an individual's right to possess a firearm, unconnected to service in a militia, for traditionally lawful purposes, such as self-defense within the home. However, the decision did not clearly define the scope of this right, leading to uncertainty and ongoing debate about the interpretation of the Second Amendment and the limits that can be placed on firearm possession and use.

    1. D_Ohrk_E1

      Me: Tell me about the overlap of Originalism and the Heller decision.

      chatGPT: Originalism is a theory of constitutional interpretation that holds that the Constitution should be interpreted based on the original understanding of its text at the time it was adopted. The Heller decision, which established an individual right to own firearms, has been seen as an example of originalist interpretation of the Second Amendment. The Court's majority opinion, written by Justice Antonin Scalia, relied heavily on historical analysis of the amendment's text and context, as well as its drafting history and early judicial interpretations, to conclude that the Second Amendment protected an individual right to keep and bear arms. However, the decision has also been criticized by some originalists who argue that the Court's interpretation departs from the original understanding of the amendment and its role in the context of 18th century America.

    2. D_Ohrk_E1

      Me: Under Heller, how should a court rule in a case where a state passes a law banning high capacity magazines?

      chatGPT: The Heller decision established that the Second Amendment protects an individual's right to possess firearms for traditionally lawful purposes, such as self-defense within the home. However, the decision also made clear that this right is not unlimited and that the government can still regulate firearms in certain ways.

      In a case where a state has passed a law banning high capacity magazines, a court would have to determine whether the law is constitutional under the Second Amendment. To do this, the court would likely apply a level of scrutiny, such as intermediate scrutiny, to evaluate whether the law is narrowly tailored to achieve a significant government interest.

      The court would also have to consider the law's impact on the individual's ability to exercise their Second Amendment rights and whether the law is constitutional. In this case, the court would have to weigh the state's interest in protecting public safety against the individual's right to keep and bear arms.

      It is important to note that this is a complex area of law, and the outcome of a case would depend on the specific facts and circumstances of the case, as well as the court's interpretation of the Second Amendment.

      1. Yikes

        A nice touch, considering the probable number of lawyers on here.

        All those answers are not only F's, but if you move them into the context of a client actually paying for the answer, they would rate something else, below even an F.

        It all kind of depends on whether you want some sort of background level answer or a real analysis. This level of AI is great for instant background.

  23. D_Ohrk_E1

    This is the point where KD gives chatGPT an F, I believe.

    Me: Why should blogger Kevin Drum incorporate nominal GDP when judging the state of the American economy in a period of high inflation?

    chatGPT: Nominal GDP takes into account both the total economic output and the overall level of prices, whereas real GDP only measures economic output. Using nominal GDP when judging the state of the American economy in a period of high inflation can help to provide a more accurate picture of the overall economic conditions, as it accounts for changes in both output and prices. Additionally, nominal GDP can be used to calculate GDP deflator which is a measure of inflation. It is important to have a measure of inflation to adjust for it when evaluating the economy.
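
    For what it's worth, the relationship chatGPT is gesturing at is simple enough to show in a few lines of Python (figures invented for illustration):

        nominal_gdp = 26_000   # $ billions, current prices
        real_gdp    = 22_000   # $ billions, base-year prices

        # GDP deflator: how much of nominal output is just price level.
        deflator = 100 * nominal_gdp / real_gdp          # ~118.2, i.e. prices up ~18%
        recovered = nominal_gdp / (deflator / 100)       # divides inflation back out
        assert abs(recovered - real_gdp) < 1e-9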

  24. ScentOfViolets

    Blink. I am constantly amazed that so many so-called intelligent people miss the blindingly obvious: The takeaway here is not that ChatGPT is so good at these exams; it's that these exams are so crappy at evaluating real expertise. That, in fact, these exams can be gamed by any mediocre fortunate son, given they have access to expensive prep tools ... who'da thunk 😉 I know I'm kinda Johnny one-note on the subject, but the fact is, evaluating a student's competency in any of a number of fields is a hard problem. Or should I say, evaluating student competencies on the cheap is a hard problem. I despise the scantron approach that society has adopted to prop up the myth of meritocracy.

  25. glipsnort

    Some kind of AI might be able to get an A+ on an exam but an LLM that's been trained on an unfiltered training set should not be able to. An LLM should only be able to match the typical performance of its training set.

  26. Henry Lewis

    As others have pointed out, being a lawyer is more than writing briefs. For the near term, anyway, this may be a tool that helps lawyers and, maybe, really helps people without lawyers. Even then, oftentimes it is the back and forth between counsel and client that reveals the more useful information.

    If you are writing briefs, that means that you will need to argue those points to a Judge - which this machine is not close to being able to do.

    AI is helpful with parsing substantial discovery though I'm curious as to liability if the AI misses something.

    I think the legal profession will be fine for a while.

  27. gdanning

    In addition to what others have said about a big part of a lawyer's job being negotiating with other humans, here is a sample question from the LSAT

    Answer the following question based on the below passage:
    
    Good physicians inform patients of the possible risks associated with a procedure before the patient undergoes a procedure. Jolene’s physician didn’t warn her of the risks of her procedure before operating, so her physician is unethical.

    Which of the following is an assumption that would allow the conclusion to be properly drawn? [choices omitted]

    Source (with many other examples): https://www.test-guide.com/quiz/lsat-lr-1.html

    I am not familiar with the current state of AI, but can it answer that question? Can it determine what rules are relevant to a particular case, and which facts are relevant to the rule? Can it explain why the opponent's argument is incorrect? That is mostly what legal argument is all about: determining which laws or precedents are relevant, and, even more importantly, distinguishing apparently relevant precedents on the other side.

  28. samccole

    I asked ChatGPT if AI will take over the legal profession, and it said, "It is possible that AI will play an increasingly important role in the legal profession in the future. AI can be used to assist lawyers with tasks such as legal research, document review, and contract analysis. However, it is unlikely that AI will completely replace lawyers, as the legal profession involves not only knowledge of the law, but also critical thinking, creativity, and the ability to effectively communicate and argue a case. Additionally, AI may not be able to fully replicate the human understanding of legal contexts and nuances."
