
GPT-4 will (almost) be your next doctor

You've all seen plenty of punditry about GPT-4, almost all of it based on the generic version available to plebs like us. But that's nothing. There are also dozens of companies that have been building specially trained versions of GPT-4 for different industries ("vertical markets," or "verticals," if you want to sound like you know what you're talking about), and those are really going to be impressive. I'm not sure how long product development takes for this kind of thing, but sometime in the near future we're going to be flooded with specialized GPT bots.

One of the most obvious verticals to go after is health care. Tyler Cowen points today to a review of The GPT-x Revolution in Medicine from Eric Topol, and the money quote is obviously this:

“How well does the AI perform clinically? And my answer is, I’m stunned to say: Better than many doctors I’ve observed.” —Isaac Kohane MD

But Tyler thought this bit in particular was "hilarious":

I’ve thought it would be pretty darn difficult to see machines express empathy, but there are many interactions that suggest this is not only achievable but can even be used to coach clinicians to be more sensitive and empathic with their communication to patients.

The humor here is obvious, but in reality it's nothing to laugh at. The plain fact is that simulating empathy is trivially easy. Politicians and con men do it all the time, and not in especially sophisticated ways. Most of us want to believe that people like us, so we're easily fooled by fake empathy.

On the upside, this will make GPT-ish software a perfect companion for the elderly. Feigning empathy is mainly a matter of extreme patience combined with modest insight into human nature, and GPT-4 has both. A GPT companion for folks in nursing homes—or who are just lonely for any reason—will be a huge hit.

On the downside, gaining the trust of vulnerable people also poses obvious dangers. In the hands of people who like to scam the elderly over the phone it's likely to create havoc.

And for health care more generally, it's likely to become wildly popular. It isn't ready for prime time yet, so hopefully specialized diagnostic bots won't be turned loose on the internet for anyone to use. But in a doctor's office it will be gold, especially if it can be hooked up to high quality voice recognition and speech synthesis. Unlike doctors, who have limited time, a bot can listen to you recite your symptoms for as long as you feel like and then pass them along in summary form to the doctor. The doctor can absorb this quickly, ask a few more questions if necessary, and then pass judgment on the bot's recommendations.

The bot can do its part in any language. It can easily adjust to the personality and preferences of the patient. If its voice retains a bit of its robot heritage it will probably make many patients feel easier about revealing embarrassing details. Add a camera and some imaging capability and it will be able to examine sores or lesions or what have you. And of course, the bot has access to far more knowledge than any human doctor. It can be GP and specialist all rolled into one.

There are drawbacks too, which is why bots have to work with human doctors, not replace them. Right now, GPT's most famous drawback is its habit of "hallucinating," otherwise known as making stuff up. There are probably ways to minimize this in specific settings, but obviously doctors who use GPT have to be keenly aware of this.

Now multiply this by dozens or hundreds of different settings and GPT is set to revolutionize the world. Not instantly, but within a few years. It's not too soon to prepare.

64 thoughts on “GPT-4 will (almost) be your next doctor”

  1. morrospy

    I continue to be less drastically inclined about these products. Moore's Law being dead is still a problem.

    But I use GPT-4 daily. It's great to defeat writer's block. It can give you a nice draft of code to get started, and some ideas about math problems...

    But, WolframAlpha or not, it still can't give me accurate math results if it's not just a calculation.

    I would worry if I was a digital graphic artist. Otherwise, we're a long way away and whether the hardware becomes available or not any time soon is an open question.

    1. coral

      I went through bot hell the other day with Verizon. God forbid my doctor's office should turn to this. The phone menus at the doctor's are already infuriating.

  2. Chondrite23

    It would seem fine as an assistant, something to summarize specific inputs. My problem is being able to trust it as an expert.

    We had a software feature in a scientific program that didn’t always work well. One friendly customer gave it a backhanded compliment “It is often correct.”

    You don’t want important things to be “often correct.” A feature needs to be always correct (like a calculator) and if it is not perfect then you need error bars or some other way to express a known lack of precision. The AI feature needs to be able to say “I don’t know”.

    1. civiltwilight

      Great. Tonight I will have nightmares about being in an old folks' home with the robot from "Moon" - the one Kevin Spacey did the voiceover for.
      Thanks, Drum.

  3. duncancairncross

    Many years ago - back in the '90s - there was a simple expert system that in tests significantly outperformed GPs in diagnosis
    Mainly because it remembered everything and could check against a wider database
    But nothing came of that!!

    I hope today is better

    1. golack

      Even just a simple check list helped a lot...but....

      The biggest issue with automated medicine is that there is a lot of bias in the system--and that makes it into the program or AI. Then if there is a problem with a diagnosis--do you blame the doctor or the program? Who gets sued? Until that gets worked out, rollouts will be limited.

  4. cooner

    "It isn't ready for prime time yet, so hopefully specialized diagnostic bots won't be turned loose on the internet for anyone to use."

    But this is like 90% of my biggest worries about so-called "AI." Big advances may or may not be around the corner (from reading experts I tend to lean toward the latter), but there are so many companies, billionaires, researchers, and tech-bros with an obvious interest in pushing this technology, and so many moon-eyed journalists and politicians gobbling it up, that the current technology is already being oversold and misunderstood. There's a near-100% chance it's going to be shoved into industries and situations where it's not up to the task. At best it will needlessly put people out of work (because of COURSE corporations are going to lay off people as soon as they have a bot they can claim does the same work), and at worst it will cause actual harm with inaccurate, made-up information.

    But yeah, wow, awesome, cool.

  5. ronp

    wow, hopefully the LLM bots can help us out. I think they will but on the other hand I think Kevin has read too many science fiction novels (maybe not the darker ones LOL).

  6. sdean7855

    "The plain fact is that simulating empathy is trivially easy. Politicians and con men do it all the time, and not in especially sophisticated ways. "
    Unless you're Ron DeSantis, in which case you have the empathy and affect of a cheap toaster oven. Really. He comes across like a mortuary supply salesman peddling embalming fluid.

    1. Joseph Harbin

      The problem with DeSantis is that he's not a toaster oven. Some reliable sources claim he is in fact a human being, and human beings are expected to be capable of empathy, so his complete lack of it comes across as a deficiency. His attempts to fake empathy come across as fake. It's one reason he'll never be president. No one trusts a human devoid of basic human emotions.

      The problem with AI is that in important ways it is like a toaster oven. It is a machine. Machines do not have empathy, and when designed to "express" or "simulate" empathy, the astute human recipient knows that the empathy is fake.

      That is one of the insurmountable obstacles in the way of some people's visions of what AI has to offer. If AI can provide information and perform work to aid humans, it will be a valuable boon to humanity. If we use AI to replace human relationships and provide fake emotions, we are likely headed down the wrong path and need a course correction pronto.

      Interesting development today. An open letter in favor of a "pause" on AI research beyond GPT-4 to give us time to adapt to our brave new future.

      Humanity can enjoy a flourishing future with AI. Having succeeded in creating powerful AI systems, we can now enjoy an "AI summer" in which we reap the rewards, engineer these systems for the clear benefit of all, and give society a chance to adapt. Society has hit pause on other technologies with potentially catastrophic effects on society.[5] We can do so here. Let's enjoy a long AI summer, not rush unprepared into a fall.

      Signatories include Yoshua Bengio, Stuart Russell, Elon Musk, Steve Wozniak, Yuval Noah Harari, and more than 1,000 tech leaders and others.

    2. Punditbot

      Or DeSantis is exactly like President Scott Walker. A charisma black hole so dense that not even the tiniest bit of empathy can escape.

  7. kaleberg

    From what I've seen, ChatGPT and the like have the same problem as self driving cars. They do a lot of the easy stuff pretty well, but a human has to stay in the loop or at least be ready to jump into the loop when the going gets tough. The problems come, for example, when there's an emergency vehicle parked on the road or someone is presenting heart disease in a non-standard manner.

    Doctors already have a problem with patients who come to them with a diagnosis by Google. Sometimes the diagnosis is correct, but it is often incorrect, and the standard Google-prescribed treatment will often make matters worse. People are suing doctors because they didn't get treated for COVID with ivermectin. Is ChatGPT going to do much better? People are going to game chatbots the same way they game Google. I seriously doubt this is going to get us better medical care.

    (Worse, the health insurers will crank up their own medical chatbots and use them to deny coverage to people with serious problems. They can back this up with pages of glib analysis full of legal hems and haws. They are already using "algorithms". Why not an AI algorithm? Good luck suing, especially if you are stupid enough to try it with a chatbot lawyer.)

    1. Austin

      I don’t know that anyone’s planning to use AI for emergency response situations. It sounds like Kevin was merely talking about AI being used to replace a lot of GP appointments (or at least act as an intermediary during them). Most people don’t show up at their GP with emergencies… they show up with (1) known chronic issues that need long-term maintenance, (2) weird things that aren’t necessarily dangerous but have just been pestering them for weeks/months so they finally decided to get them checked out, or (3) routine physicals. AI probably would do fine at all 3 of these tasks.

      And people game human doctors too to get their ivermectin or whatever. It’s not like all human doctors are infallible: I hear of plenty that violate medical standards like “don’t hand out antibiotics like candy because you’ll reduce their effectiveness” all because a patient pleads with them long enough and/or they’re just in it for the cash.

  8. Justin

    I just don’t see it. Even all the automation of the last century didn’t accomplish this…

    “This large transformation is the opportunity to free humanity from the need to work. People will work when they want to work on what they want to work on. That's a utopian vision. But getting from here to that utopia is really disruptive and it is terrible to be the disrupted one. So you have to have empathy for whoever's being disrupted. And the transition is very messy. It hurts people, hurts lives, destroys lives.”

    Consumers have to choose this. If people are put out of work, then they can’t afford to use these services. While this may be good for the super rich, it will suck for 99.5% of humanity. And they won’t like that.

    A police state is the result? Thanks!

  9. royko

    "The humor here is obvious, but in reality it's nothing to laugh at. The plain fact is that simulating empathy is trivially easy. Politicians and con men do it all the time, and not in especially sophisticated ways. Most of us want to believe that people like us, so we're easily fooled by fake empathy."

    What's even funnier is a lot of doctors are not known for their empathy. Sure, they could fake it, but they're doctors, so why bother?

    (There are lots of doctors with wonderful bedside manners. But not so many that the profession should feel secure that AI won't do better.)

  10. cld

    Given the number of completely vile doctors I've had GPT-4 would be a revolution.

    Especially for people who live in remote areas, either Idaho or the Amazon, or have mobility issues it could be telemedicine for the masses.

    1. Steve_OH

      I gather that you have not been to remote areas of the Amazon. You can usually get a decent cell phone signal in a reasonably-sized town (say, 10,000 people or more), but a few km out there is no tele-anything. If it's a community, or a research station, or something like that, there might be a shortwave radio.

    2. Citizen Lehew

      Man, I'm so ready for one of those medical scanning booths from the movie Elysium. A quick scan, out pops a bottle of pills, on your way.

  11. kenalovell

    How smart does it have to be to say "Get more regular exercise, lose 10 kilos and cut down on your drinking?"

  12. realrobmac

    From my experience most general practitioners are basically simple expert systems as it is. They basically do the following:

    * Ignore patient's medical history
    * Tell patient to get blood work done
    * Based on numbers in the blood work tell patient to take prescription drugs or vitamins
    * Tell patient to take various expensive and possibly health-harming tests (CAT scans, x-rays, probes, biopsies, possibly sonograms or MRIs)
    * Based on results of tests tell patient to take prescription drugs or recommend surgery
    * Remind patient to get regular (and expensive) tests (mammograms, colonoscopies, etc.)
    * If anything gets complex, send them to a specialist
    * Spend a maximum of 10 minutes on patient
    * Bill insurance company
    * Repeat

    So a trained GPT-4 could hardly do worse. But the AMA is powerful and protects its members' right to become millionaires, so I think the medical profession is probably the safest one out there.

    1. skeptonomist

      Automated diagnosis could have been done a long time ago, or at least started, but I don't know off hand of any major efforts. A real national health service might be necessary to implement this. Instead we get chat bots that seem to be designed to make people think they are human.

      Your family doctor is not going to pay for a machine to replace her/him and the AMA is presumably opposed. I have not seen anything one way or the other from the AMA about this.

  13. realrobmac

    Where I think we will see GPT quickly take over is in first level tech support and customer service. And also scams and catfishing.

  14. J. Frank Parnell

    Sounds like a perfect application for GPT-4. Something like 90% of medical cases are routine and straightforward, the kinds of things GPT-4 can deal with without getting bored out of its mind. The remaining 10% are more complicated and need to be kicked up the chain, in theory the same way they are now.

    Empathy? Physicians, particularly the ones of my older generation, are not known for empathy, and for good reason: many of them are overtrained and bored stiff doing routine stuff that GPT-4 could do better.

  15. cmayo

    I'm fairly certain that machine-assisted diagnosis and evaluation is coming. There is already research out there that "AI" is better at diagnosing some things earlier and more accurately than human experts. It has the benefit of perfect recall of a much more enormous data set than a human's brain can handle.

  16. steve22

    Voice recognition needs to work very well; otherwise you will have to spend too much time inputting info into GPT for it to be super helpful. You could set it to read your notes and lab tests and make suggestions. It won't be able to do a physical exam, and the stuff you can do to try to compensate for that will be expensive and still not quite a replacement. Will need a human doc around. Also need to have less downtime than we currently do with EMRs.

    So as an aid/adjunct this has great potential if we solve a few issues. I think we could probably trial it as a solo entity in a few years. I am much less certain about its acceptance by pts, especially older ones. No idea how it will handle pts that lie or withhold info.

    Will be interesting to see what it does to costs. EMRs were largely invented to improve billing, i.e., increase costs. Could see that happening with GPT.

    Steve

  17. Dana Decker

    This means we will need fewer workers in all areas of commerce and medicine. So can we stop with the "we need to bring more people into this country" nonsense?

  18. NealB

    The way modern western medicine works it's not hard to believe that robots could do better. But hard to believe that committed human health care professionals could be replaced. It's not about the cure as much as the care. Mechanicals can't do it. Either way.

    1. NealB

      Empathy is a tough nut to crack. Before we rely on software, we should probably evolve to the point where at least most of us understand what it means.

  19. KenSchulz

    >It's not too soon to prepare.
    It’s on my to-do list. Along with preparing for cheap, abundant fusion power. And flying cars.

  20. D_Ohrk_E1

    The only thing available to us plebs is ChatGPT based on the GPT-3.5 model. To access the 4.0 version you need to be a paying customer.

    Ask ChatGPT yourself. "Are you ChatGPT 3.5?"

    I can see an LLM being useful as a built-in plugin to mark a huge leap over AIBO. If it can understand you, it can respond better to you. That's exactly the kind of stuffed or robotic animal pet most humans would want.

    As for replacing doctors, that seems like a stretch for an LLM. It's still a tool, not a replacement.

  21. D_Ohrk_E1

    KD, why don't you use ChatGPT to write you some code, maybe a JS file, so that it'll automate the process, perhaps twice a day, of going through your comments and eliminating posts from bots/scams?

  22. frankwilhoit

    The purpose of this, as of all such things, is to prevent the assignment of blame when mistakes are made.

  23. jvoe

    I think an AI designed to intercept phone calls to the elderly, sift them, and decide if they should be passed along, would sell like hot cakes.

    If you take care of an old who will not give up their landline, then you know that their phone does not stop ringing with constant bullshit. I would pay to get mine a buffer. Let's call it 'Sentinel'.....

    1. Austin

      It’s mostly because (1) the elderly still give out their real phone numbers to everyone who asks and (2) the elderly still pick up every ringing phone even when they don’t recognize the number, allowing the caller to know that this number is active and a good candidate for selling to others to also call.

      I’ve trained 2 of my aunts to never do (1) or (2) - give grocery stores and anyone else who you don’t care to really get calls from a fake number, and never pick up the phone if caller ID doesn’t tell you a name you recognize - and their random calls have gone down dramatically over time.

      1. jvoe

        Yep, we've done similar things. I think the biggest source is charities sharing phone #s, which eventually end up in the hands of bad actors.

  24. KenSchulz

    >Feigning empathy is mainly a matter of extreme patience combined with modest insight into human nature, and GPT-4 has both.
    No, it has neither. Just as airplanes don’t need to flap their wings, GPT-4 doesn’t need patience or insight; instead, it simulates patience by lacking a stopping rule; everything else is done with a massive database of language samples plus algorithms to extract statistical properties and accept/derive grammatical rules.
    Anyone who is surprised that GPT-4 anecdotally outperforms human doctors needs to read Robyn Dawes’ “The Robust Beauty of Improper Linear Models in Decision Making”.

  25. ScentOfViolets

    I see a few people who seem to be all-in on the notion, or at least, they seem to have a low opinion of medicine as it is currently practiced. My question is, who do you sue if the Chat-bot misdiagnoses you? As opposed to, say, the doctor who misdiagnoses you?

  26. sdimond

    The medical community already uses algorithms to guide treatment. They are called: "standards of care." There are also various risk predictors used to guide treatment. The risk predictor used by the AHA and ACC to predict cardiovascular risk has spectacularly poor accuracy. It misses more than a third of those who are actually at risk while, if it indicates you are at risk, it is correct only one sixth of the time. When your doctor assures you that you need a statin this is the quality of his judgement.
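    Those two figures can coexist because cardiovascular events are relatively rare in the screened population. A quick back-of-the-envelope check in Python, with the base rate and false-positive rate assumed purely for illustration (they are not taken from the AHA/ACC tool itself):

    ```python
    # Hypothetical cohort showing how a risk predictor can miss more than
    # a third of true cases yet be right only about one time in six when
    # it flags someone. All inputs are assumed for illustration.
    population = 100_000
    base_rate = 0.05       # assume 5% will actually have an event
    sensitivity = 0.65     # catches roughly two-thirds of true cases
    specificity = 0.83     # assume a 17% false-positive rate

    cases = population * base_rate
    non_cases = population - cases

    true_positives = cases * sensitivity
    false_positives = non_cases * (1 - specificity)

    missed_fraction = (cases - true_positives) / cases
    ppv = true_positives / (true_positives + false_positives)

    print(f"Fraction of true cases missed: {missed_fraction:.0%}")
    print(f"Chance a flagged patient is truly at risk: {ppv:.0%}")
    ```

    With these assumed numbers the predictor misses 35% of real cases while a positive flag is correct only about 17% of the time, i.e., roughly one in six, so both complaints about the tool can be true at once.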

    1. KenSchulz

      But is this due to human limitations in processing the available information, or to insufficiency of research and data on risk factors and outcomes?
      The Framingham study has been ongoing over 70 years and three generations. What can an AI do to identify and quantify risk factors faster?

      1. sdimond

        There is a useful study: "Can machine-learning improve cardiovascular risk prediction using routine clinical data?" (If you look at it, be sure to read the supplementary data.) They compared 4 AI approaches with the standard risk predictor. All 4 did slightly better, but the best improvement was only about 3%. The standard lab panel doesn't really probe the actual risk factors. Everyone has their cholesterol level tested because historically that was one of the first tests developed. Its predictive power approaches zero. On the other hand, HbA1c would be a useful test, but it is routinely done only in patients who have already been diagnosed with diabetes, and that is in itself a major risk factor.
