In new study, AI finds 20% more breast cancers than human radiologists

A recent study in Sweden randomly assigned breast cancer screenings to either an AI or a human radiologist. Guess which one did better?

There were 244 screen-detected cancers...in the intervention group and 203 cancers...in the control group. Compared with the control group, the screen-reading workload of radiologists was reduced by 44.3% using AI-assisted techniques.

That's a 20.2% improvement in cancer detection with roughly half as much work from radiologists. In blunter terms, it means that if AI were used for everyone, a whole bunch of radiologists could be let go—and those remaining would get paid a lot less.

I don't imagine this will be allowed to happen without a fight. If AI improves and further testing results are positive, hospitals will be eager to use it. The American Board of Radiology might be less thrilled.

26 thoughts on “In new study, AI finds 20% more breast cancers than human radiologists”

  1. Scurra

    It's not AI. You know that. It's pattern matching. It's very good pattern matching, nobody is denying that. But it's just pattern matching at speed.

    I disagree with the conclusion of paragraph 2. I mean, it's completely correct in the capitalist sense, because the spreadsheet decision makers don't care about anything other than the bottom line, but it should be incorrect in the broader sense, because it ought to mean that radiologists would be able to do more interesting and useful stuff instead of merely pattern matching at speed (which is what I guess people would tend to think is all that radiologists do).

    1. J. Frank Parnell

      Human beings are notoriously bad at repetitive boring tasks, and reviewing mammograms falls into this category. Computers, on the other hand, can be great at repetitive boring tasks.

    2. peterlorre

      This is correct, and this observation is not even particularly new: "AI"-driven histological diagnosis has been a thing for about a decade, and the life of pathologists has changed a lot, from spending 90% of their time looking at slides under a microscope to other, less repetitive and error-prone work.

      1. Coby Beck

        I could agree with "pattern matching is AI" but not the reverse. AI is neural nets, pattern matching, knowledge bases, natural language parsing, machine learning, heuristic search algorithms and more.

  2. birdbrain

    No.

    It isn't "either an AI or a human radiologist." It's either a human radiologist or a human radiologist working in a workflow that includes an AI assistant.

    This fact is clear even just in the abstract, which is obviously all Kevin read.

    The reduction in workload is a big deal, but isn't going to lead to a bunch of radiologists getting laid off.

    The increase in true positive rate is almost certainly good, but it's complicated by a whole lot of things that the paper is careful to caveat (severity, subtype, hard-to-measure false negative rates), and that Kevin was not.

    Also keep in mind that breast cancer is one of the most heavily researched cancers in the literature, so these are likely among the better (and easiest to assess) results currently obtainable.

  3. DrPath

    The paper describes AI-assisted screening. This is not a new thing. Computer screening of slides or radiographs for anomalies, which are then identified by a physician, has been in use since the nineties. These systems went somewhat out of fashion because they were both slower and more error prone than just having the material scanned by the MD de novo. The paper asserts that this is no longer the case.

  4. Austin

    “In blunter terms, it means that if AI were used for everyone a whole bunch of radiologists could be let go…”

    Only if those radiologists do nothing else but screen for breast cancer. I don’t know about California, but here in Virginia radiologists screen for lots of other problems besides breast cancer. And I assume those other problems still need screening.

    1. Austin

      And yes, before someone says “yeah but Kevin’s point is that AI can screen for everything,” no evidence has been submitted proving it’s better than humans at actually screening for the hundreds of problems radiologists screen for now. Just one study says AI screens better for one problem (breast cancer) than humans. Seems premature to conclude that radiologists can be mass fired immediately.

  5. Pittsburgh Mike

    Well, I'm not paying for the article, but I'll note that the abstract doesn't mention the false negative or false positive rate. If you don't care about the latter, I can write something that catches 100% of all cancers: just always claim there's a cancer present.

    That being said, I don't doubt that an AI would be a good second set of eyes looking for tumors. But given the large # of hallucinations in some AI applications, I definitely wouldn't trust one today to be the sole reader of any x-rays.

  6. D_Ohrk_E1

    Seems to me, this allows radiologists to speed up their reviews while still charging as much as they do right now.

    Remember, most of radiology is subcontracted, even if facilities are located within a hospital. Once you get that scan, it's in the hands of that subcontractor.

  7. D_Ohrk_E1

    Speaking of following the science in your posts regarding the origins of SARS-CoV-2, do you follow the science on COVID transmission in schools?

    https://mstdn.social/@erictopol/110833277043922812

    Relevant to schools starting up soon is a new report on K-12 2° #SARSCoV2 transmission. It is low (~2-3%), but significantly reduced w/masks (by 88%), vaccination (by 96%), increased 2.5 fold in classroom vs out-of-classroom

    1. Special Newb

      Yeah schools should be the first thing opened and last thing closed. School levels are largely a reflection of community spread so a non-sociopathic society would close indoor bars and clubs before schools.

  8. DudePlayingDudeDisguisedAsAnotherDude

    First of all, there's no such thing as AI. It's hype. Having said that, where AI could be useful would be in medical diagnosis, and not only pattern matching as is the case here. The ability to examine multiple data sets and find their intersections would be a powerful tool.

  9. cephalopod

    It would be interesting to know the rate of false positives, and to know if there are any patterns in either the false positives or false negatives. The varieties of breast cancer are not evenly distributed across populations, after all.

    But I'm also curious about drift. AI systems have shown themselves to become worse at some tasks over time. If we move a lot of processes to AI, how often will their accuracy be checked?

  10. zic

    Detecting more breast cancer is not necessarily a good thing. We already know that it can lead to over-treatment of cancers that would otherwise never be a problem.

    But it's important to understand that detecting more breast cancer does not necessarily lead to better medical outcomes for patients.

  11. Adam Strange

    A few years ago, I worked with a startup which was developing a camera to detect melanomas (skin cancer).
    The camera took a picture of the dark area of the skin, and looked for randomness in the perimeter, because cancer (the Crab) spreads randomly.

    At the time, the most accomplished doctors who were examining patients and trying to determine if a dark spot was a benign mole or a deadly skin cancer somehow trained themselves to recognize melanoma in most cases. However, the software in the startup's camera had a much, much better track record (more real positives, fewer false positives) than human doctors had in identifying melanomas.

    It wasn't AI, because AI wasn't needed. It was just pattern recognition. Humans are not great at seeing true randomness, while machines can be made to do so quite well.

    Now, has this product displaced any doctors, or "allowed them to go on and do more interesting work"? Not to my knowledge. The AMA is an incredibly strong union, and it has successfully defended doctor jobs and doctor wages for a long time, using various bullshit arguments and gateway strategies, while the Teamsters have not.

  12. Chondrite23

    Depending on the data set people have developed great mathematical tools to find targets among the noise. There will still be false positives that need to be confirmed by humans using other tests.

    In various fields we now collect vast amounts of data. Far too much data for individuals to inspect even if you had enough trained experts. The only way to process this is with mathematical tools. Even so, you still need training to interpret the results of the machine screening.

  13. skeptonomist

    As I have often said, medical practice is an area where automation could be used much more. People go to medical school for years to cram the information into their heads, but at this point there's really too much of it to absorb and process. There are too few physicians so they don't have time to keep up with the latest developments. The profession, through the AMA, one of the largest lobbying groups, uses various tactics to keep the number down. The government could spend a lot more money to train physicians, as other countries do, but it could also support the development of automated diagnostics. The free market seems to be finding that developing chatbots for mostly entertainment purposes is likely to lead to higher profits.

  14. Justin

    YouTube premium monthly cost increased by $2 per month. I guess that AI isn’t reducing costs yet. The Enshittification of the internet continues.

    Meanwhile, the AI takeover has begun… reconnaissance attacks probing for weaknesses.

    https://www.cbsnews.com/miami/news/customers-report-missing-deposits-from-wells-fargo-bank-accounts/

    “Wells Fargo is dealing with a technical issue that has resulted in customers reporting that their direct deposits had disappeared from their bank accounts.”

    Technical issue. Hilarious. Fake accounts then theft. 😂

  15. azumbrunn

    One set of data would interest me: What is the variation in getting it right among radiologists? Are there those that are just as good as (or, God forbid, better than) AI, and those that flunk a lot? Or are all humans about equal in performance? I think somewhat different conclusions would follow from those two possibilities.

    Also: Is the better performance of AI due to being better at the job than humans, or to the fact that AI is always in top form while humans are sometimes tired or have a hangover?

    Also: How many false positives for AI vs. humans?

  16. dilbert dogbert

    I don't know if cancer radiographs are read in India, but maybe a decade ago I read that a lot of radiographs were.

  17. shapeofsociety

    The decision to use this tool will rest with the hospital administrators, not the radiologists. The tool will be used. There will be no fight.

  18. Goosedat

    US radiology has already been outsourced to India using telemed tech. Is the AI being developed to outperform radiologists also being outsourced to Indian developers?

  19. MikeDotNet

    That headline!!?? You mean AI finds that there are more breast cancer screenings than there are radiologists? Oh, and I'm on team "this isn't AI". I'd call it machine learning, but I can live with pattern matching.

Comments are closed.