paywall bypass: https://archive.is/whVMI

the study the article is about: https://www.thelancet.com/journals/langas/article/PIIS2468-1253(25)00133-5/abstract

article text:

AI Eroded Doctors’ Ability to Spot Cancer Within Months in Study

By Harry Black

August 12, 2025 at 10:30 PM UTC

Artificial intelligence, touted for its potential to transform medicine, led to some doctors losing skills after just a few months in a new study.

AI helped health professionals to better detect pre-cancerous growths in the colon, but when the assistance was removed, their ability to find tumors dropped by about 20% compared with rates before the tool was ever introduced, according to findings published Wednesday.

Health-care systems around the world are embracing AI with a view to boosting patient outcomes and productivity. Just this year, the UK government announced £11 million ($14.8 million) in funding for a new trial to test how AI can help catch breast cancer earlier.

The AI in the study probably prompted doctors to become over-reliant on its recommendations, “leading to clinicians becoming less motivated, less focused, and less responsible when making cognitive decisions without AI assistance,” the scientists said in the paper.

They surveyed four endoscopy centers in Poland and compared detection success rates three months before AI implementation and three months after. Some colonoscopies were performed with AI and some without, at random. The results were published in The Lancet Gastroenterology & Hepatology.

Yuichi Mori, a researcher at the University of Oslo and one of the scientists involved, predicted that the effects of de-skilling will “probably be higher” as AI becomes more powerful.

What’s more, the 19 doctors in the study were highly experienced, having performed more than 2,000 colonoscopies each. The effect on trainees or novices might be starker, said Omer Ahmad, a consultant gastroenterologist at University College Hospital London.

“Although AI continues to offer great promise to enhance clinical outcomes, we must also safeguard against the quiet erosion of fundamental skills required for high-quality endoscopy,” Ahmad, who wasn’t involved in the research, wrote in a comment published alongside the article.

A study conducted by MIT this year raised similar concerns after finding that using OpenAI’s ChatGPT to write essays led to less brain engagement and cognitive activity.

  • RogueBanana@piefed.zip · 1 day ago

    Also very apparent in IT. Juniors blindly generating garbage and coming to me when the shit they blindly create doesn’t work. Got to drill them with questions to make them actually learn something. Concerning that the same is happening in medical even for the experts.

    • mindbleach@sh.itjust.works · 1 day ago

      It sounds like this is about when they stopped using AI.

      If they do better with it than without it, why optimize how good they are without it? Like, I know how to do math, by hand. But I also own a calculator. If the speed and accuracy of my multiplication are life-and-death for worried families, maybe I should use the calculator.

      • Baggie@lemmy.zip · 1 day ago

        If you use a calculator and it gives you back a number that can’t possibly be right, you know there’s an error somewhere along the line.

        If you’ve never done multiplication before, you won’t have that innate sense of what looks right or wrong.

          • Baggie@lemmy.zip · 1 day ago

            It’s an analogy. It’s referring to the original comment where people don’t have the skills to recognise how or why something doesn’t work. The core problem is without that fundamental understanding of what you’re trying to do, you don’t know why something doesn’t work.

            • mindbleach@sh.itjust.works · 22 hours ago

              No shit, it’s my analogy. And I made clear - the underlying skill still exists.

              These doctors can still spot cancer. They’re just rusty at eyeballing it, after several months using a tool that’s better than their eyeballs.

              X-rays probably made doctors worse at detecting tumors by feeling around for lumps. Do you want them to fixate on that skill in particular? Or would you prefer medical care that uses modern technology?

              • Baggie@lemmy.zip · 21 hours ago

                Well you need to work on your communication skills as much as you do on your tone then.

                You clearly are more focused on being argumentative and obtuse than on engaging with the argument: skills need to be developed before you assign all the work to a machine that automates the process, because errors can and will occur.

                Enjoy spending all your life entering every discussion predisposed to anger and argument, I’ve got better things to do with my time.

                • mindbleach@sh.itjust.works · 20 hours ago (edited)

                  Tone policing, followed by essentialist insults. Zero self-awareness.

                  Meanwhile, I’ve repeatedly pointed out: these doctors have the skills. The machine only helps. You can’t or won’t engage with that.

      • RogueBanana@piefed.zip · 1 day ago (edited)

        No, this is about me trying to fix their buggy AI code when they have no idea how it works or why it isn’t working. If you can do your work completely on your own without issues, then whatever, but if you’re breaking stuff and coming to me for help because you don’t know how your own code works, that’s a massive problem. I don’t mind teaching people, I actually enjoy it, but only when you’re putting in effort to learn instead of copy-pasting code from Copilot.

        • mindbleach@sh.itjust.works · 21 hours ago

          Okay cool, that’s not what’s happening here.

          These aren’t “vibe doctors.” They’re trained oncologists and radiologists. They have the skill to do this without the new tool, but if they don’t practice it, that skill gets worse. Surprise.

          For comparison: can you code without a compiler? Are you practiced? It used to be fundamental. There must be e-mails lamenting that students rely on this newfangled high-level language called C. Those kids’ programs were surely slower… and ten times easier to write and debug. At some point, relying on a technology becomes much smarter than demonstrating you don’t need it.

          If doctors using this tool detect cancer more reliably, they’re better doctors. You would not pick someone old-fashioned to feel around and reckon about your lump, even if they were the best in the world at discerning tumors by feel. You’d get an MRI. And you’d want it looked at by whatever process has the best detection rates. Human eyeballs might be in second place.

          • RogueBanana@piefed.zip · 20 hours ago

            I never implied they are vibe doctors? It’s just a comment on my annoying experience, don’t read too much into it.

            • mindbleach@sh.itjust.works · 19 hours ago

              “Concerning that the same is happening in medical even for the experts.”

              It isn’t.

              Glad we cleared that up?

                • mindbleach@sh.itjust.works · 10 hours ago

                  No. You’re making a faulty comparison. The thing in this article is exclusively for experts. Using it made them better doctors, but when they stopped using it, they were out-of-practice at the old way. Like any skill you stop exercising. Especially at an expert level. Your junior programmers incompetently trusting LLMs is not the same problem in any direction.

                  This is genuinely important, because people are developing prejudice against an entire branch of computer science. This stupid headline pretends AI made cancer detection worse. Cancer’s kind of a big deal! Disguising the fact that detection rates improved with this tool, by fixating on how they got worse without it, may cost lives.

                  A lot of people in this thread are theatrically advocating the importance of deep understanding of complex subjects, and then giving a kneejerk “fuckin’ AI, am I right?”

      • If you’re doing it once, then that’s fine. But if you have to do it loads of times, and things keep getting more complex, you’ll find that you won’t be able to correctly use the tools anymore or spot their mistakes.

        AI raises your skill level a bit, but also stunts your growth if used irresponsibly. And that growth may be necessary later on, especially if you’re still a junior in the field.

        • mindbleach@sh.itjust.works · 19 hours ago

          Should urologists still train to detect diabetes by taste? We wouldn’t want the complexity of modern medicine to stunt their growth. These quacks can’t sniff piss with nearly the accuracy of Victorian doctors.

          When a tool gets good enough, not using it is irresponsible. Sawing lumber by hand is a waste of time. Farmers today can’t use scythes worth a damn. Programming in assembly is frivolous.

          At what point do we stop practicing without the tool? How big can the difference be, and still be totally optional? It’s not like these doctors lost or lacked the fundamentals. They’re just rusty at doing things the old way. If the new way is simply better, good, that’s progress.

          • It’s true that if a tool is objectively better, then it makes little sense to not use it.

            But LLMs aren’t that good yet. There’s a reason senior developers are complaining about vibecoding juniors; their code quality is often just bad. And when pressed, they often can’t justify why their code is a certain way.

            As long as experienced developers are able to do proper code review, the quality control is maintained. But a vibecoding developer isn’t good at reviewing. And code review is an absolutely essential skill to have.

            I see this at my company too. There’s a handful of junior devs that have managed to be fairly productive with LLMs. And to the LLM’s credit, the code is better than it was without it. But when I do code review on their stuff and ask them to explain something, I often get a nonsensical, AI-generated response. And that is a problem. These devs also don’t do a lot of code review, if any, and when they do they often have very minor comments or none at all. Some just don’t do any reviews, stating they’re not confident approving code (which is honest, but also problematic of course).

            I don’t mind a junior dev, or any dev for that matter, using an LLM as an assistant. I do mind an LLM masquerading as a developer, using a junior dev as a meat puppet, if you get what I mean.

            • mindbleach@sh.itjust.works · 16 hours ago

              We’re not talking about LLMs.

              These doctors didn’t ask ChatGPT “does this look like cancer.” We’re talking about domain-specific medical tools.

      • subignition@piefed.social · 24 hours ago

        Because “AI” tools are unsustainable, and it would be better not to have destroyed your actual skill when the bubble eventually pops.

        • mindbleach@sh.itjust.works · 23 hours ago

          This is not that kind of AI. It’s not an LLM trained on WebMD. You cannot reason about this domain-specific medical tool, based on your experience with ChatGPT.