• paequ2@lemmy.today · 3 days ago
    • Three months before AI was introduced, the adenoma detection rate (ADR) was around 28%.
    • Three months after AI was introduced, the rate dropped to 22% when clinicians were unassisted by AI.
    • The study found that AI did help endoscopists with detection while it was in use, but once the assistance was removed, clinicians were worse at detection than they had been before it was introduced.

    What a strange place to be. Detection went up with AI, which is good. But now you’re at the mercy of the AI companies, hoping they don’t double the price, and if you can’t pay, you end up worse off than when you started.

    Also, this quote stood out to me.

    “Often, we expect there to be a human overseeing all AI decision-making but if the human experts are putting less effort into their own decisions as a result of introducing AI systems this could be problematic.”

    YES. I see this all the time. My coworkers tend to rubber-stamp AI-generated code. Which makes sense: if they were too lazy to think through a problem, why would they suddenly be meticulous about fact-checking AI slop?

  • takeda@lemmy.dbzer0.com · 3 days ago

    It will have even more severe effects.

    One of the best ways to learn critical thinking is writing essays in class about different subjects. The reason is that you can’t just say you support something because you feel like it; you have to back it up with evidence.

    With people using ChatGPT to write essays, our society becomes dumber. And this at a time when we need critical thinking more than ever.

    • Catoblepas@piefed.blahaj.zone · 3 days ago

      Oh, I’m positive they’ve long since stopped teaching that, at least in poor districts. My youngest sister, Gen Z, was an honors student who literally wasn’t taught anything about how to write an essay, not even your basic five-paragraph ‘in this essay I will’ essay. My Zillennial middle sister also struggled with essays, but she at least had the idea and just wasn’t great at it.

      For comparison, I went through the same school system before No Child Left Behind went into effect, and we spent what I remember as at least a few full weeks of instruction in high school English class, if not more, doing nothing but essays because ‘you’ll need it for college,’ along with regular essay homework assignments.

  • skisnow@lemmy.ca · 3 days ago

    I saw this same article posted over on r/ChatGPT and every single top comment is people saying “so what, why does the doctor need to be skilled if the AI can do it” 🤦‍♂️

    • AnarchistArtificer@slrpnk.net · 2 days ago

      A podcast I listened to recently talked about failure modes of AI. They used the example of a toll bridge in Denmark that recently became impassable because it only took card payments and the payment processing system was down. The sensible failure mode in that scenario would have been for the toll barrier to open and for them to just let cars through whenever technical problems make it impossible for people to pay. Unfortunately, that wasn’t the case, and no one had the ability to manually make the barrier go up. Apparently they ended up having to dismantle the barrier while the payment system was down.

      This is very silly, and it highlights one of the big dangers of how AI systems are currently being used (even though I don’t think this particular failure involved AI at all, just regular tech problems). Tech can be awesome at empowering us, but we need to ask “okay, what happens when things go wrong?”, and we need to ask that question in a manner that puts humans at the centre.

      That was a far more trivial scenario than the one described in the article. If AI tools help improve detection rates, that’s awesome. But we need to actually address what happens if those technologies cease to be available (whether because the tools rely on proprietary models, because of power outages, or for countless other reasons things could go wrong).
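      To make the fail-open idea from the bridge story concrete, here’s a minimal sketch, in Python with entirely hypothetical names (the real Danish system is surely built nothing like this), of a barrier controller whose failure mode is “open” when the payment backend is unreachable:

      ```python
      # Hypothetical fail-open barrier controller. The interesting part is the
      # except branch: when we cannot know whether payment is possible, we let
      # traffic through instead of trapping cars at a barrier nobody can lift.

      class PaymentServiceError(Exception):
          """Raised when the payment processor is unreachable."""

      def charge_toll(card_token: str) -> bool:
          # Stand-in for a real payment-processor call; here it simulates
          # the outage described above.
          raise PaymentServiceError("payment backend is down")

      def handle_vehicle(card_token: str) -> str:
          try:
              if charge_toll(card_token):
                  return "open barrier (toll paid)"
              return "keep barrier closed (payment declined)"
          except PaymentServiceError:
              # Fail open, and log the event so lost tolls can be reconciled
              # once the payment system comes back.
              print("WARN: payment system down, failing open")
              return "open barrier (fail-open)"

      print(handle_vehicle("tok_123"))  # -> open barrier (fail-open)
      ```

      Whether to fail open or fail closed depends on what’s at stake (a toll barrier should fail open; a bank vault should fail closed), but that decision has to be made by a human in advance, not discovered during the outage.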

      • skisnow@lemmy.ca · 2 days ago

        I suspect the whole problem could be avoided with some judicious UX that forces doctors to make and log their own assessments before the AI results are shown.
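        A minimal sketch of that “commit before reveal” gate, in Python with hypothetical names (Case, record_assessment, reveal_ai_findings); the article doesn’t describe any real system like this:

        ```python
        # Hypothetical sketch: the clinician must log their own read of a case
        # before the UI will unlock the AI's suggestions.

        from dataclasses import dataclass, field

        @dataclass
        class Case:
            case_id: str
            ai_findings: list[str]
            clinician_assessment: str | None = None
            audit_log: list[str] = field(default_factory=list)

        def record_assessment(case: Case, assessment: str) -> None:
            case.clinician_assessment = assessment
            case.audit_log.append(f"clinician logged: {assessment}")

        def reveal_ai_findings(case: Case) -> list[str]:
            # Hard gate: no AI output until the human has committed an answer.
            if case.clinician_assessment is None:
                raise PermissionError("log your own assessment first")
            case.audit_log.append("AI findings revealed")
            return case.ai_findings

        case = Case("case-42", ai_findings=["possible polyp at 14 cm"])
        record_assessment(case, "no polyps seen")
        print(reveal_ai_findings(case))  # ['possible polyp at 14 cm']
        ```

        A side benefit: logging both answers gives you exactly the data this study needed, a running comparison of unassisted versus assisted detection rates.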

      • tarknassus@lemmy.world · 3 days ago

        “Is there a doctor in the house?”

        “Yes. Let me load up ChatGPT. Ah damn, no signal. Sorry guys.”

        • Catoblepas@piefed.blahaj.zone · 3 days ago

          That would never happen! When have you ever been to a doctor’s appointment where the computer and network didn’t work instantly and seamlessly?! 🤪