• heavydust@sh.itjust.works · 8 days ago

      Basic safety that should be heavily regulated to prevent medical errors?

      I know we live in the age of JavaScript where we don’t give a fuck about quality anymore but it shouldn’t be encouraged.

    • Enoril@jlai.lu · 8 days ago

      Because, even today, you can’t have, and never will have, a 100% reliable answer.

      You need at least 2 different validators to reduce the probability of errors. And you can’t just run the same AI check twice, since both runs will share the same flaw. You need to check it from a different point of view (be it in terms of technology or resources/people).

      This is the principle we have applied in aeronautics for decades, and even with these layers of precaution and security, accidents still happen.

      ML is like the aircraft industry a century ago: safety rules will be written in the blood of this technology’s victims.
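
      The redundancy argument above can be sketched with back-of-the-envelope numbers (the error rates here are purely illustrative assumptions, not real figures):

      ```python
      p_ai = 0.05     # assumed error rate of one AI validator (illustrative)
      p_human = 0.02  # assumed error rate of a human reviewer (illustrative)

      # Running the same AI twice: the same flaw fires both times,
      # so the errors are fully correlated and nothing improves.
      p_same_ai_twice = p_ai

      # Two *independent* validators: the error only slips through
      # if both miss it, so the probabilities multiply.
      p_independent = p_ai * p_human

      print(f"same AI twice: {p_same_ai_twice:.1%}")   # still 5.0%
      print(f"independent:   {p_independent:.2%}")     # 0.10%
      ```

      The multiplication only holds if the validators fail independently, which is exactly why the second check needs a different technology or a human.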

    • zr0@lemmy.dbzer0.com · 8 days ago

      Let’s say we have a group of 10 people: 7 with cancer, 3 without.

      If the AI detects cancer in 6 of the 7 sick people, that’s a detection rate of 86%.

      If the AI also flags 2 of the 3 healthy people, those 2 get operated on anyway, and since there was nothing to remove, each of those operations is recorded as a 100% success.

      So operating on healthy people always looks like a success, and the AI is trained on successes. That’s why a human should look at the scans too, for now.

      • reksas@sopuli.xyz · 8 days ago

        for now and always. medicine is something you don’t want to entrust to automation.

        • zr0@lemmy.dbzer0.com · 8 days ago

          Well, theoretically, an organism is nothing but a system running fully automatically. So I can see the possibility of having it fixed by another system. In the meantime, AI should support doctors by making the invisible visible.