• technocrit@lemmy.dbzer0.com · 6 days ago

    It’s not reliable. The name itself is misleading. The “evidence” is apparently already open. The article doesn’t seem to say whether the statistical model is open. My guess would be no.

    More accurate name: ClosedSummary

  • liv@beehaw.org (OP) · 7 days ago

    Article about an AI that aims to give treatment suggestions to doctors, with some alarming results.

    • liv@lemmy.nz · 7 days ago

      When we look at passing scores, is there any way to quantitatively grade them for magnitude?

      Not all bad advice is created equal.

      • jarfil@beehaw.org · 5 days ago

        The grading is a mess. It mixes qualitative and quantitative measures… plus statistical corrections “to make it fair”.

        Anyway, there’s a ~30% margin on the passing scores, so chances are that 9% is still better than the worst doctor who “passed”.

        • liv@lemmy.nz · 5 days ago

          I’d hope the bar for medical advice is higher than “better than the worst doctor”.

          It will be interesting to see where liability lies with this one. In the example given, following the advice could permanently worsen patients.

          Given that the advice is proven to be wrong and goes against official medical guidance for doctors, that could potentially be material for a class action lawsuit.

          • jarfil@beehaw.org · 5 days ago

            It’s like in the joke: “What do you call someone who barely finished medical school?.. Doctor.”

            Every doctor is allowed to provide medical advice, even those who would do better to keep quiet. As for liability, it’s like what a friend got after a botched operation, when she confronted her doctor: “Sue me, that’s what my insurance is for”.

            I’d like to see the actual final assessment of an AI on these tests, but if it’s just “9% vs 15% error rate”, I’d take it.

            My guess is the AI might not be great at every kind of assessment, but a panel of specialized AIs, like the multiple cooperating specialists we have now, sounds like a reasonable idea. Having a transcript of such a meeting analyzed by a GP could be even better.

            • liv@lemmy.nz · 4 days ago

              I take your point. The version I heard of that joke is “the person who graduated at the bottom of their class in med school”.

              Still, at the moment we can try to avoid those doctors. I’m concerned about the popularization and replication of bad advice beyond that.

              The problem here is this tool is being marketed to GPs, not patients, so you wouldn’t necessarily know where the opinion is coming from.

              • jarfil@beehaw.org · 4 days ago

                There is no realistic way of avoiding those doctors. I’ve been to a GP who, after looking at my medical history and the meds I was taking after a heart attack… slid me a business card for her homeopathic healing practice. 🙄

                Still, I’d hope the majority of doctors would be able to parse an AI’s advice and take it into consideration when giving their own, without blindly depending on it.

                Targeting it at GPs makes sense, since they’re supposed to “know of everything”, but no person is capable of doing that, definitely not of staying up to date on everything. Specialists have a narrower area of knowledge to keep up with, but could also benefit from some AI advice based on latest research.

                • liv@lemmy.nz · 3 days ago

                  I think I’m just going to have to agree to disagree.

                  AI getting a diagnosis wrong is one thing.

                  AI being built in such a way that it hands out destructive advice human scientists already know is wrong, like “vaccines cause autism”, homeopathy, etc., is a malevolent and irresponsible use of tech imo.

                  To me, it’s like watching a civilization downgrade its own scientific progress.