• jarfil@beehaw.org
    5 days ago

    It’s like in the joke: “What do you call someone who barely finished medical school?.. Doctor.”

    Every doctor is allowed to provide medical advice, even those who should better shut up. Liabilities are like what a friend got after a botched operation, when confronting her doctor: “Sue me, that’s what my insurance is for”.

    I’d like to see the actual final assessment of an AI on these tests, but if it’s just “9% vs 15% error rate”, I’d take it.

    My guess would be that AI might not be great at all kinds of assessments, but having a panel of specialized AIs, like we now have multiple specialists cooperating, sounds like a reasonable idea. Having a transcript of such a meeting analyzed by a GP could be even better.

    • liv@lemmy.nz
      4 days ago

      I take your point. The version I heard of that joke is “the person who graduated at the bottom of their class in med school”.

      Still, at the moment we can try to avoid those doctors. I’m concerned about the popularization and replication of bad advice beyond that.

      The problem here is this tool is being marketed to GPs, not patients, so you wouldn’t necessarily know where the opinion is coming from.

      • jarfil@beehaw.org
        4 days ago

        There is no realistic way of avoiding those doctors. I’ve been to a GP who, after looking at my medical history and the meds I was taking after a heart attack… slid me a business card for her homeopathic healing practice. 🙄

        Still, I’d hope the majority of doctors would be able to parse through an AI’s advice and take it into consideration, rather than blindly depend on it, when giving their own advice.

        Targeting it at GPs makes sense, since they’re supposed to “know of everything”, but no person is capable of that, and definitely not of staying up to date on everything. Specialists have a narrower area of knowledge to keep up with, but they could also benefit from some AI advice based on the latest research.

        • liv@lemmy.nz
          3 days ago

          I think I’m just going to have to agree to disagree.

          AI getting a diagnosis wrong is one thing.

          AI being built in such a way that it hands out destructive advice human scientists already know is wrong, like “vaccines cause autism”, homeopathy, etc., is a malevolent and irresponsible use of tech imo.

          To me, it’s like watching a civilization downgrading its own scientific progress.

          • jarfil@beehaw.org
            3 days ago

            Is AI handing out destructive advice to medical professionals, though?

            It seems to me like it’s still working as a summarizing service, taking in vast amounts of information sources that no human would be able to process in a lifetime, and handing out recommendations about which paths a doctor might want to pursue further.

            We live in a world where information generation long ago surpassed anyone’s ability to grasp it all; long gone are the days of polymaths like da Vinci, or even Euler. International communication outgrew human ability around the 18th century, and we’ve gone multiple orders of magnitude farther in the Internet age.

            Just like Google was barely enough to search for information, we’re now at the point where AI summaries are barely enough to surface data that would otherwise remain hidden.

            I agree that these summarizing services need oversight to avoid malevolent and irresponsible uses or manipulations, and I think recent EU AI legislation is on the right track to tackle that.

            The systems will require improvements and refinements over time, but that’s kind of expected.