• jarfil@beehaw.org
    4 days ago

    There is no realistic way of avoiding those doctors. I’ve been to a GP who, after looking at my medical history and the meds I was taking after a heart attack… slid me a business card for her homeopathic healing practice. 🙄

    Still, I’d hope the majority of doctors would be able to parse through an AI’s advice and take it into consideration, rather than blindly depend on it, when giving their own.

    Targeting it at GPs makes sense, since they’re supposed to “know about everything”, but no one person is capable of that, let alone of staying up to date on all of it. Specialists have a narrower area of knowledge to keep up with, but they could also benefit from AI advice based on the latest research.

    • liv@lemmy.nz
      3 days ago

      I think I’m just going to have to agree to disagree.

      AI getting a diagnosis wrong is one thing.

      AI being built in such a way that it hands out destructive advice human scientists already know is wrong (that vaccines cause autism, that homeopathy works, etc.) is a malevolent and irresponsible use of tech, imo.

      To me, it’s like watching a civilization downgrade its own scientific progress.

      • jarfil@beehaw.org
        3 days ago

        Is AI handing out destructive advice to medical professionals, though?

        It seems to me like it’s still working as a summarizing service: taking in vast amounts of source material that no human could process in a lifetime, and handing out recommendations about which paths a doctor might want to pursue further.

        We live in a world where information generation long ago surpassed anyone’s ability to grasp it all; the days of polymaths like da Vinci, or even Euler, are long gone. International communication outgrew human ability to keep up around the 18th century, and we’ve gone multiple orders of magnitude further in the Internet age.

        Just like Google was barely enough to search for information, we’re now at the point where AI summaries are barely enough to surface data that would otherwise remain hidden.

        I agree that these summarizing services need oversight to avoid malevolent and irresponsible uses or manipulations, and I think recent EU AI legislation is on the right track to tackle that.

        The systems will require improvements and refinements over time, but that’s kind of expected.