• heavydust@sh.itjust.works · +42/−3 · 6 days ago

    In the medical industry, AI should stick to “look at this, it may be <ILLNESS> and you must confirm it.” Any program that says “100% outperforms doctors” is bullshit and dangerous.

    • FauxLiving@lemmy.world · +12 · 5 days ago

      In the medical industry, AI should stick to “look at this, it may be <ILLNESS> and you must confirm it.”

      Who said that this isn’t the planned use case? The article is reporting on the results of a test, not suggesting that AI can replace doctors.

      Any program that says “100% outperforms doctors” is bullshit and dangerous.

      That’s nonsense.

      A CPU 100% outperforms a mathematician at arithmetic, a crane 100% outperforms the strongest human, and a shovel can dig faster than your hands. Radar, lidar, optics, etc. are all technologies that perform well beyond human capabilities.

      Robotic surgery 100% outperforms doctors. Medical imaging 100% outperforms human doctors. Having a model that can interpret the images better than people isn’t at all surprising or dangerous.

      It’s only the fact that you’ve implied that this will replace doctors that makes it sound scary. But that implication isn’t supported by facts.

        • mrcleanup@lemmy.world · +5/−2 · 5 days ago

          All the previous examples were things operated by humans: shovel, crane, even the robotic surgery.

          I am sure we can teach AI to do some or all of these someday, but pointing to one of them as if it were completely autonomous makes it seem like you aren’t paying attention, aren’t participating in the discussion in good faith, and are just fishing for a “gotcha!” moment.

          That’s why you are getting downvotes, in case you are curious.

          If you do have a good faith argument, clarifying it might get people to listen to you and consider it.

          • HubertManne@piefed.social · +1 · 5 days ago

            I read it as replacing doctors, but yeah. I mean, even current crappy AI chatbots can increase the productivity of a human. Granted, we have so thinned our systems by prioritizing efficiency over quality that I’m not sure we will see much of an effect until we have a society where people are relatively satisfied with how it functions.

      • heavydust@sh.itjust.works · +29/−1 · 5 days ago

        Basic safety that should be heavily regulated to prevent medical errors?

        I know we live in the age of JavaScript where we don’t give a fuck about quality anymore, but it shouldn’t be encouraged.

      • Enoril@jlai.lu · +13 · 5 days ago

        Because, even today, you can’t have, and never will have, a 100% reliable answer.

        You need at least 2 different validators to reduce the probability of errors. And you can’t just run the same check twice with AI, as both runs will share the same flaws. You need to check it from a different point of view, whether in terms of technology or of the people and resources involved (a rough sketch of the idea follows below).

        This is the principle we have applied in aeronautics for decades, and even with these layers of precaution and safety, you still have accidents.

        ML is like the aircraft industry a century ago: safety rules will be written in the blood of this technology’s victims.
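
        A minimal sketch of that dissimilar-redundancy idea (my own illustration; both stub models are hypothetical placeholders, not real systems):

        ```python
        import random

        # Two independently built validators; any disagreement is escalated
        # to a human instead of being trusted. Both models are random stand-ins.
        def model_a(scan: bytes) -> bool:
            return random.random() > 0.5  # stand-in for e.g. a CNN classifier

        def model_b(scan: bytes) -> bool:
            return random.random() > 0.5  # stand-in for a differently built system

        def screen(scan: bytes) -> str:
            a, b = model_a(scan), model_b(scan)
            if a != b:
                return "validators disagree -> route to human review"
            return "flagged: possible cancer" if a else "clear"

        print(screen(b"fake-scan-bytes"))
        ```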

      • zr0@lemmy.dbzer0.com · +4 · 5 days ago

        Let’s say we have a group of 10 people: 7 with cancer, 3 without.

        If the AI detects cancer in 6 of the 7 sick people, that’s a detection rate of 86%.

        If the AI also flags 2 of the 3 healthy people, those are false positives, but each one still looks like a success: the patient is treated and the “cancer” never comes back.

        So operating on the healthy ones always leads to a success, and an AI trained on success signals will learn to over-flag. That’s why a human should look at the scans too, for now.
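
        A quick way to check those numbers in code (my own toy sketch mirroring the 10-person group above):

        ```python
        # 10 people: 7 with cancer (1), 3 healthy (0)
        truth = [1] * 7 + [0] * 3
        # The model catches 6 of the 7 cancers but also flags 2 healthy people
        preds = [1, 1, 1, 1, 1, 1, 0, 1, 1, 0]

        tp = sum(1 for t, p in zip(truth, preds) if t == 1 and p == 1)  # 6 true positives
        fp = sum(1 for t, p in zip(truth, preds) if t == 0 and p == 1)  # 2 false positives
        fn = sum(1 for t, p in zip(truth, preds) if t == 1 and p == 0)  # 1 missed cancer

        print(f"sensitivity: {tp / (tp + fn):.0%}")  # 86% of real cancers found
        print(f"precision:   {tp / (tp + fp):.0%}")  # only 75% of flags are real
        ```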

        • reksas@sopuli.xyz · +4 · 5 days ago

          For now, and always. Medicine is something you don’t want to entrust to automation.

          • zr0@lemmy.dbzer0.com · +2 · 5 days ago

            Well, theoretically, an organism is nothing but a system running fully automatically, so I can see the possibility of having it fixed by another system. In the meantime, AI should support doctors by making the invisible visible.

  • peoplebeproblems@midwest.social · +25/−1 · 5 days ago

    This is what AI should be used for. Not the generative crap ChatGPT peddles.

    AI is perfect for applications that look at tons of different variables for specific patterns, and a model can be trained on new data more cheaply than training every doctor in the country.

    A doctor’s first and primary goal is keeping a patient alive. Second is to normalize quality of life. Third is to minimize suffering when possible.

    There is a HUGE and artificial shortage of doctors and healthcare providers in this country, and in much of the world. They honestly don’t have enough time to review every patient’s records and symptoms and make a diagnosis and treatment plan, THEN do their continuing education and licensing requirements, AND do any research if they are mandated to do so by their employer, AND, if they are at a teaching hospital, teach.

    These AI tools can look at an entire medical record, symptoms, laboratory results, and pathology images and make a very accurate diagnosis that is always run by a physician before a determination is made. AI doesn’t forget what it’s learned, either.

  • ödd (they/them)@lemmy.blahaj.zone · +21/−1 · 6 days ago

    I really wish “AI” would die as a term; the machine vision and convolutional neural networks used in this application don’t have much to do with the large language models most people think of with the modern incarnation of “AI.”

    • iii@mander.xyz · +2 · 5 days ago

      don’t have much to do with the large language models

      On a technical level I disagree: they’re only using one convolution layer. The biggest change compared to previous work on the same dataset is the gated MLP, an idea inspired by transformers (1), which in turn spawned the LLMs that are now hyped.
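
      For the curious, a minimal sketch of the gating idea (my own PyTorch illustration; the names and sizes are made up, not taken from the paper):

      ```python
      import torch
      import torch.nn as nn

      class GatedMLP(nn.Module):
          """The input is projected twice; one projection multiplicatively
          gates the other, similar in spirit to transformer-style gating."""
          def __init__(self, dim: int, hidden: int):
              super().__init__()
              self.value = nn.Linear(dim, hidden)  # content path
              self.gate = nn.Linear(dim, hidden)   # gating path
              self.out = nn.Linear(hidden, dim)

          def forward(self, x: torch.Tensor) -> torch.Tensor:
              # The sigmoid gate lets each hidden feature be suppressed or passed
              return self.out(self.value(x) * torch.sigmoid(self.gate(x)))

      x = torch.randn(8, 64)             # a batch of 8 feature vectors
      print(GatedMLP(64, 128)(x).shape)  # torch.Size([8, 64])
      ```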

      In general, I agree that AI is a useless marketing term.

  • LeFrog@discuss.tchncs.de · +15/−4 · edited · 5 days ago

    I am able to identify 100% of cancer: just say “It is cancer” to each picture.

    The article does not mention any metrics other than detection rate. What about recall, etc.? Without them, this news is basically worthless.

    I stand corrected; see the comments below. While the article still lacks important context, accuracy is well defined for this topic.

    • iii@mander.xyz · +20 · edited · 5 days ago

      Accuracy in a classification context is defined as (N correct classifications / total classifications). So classifying everything as cancer would, in a balanced dataset, give you ~50% accuracy.
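
      A two-line check of that claim (hypothetical balanced dataset):

      ```python
      labels = [1] * 500 + [0] * 500  # balanced: 500 cancer, 500 healthy
      preds = [1] * len(labels)       # classify everything as cancer

      accuracy = sum(p == t for p, t in zip(preds, labels)) / len(labels)
      print(accuracy)  # 0.5 -> ~50% accuracy, as described above
      ```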

      This article is indeed badly written PR fluff. I linked the paper in a sister comment. Both the confusion matrix and the ROC curve look phenomenal. Train/test/validation split seems fine too, as do the training diagnostics, so I’m optimistic that it isn’t a case of overfitting.

      Of course, 3rd-party replication would be welcome, and I can’t speak to the medical relevance of the dataset. But the computer vision side of things seems well executed.

    • stray@pawb.social · +7 · 6 days ago

      with an impressive 99.26% accuracy.

      I feel this would be a blatant lie if it included a bunch of false positives.

      https://mander.xyz/comment/17810389

      While keeping the FPR low, our model keeps the TPR high, showing that it can accurately find real cases while reducing false alarms.

      I’m not educated enough to know what recall means in this context, but there are tables with percentages for it on the page. (Would love an explanation; I’m not sure what to search for to get the right definition.)

      • iii@mander.xyz · +3 · edited · 6 days ago

        I’m not educated enough to know what recall means in this context

        This wiki describes the terminology for a binary classification. I always have to refer to that page too, as it’s very confusing :)
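
        For what it’s worth, here is a condensed version of that page’s definitions (my own summary; the example numbers reuse the 10-person scenario from elsewhere in this thread):

        ```python
        # TP/FP/FN/TN are the four cells of the binary confusion matrix.
        def metrics(tp: int, fp: int, fn: int, tn: int) -> dict:
            return {
                "recall (TPR)": tp / (tp + fn),  # share of real cancers caught
                "precision": tp / (tp + fp),     # share of "cancer" calls that were right
                "FPR": fp / (fp + tn),           # share of healthy people falsely flagged
                "accuracy": (tp + tn) / (tp + fp + fn + tn),
            }

        # 7 sick / 3 healthy, 6 caught, 2 healthy falsely flagged:
        print(metrics(tp=6, fp=2, fn=1, tn=1))
        ```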

  • Match!!@pawb.social · +8 · 6 days ago

    one of the particularly good uses for AI! in fact it’s so good and cheap that it’d actually be hard to turn a lot of profit on! which… hm…

  • nthavoc@lemmy.today · +6/−1 · edited · 5 days ago

    From the article: “Of course, it’s not a tool designed to replace medical professionals but to be used in collaboration with cancer specialists to accurately spot the disease and then monitor how successful treatment has been. What’s more, this kind of model is a much more rapid, accessible and affordable way to diagnose cancers.”

    This is the key difference and how AI should be used. It doesn’t replace the human but effectively aids them in their research. The whole “outperforming doctors” pitch needs to change to “reducing critical misses for doctors.” Otherwise it gets lumped in with the ChatGPT-like AIs, which are absolutely garbage for decision-making.

  • Wilco@lemm.ee · +4/−7 · 5 days ago

    If I state “every living creature that ever existed or will ever exist had, has, or will have cancer,” I just diagnosed all the cancer in existence … including cancer thousands of years from now. That is a 100% diagnosis rate.

    But what would be the error rate?
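
    A toy illustration of that error rate (the prevalence is made up for the example):

    ```python
    # "Everything has cancer" gives 100% recall by construction, but if only
    # 1% of creatures actually have cancer, 99% of the diagnoses are wrong.
    prevalence = 0.01            # hypothetical share of true cancer cases
    recall = 1.0                 # every real case is flagged
    error_rate = 1 - prevalence  # share of diagnoses that are false positives
    print(f"recall: {recall:.0%}, error rate: {error_rate:.0%}")  # 100%, 99%
    ```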