• Stepos Venzny@beehaw.org · 2 days ago

    The whole article is fixated on Grok being far right and never seems to care that an LLM is citing another LLM instead of an actual source.

    • Janx@piefed.social · 2 days ago

      I disagree. The focus of the article is misinformation. It’s literally the first sentence:

      The latest model of ChatGPT has begun to cite Elon Musk’s Grokipedia as a source on a wide range of queries, including on Iranian conglomerates and Holocaust deniers, raising concerns about misinformation on the platform.

      Yes, Grokipedia is right-wing. It was literally created to alter reality and spread lies that agree with their worldview! But the real problem is that it can’t be edited with sourced, fact-based information; instead, AI generates everything. I think the article did explore the fact that it’s one LLM depending on another…

      • Stepos Venzny@beehaw.org · 2 days ago

        If there was ever a difference between being far-right and being disinformation, there isn’t one anymore.

      • tangentism@beehaw.org · 12 hours ago

        Weeks rather than months or years!

        Most social media users read like bots, so we’re almost at critical mass for the dead internet.

  • Lembot_0006@programming.dev · 2 days ago

    LLMs are good for searching technical documentation. And that’s it. They are barely usable outside this niche.

    Stop using them for “humanitarian” purposes. An LLM can’t be a psychologist, a lawyer, or anything of the sort.

    • tangentism@beehaw.org · 12 hours ago

      They’re good for pattern matching on smear tests, etc., and have a detection rate much better than humans’ (over 90% vs. ~75%).