They’re arguing with a fucking language model

  • blame [they/them]@hexbear.net · 18 points · 3 days ago

    LLMs don’t really have a logical core to them, at least not in the sense that humans do. The causality doesn’t matter as much as the structure of the response, if I’m describing this right: a response that sounds right and a response that is right are the same thing to it, the LLM doesn’t differentiate. So I think what the Grok team must have done is add some system prompts, or train the model in such a way that it’s strongly instructed to weigh its responses in favor of things like news articles and Wikipedia over whatever the user is telling it or asking it.
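    Nobody outside xAI knows the actual setup, but the system-prompt version is easy to picture. Here’s a rough sketch using the OpenAI Python client as a generic stand-in for a chat-style API; the model name and prompt wording are made up for illustration:

    ```python
    # Speculative sketch only -- no one outside xAI knows Grok's real prompts.
    # The OpenAI Python client stands in for a generic chat-style API.
    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    SYSTEM_PROMPT = (
        "Treat retrieved news articles and encyclopedia text as authoritative. "
        "If the user's claim conflicts with those sources, say so and cite the "
        "source instead of agreeing with the user."
    )

    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name, not actually Grok
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": "Didn't the moon landing get faked?"},
        ],
    )
    print(response.choices[0].message.content)
    ```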

    • Bloobish [comrade/them]@hexbear.net · 2 points · 2 days ago

      Ah, so it’s more or less biased toward whatever media it’s allowed to consume, and so is probably at best centrist in its perspective, given they likely blacklist certain sources. So what is stopping Grok from producing the hallucinated or fabricated responses that were a big issue with other LLMs?

      • blame [they/them]@hexbear.net · 2 points · 2 days ago

        I’m just guessing, but they’re likely training or instructing it in such a way that it defers to sources it finds by searching the internet. The first thing it probably does when you ask a question is search the web for recent news articles and other sources, so now the context is full of “facts” that it will stick to. Other LLMs haven’t really done that by default (although I think they’re doing it more now), so they would just give answers purely from their weights, which are basically the entire internet compressed down to 150 GB or whatever.
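        That search-then-answer loop is basically retrieval-augmented generation, and the shape of it looks something like this. Just my guess at the mechanics; search_news() is a made-up stand-in and the model name is a placeholder, not Grok:

        ```python
        # Rough guess at the "search first, answer from the retrieved context" flow.
        # search_news() is a hypothetical stand-in; a real system would call an
        # actual search or news API here.
        from openai import OpenAI

        client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

        def search_news(query: str) -> list[str]:
            # Hypothetical retrieval step: return snippets from recent articles.
            return [
                "Reuters (placeholder snippet): ...",
                "AP News (placeholder snippet): ...",
            ]

        def answer(question: str) -> str:
            # Stuff the retrieved snippets into the context so the model sticks to them.
            context = "\n".join(f"- {s}" for s in search_news(question))
            resp = client.chat.completions.create(
                model="gpt-4o-mini",  # placeholder model name, not actually Grok
                messages=[
                    {
                        "role": "system",
                        "content": "Answer using ONLY the sources below. If they "
                                   "don't cover the question, say you don't know."
                                   "\n\nSources:\n" + context,
                    },
                    {"role": "user", "content": question},
                ],
            )
            return resp.choices[0].message.content

        print(answer("What did the president announce yesterday?"))
        ```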