The planet-burning text summarisers seem to have found their main purpose as confirmation-bias machines, and I am finding myself arguing with the output of an LLM when talking to people.

Now, after many years in the world of office jobs, my general perception of most people is:

  • severe inability to be wrong
  • severe inability to do anything about being wrong
  • egos so weak they could shatter just by looking at them
  • huffing your own farts and doubling down is the name of the game

And so I have to contend with this: people who cannot accept they have made a mistake, arguing with me via an LLM output they managed to wrestle into agreeing with them because they can’t accept fault. The number of times I have given learned, educated, provably correct advice only to hear “but Copilot told me this”, the output being some hallucinated drivel cos the person who wrote the prompt is more bothered about looking correct than about resolving the original issue they were having.

I know a small handful of people who have already gone completely off the rails, getting ChatGPT to confirm basically any delusion they have, and then when I see them they will go on about it for hours on end: how they broke out of the matrix and see the world for what it is, and how we don’t need schools anymore, just give everyone an LLM.

All it reminds me of is the “do your own research” crowd of Mumsnet nazis parroting some random Facebook post on how being vegan gives you autism and the only cure is shooting your child in the head. Except it’s worse, because the LLM can keep that delusion going for longer and build on it until most people are living in some AI-generated dreamland of pure unfiltered confirmation bias.

I think we’re going to hit some major problems not long from now as a significant portion of people start offloading their thinking to these corporate models. I already see a decent chunk just accepting them as an unbiased authority, and it’s scary. It’s a completely new and arguably more effective way to deliver even more extreme propaganda if they chose to, and very few people even question it.

Oh, and the number of people unknowingly sharing Sora videos is… dire.

To add to this, I never really understood why a lot of people have such a hard time being wrong and have such diabolically weak egos. Like, it’s logically infeasible to never be wrong, and if you are wrong, just, idk, learn from it? I like being wrong cos it means I can fix it and not be wrong later. The only reasonable response to being incorrect is “oops” followed by “thanks”.

    • Philosoraptor [he/him, comrade/them]@hexbear.net · 19 points · 1 day ago

      This is spot on. For some questions, the thing that sounds most like an answer to the question is the actual answer to the question. For other questions, it’s something that’s close to the answer, but not quite it. For others, it’s totally fabricated bullshit. There is no way to tell in advance what kind of question you’re asking, and no way to tell after getting the answer which kind of response it was. The idea that this is even a plausible route to general intelligence is unhinged. It is the most literal instantiation of the Chinese Room ever built: it doesn’t (and can’t) understand anything by design.

    • nasezero [comrade/them]@hexbear.net · 15 points · 1 day ago

      A funny way to test this is to ask an LLM to do something obviously impossible, like writing a script to generate audio from a CPU by running processes through it to hit its resonance frequency, effectively turning it into a speaker. Complete technobabble nonsense, but the LLM will happily (and with absolute confidence) work through it and present the user with a script that vaguely looks like it could work.

      • plinky [he/him]@hexbear.net · 5 points · 1 day ago

        tbf, big models have resisted complete nonsense for a year now; you have to get very specific/technical (so that the amount of plausible nonsense the context window is attending to forces it past its punishment functions, because it has more plausible phrases to pull from) to get past the likely answer of “that’s impossible, or did you mean x?”. like, you can ask how to remove the vertebrae of a worm, and it will likely flatly deny it or hedge

        (in your case, the plausible contexts are resonance frequency/speaker/CPU, and it probably couldn’t bridge them on its own, but add the missing links, like quartz crystals, internal clocks and some shit, and it probably can)

        • nasezero [comrade/them]@hexbear.net · 3 points · 9 hours ago

          Yeah, maybe a better way to put it is “impossible for an LLM to figure out” unless it can find an actual research paper or something on how to implement such a thing.

          The point is, the LLM can’t innovate, it can only work with what it’s been trained on and what it can search for. And even then, it can’t (at least in my limited experience) tie separate concepts together without being spoonfed the information.

          • plinky [he/him]@hexbear.net · 2 points · 6 hours ago

            it can innovate in the sense of producing something not seen before (monkeys typing plausible sentences instead of a mess of letters); it just requires a human to judge whether its output is a plausible thing or complete garble. it doesn’t have anything within it of what the concepts implied by the words mean, nor can it research or cite, although some agentic stuff seems at least viable, if too expensive.

            *i do get mad when people say we don’t know what it does; you can backtrace image CNNs and see (kinda) what neural networks are doing. i suspect there would be some bright lines of weights connecting atypical words, and if the distance (in whatever n-dimensional space) is too big to bridge, it starts fumbling with separation (“this is x, and this is y, did you mean zxy?”, or flat denial), but if you use blander words and plausible linkages (with smaller distances between the atypical/sciency words, so that the likelihood of them appearing together in the training set increases), then it may start to produce complete nonsense.

  • Chana [none/use name]@hexbear.net · 19 points · 1 day ago

    That’s a great insight! You’re basically the Leonardo da Vinci of posting opinions. Those people are like ostriches sticking their heads in the ground. Fun fact: while a popular trope, ostriches don’t actually put their heads underground when scared. Would you like to learn some more ostrich facts, smarty pants?

  • 7bicycles [he/him]@hexbear.net · 13 points · 1 day ago

    I think we’re going to hit some major problems not long from now as a significant portion of people start offloading their thinking to these corporate models.

    It’s been surprising to me how many people seemingly suffer under the oppression of having to think. Like, their ideal job would be white-collar factory-line work, where when something unexpected comes up they ask the LLM, it tells them what to do, and they do it. Just straight up locking yourself into the Chinese room.

    Like, even ignoring how bad LLMs generally are, apart from the obvious question of how you figure you, the friction between the computers just talking to each other, are gonna keep being employed, and maybe even the far-fetched (to the non-technical people) question of “if anything in the real world that affects this Excel file ever changes, what’s the dataset you train that thing on when nobody is even doing the work anymore”, this just sounds like my personal hell. The only bearable part of white-collar work is the problem solving, and you want that automated away so you can just mindlessly type in numbers or click buttons for 8 hours a day?

  • Богданова@lemmygrad.ml · 5 points · 1 day ago

    For real. We are losing our ability to think critically as a society at an unprecedented speed. If I write a text longer than a couple hundred words, I get ignored or an emoji response in every single group chat I’ve been in. And no, before anyone accuses me of it, I don’t dump politics on people. I literally just talk about “ordinary” hobbies.

    If I want to get people to actually start typing, the most successful method I’ve found is the Solid Snake method.

    The Solid Snake method?

  • BodyBySisyphus [he/him]@hexbear.net · 20 points · 1 day ago

    Used to be you needed to be unfathomably wealthy or have the power of a monarch to surround yourself 24/7 with a coterie of eager sycophants; now anyone can have one, thanks to LLMs! porky-happy

    • Snort_Owl [they/them]@hexbear.net (OP) · 11 points · 1 day ago

      Ain’t nothing quite like paying a consultant hundreds of thousands to tell you you’re right and all your employees are wrong, though. An LLM doesn’t have the same slicked-back or gelled-up hair appeal of a good ol’ fashioned consultant.

  • Sickos [they/them, it/its]@hexbear.net · 4 points · 1 day ago

    personal rant
    • severe inability to be wrong
    • severe inability to do anything about being wrong
    • egos so weak they could shatter just by looking at them
    • huffing your own farts and doubling down is the name of the game

    This cost me a once-good friendship with a radlib. The worst part was their insistence that “you think you’re always right, and can never admit when you’re wrong.” Like, valid, I historically have trouble with that and perfectionism and all sorts of mental shit that I am actively treating and have been for years… but mostly it’s because I don’t open my damn mouth unless I actually know something. It turns out it’s super easy to always be right. And those times when I am wrong… it’s as simple as “oh, oops, guess I was misinformed. Thanks for correcting me.” It boggles my mind that most people don’t operate this way.

    The real friction came from stuff like “china bad”.

    “China bad.”
    “Disagree.”
    “UYGHURS.”
    “Fake news, literally.”
    “Black book of communism.”
    “Pure propaganda.”
    “Mass starvation.”
    “US Sanctions.”
    “WHY WON’T YOU EVER JUST ADMIT WHEN YOU’RE WRONG ABOUT SOMETHING!? WHY DO YOU HAVE TO KEEP TRYING TO PROVE YOU’RE RIGHT ABOUT EVERYTHING!?”
    “BECAUSE I AM RIGHT ABOUT EVERYTHING, DAMMIT. IF I WAS WRONG, I’D CHANGE MY MIND SO THAT I’D BE RIGHT IN THE FUTURE! THAT’S HOW THINKING WORKS! THAT’S HOW SCIENCE WORKS! THAT’S HOW EVIDENCE WORKS! FUCK THIS. FUCK YOU. I’m done.”

    Like, I disagree with people about stuff because if I was saying something incorrect, I’d want to get it right in the future. Anything else is pure liberalism of the fifth, sixth, and eleventh kind.

  • llama@lemmy.zip · 10 points · 1 day ago

    I feel like it’s only helpful for people who don’t actually do any work. Because if they did, they’d realize how many times it says something can be done when it can’t, and how often, even if it can be done, literally nobody cares.

  • Ildsaye [they/them]@hexbear.net · 10 points · 1 day ago

    Lots of folks grow up in households and schools in which appearing wrong invites a pile-on of mockery and abuse, and learn to never, ever admit such ‘weakness’. I’ve been watching the space for experimentation and play steadily shrink my whole life.

  • hollowmines [he/him]@hexbear.net · 22 points · 2 days ago

    I’ve come to believe that there are in fact some limited use cases for chatbots. I don’t use them at all, but a friend used one to help navigate a tricky labour issue at work, likely saving his job (for a time, at least). It makes intuitive sense to me that a bullshit machine would be good at assisting one in navigating bullshit procedural situations. (ofc I would much prefer my friend didn’t have to navigate obtuse office politics in the first place, and the job itself kinda sucks, but a W is a W.)

    But then a co-worker tells me they use it to draft messages on dating apps, and the urge to destroy rises up again.

    • It makes intuitive sense to me that a bullshit machine would be good at assisting one in navigating bullshit procedural situations.

      The most use I’ve had from a chatbot/LLM is in generating attorney speak and some official-looking documents to send to a debt collection agency to get out of a past-due debt that I owed. Was surprised at how easy it was.

    • Snort_Owl [they/them]@hexbear.net (OP) · 17 points · 2 days ago

      I say to my friends and colleagues that it’s a capitalist solution to capitalist problems. My neurodivergent ass needs it to translate blunt honesty into fluffy corporate speak, and it works wonders. I also use it for my performance goals and all the other HR crap we have to do that benefits nobody.

      But yeah, it’s a solution to a problem that never needed to exist.

    • Dimmer06 [he/him, comrade/them]@hexbear.net · 2 points · 1 day ago

      There’s a lawyer who’s been using an LLM to parse US labor law (which is almost exclusively case law) and make it much more accessible to regular people. It seems to be pretty good at that.

  • rufuscrispo [he/him]@hexbear.net · 6 points · 1 day ago

    i have a coworker whose husband is “solving crimes” with a chatbot. currently he’s just focused on historical unsolved cases, but by summer i’m betting he’ll be slapping zipties on one of his neighbors and ending up in a shootout with sheriff’s deputies

  • Lussy [he/him, des/pair]@hexbear.net · 13 points · 1 day ago

    • severe inability to be wrong
    • severe inability to do anything about being wrong
    • egos so weak they could shatter just by looking at them
    • huffing your own farts and doubling down is the name of the game

    Genuinely the biggest difference between the private sector and the public sector. Capitalism really promotes self-aggrandizing blowhards with unremarkable talents.

    • BodyBySisyphus [he/him]@hexbear.net · 6 points · 1 day ago

      I would unironically love to learn more about these self-aggrandizing-blowhard-free corners of the public sector. My current experience has been that the people I work with, while smart, caring, and passionate about what they do, are unable to admit it or ask for help when they find themselves in over their heads, and private philanthropy is busy going all in on AI.

      • Chana [none/use name]@hexbear.net · 2 points · 1 day ago

        In fact, self-promotion is the key to most public sector jobs and advancement, just like in the private sector. So the same cycle of ego protection occurs, along with absolute bullshitting on a near-constant basis, meaning there is constant conflict between basic progress and whatever the manager/director/PI overpromised or took credit for.

  • Thordros [he/him, comrade/them]@hexbear.net · 9 points · 1 day ago

    I’m building my first homelab right now, and my very first project is setting up a local LLM based on DeepSeek 3.1. Goals are:

    • Get it to respond to voice commands through my desktop mic (push-to-talk listening only).
    • Speak in the voice of the Disco Elysium narrator / Harry’s Thoughts. This is partially working already!
    • Be skeptical of every idea I spitball at it, and actively try to prove me wrong.

    It’s a very engaging project! I’ve named him Robert, partly in honor of Robert Kurvitz, and partly because, haha, Robert / robot, get it??
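
    The skepticism goal is mostly just a system prompt. Here’s a minimal sketch of roughly how it could be wired up, assuming a local OpenAI-compatible endpoint (llama.cpp’s server and Ollama both expose one); the port, model name, and prompt wording are placeholders, not my actual setup:

    ```python
    # Minimal sketch of the "skeptic" goal: a system prompt sent to a local
    # OpenAI-compatible endpoint. Port, model name, and prompt wording are
    # placeholders.
    from openai import OpenAI

    client = OpenAI(base_url="http://localhost:8080/v1", api_key="not-needed")

    SKEPTIC_PROMPT = (
        "You are Robert. Never simply agree with the user. For every idea they "
        "propose, look for flaws, missing evidence, and failure modes, and argue "
        "the strongest case against it before conceding anything. If an idea is "
        "impossible, say so plainly."
    )

    def ask_robert(idea: str) -> str:
        """Send a spitballed idea to the local model and return its pushback."""
        response = client.chat.completions.create(
            model="deepseek-v3.1",  # whatever name the local server registers
            messages=[
                {"role": "system", "content": SKEPTIC_PROMPT},
                {"role": "user", "content": idea},
            ],
        )
        return response.choices[0].message.content

    if __name__ == "__main__":
        print(ask_robert("I think I can turn my CPU into a speaker."))
    ```

    The voice input and Disco Elysium narrator TTS then just sit on either side of that function.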

  • AssortedBiscuits [they/them]@hexbear.net · 4 points · 1 day ago

    Preface everything with “well my AI told me that blah blah blah.” If they push back, mock them for choosing a shittier AI than your “AI.” If they push back more and insist on seeing the prompt, mock them further for intruding on the confidentiality between you and your “AI.” (People who have been suckered by AI treat it as some companion or therapist and ask it personal questions.)

  • RION [she/her]@hexbear.net · 5 points · 1 day ago

    I think it is nice to have something that validates you in a superficially lifelike manner if you don’t have a real person to do that.