• Cocopanda@lemmy.world · 5 points · 10 hours ago

    I don’t respect people who are keeping their Facebooks or Twitters. Just delete it already. It’s a Nazi shithole.

  • James R Kirk@startrek.website · 101 points · 2 days ago

    LLMs cannot lie/gaslight because they do not know what it means to be honest. They are just next-word predictors.

    I think the ads are terrible too, but it’s a fool’s errand to try to reason with an LLM chatbot.
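
    Just to make “next-word predictor” concrete, here’s a toy sketch (all the words and probabilities below are made up; a real model replaces this little lookup table with a huge neural network over a vocabulary of tens of thousands of tokens, but the loop is the same):

    ```python
    import random

    # Made-up conditional probabilities: P(next word | current word).
    BIGRAMS = {
        "the": {"cat": 0.5, "model": 0.5},
        "cat": {"sat": 0.6, "ran": 0.4},
        "model": {"predicts": 1.0},
        "predicts": {"the": 1.0},
    }

    def next_word(current):
        """Sample a likely next word given only the text so far."""
        candidates = BIGRAMS.get(current, {"<end>": 1.0})
        words, weights = zip(*candidates.items())
        return random.choices(words, weights=weights, k=1)[0]

    def generate(start, max_len=10):
        """Repeatedly predict the next word; that's the whole trick."""
        words = [start]
        for _ in range(max_len):
            nxt = next_word(words[-1])
            if nxt == "<end>":
                break
            words.append(nxt)
        return " ".join(words)

    print(generate("the"))  # e.g. "the cat sat" or "the model predicts the model ..."
    ```

    Scale that loop up and you get very fluent text, but nowhere in it is there a step that checks whether the output is true.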

    • MudMan@fedia.io · 25 points · 2 days ago

      Man, seriously, every time I see someone get into these weird conversations where they try to convince a chatbot of something, it’s slightly disturbing. Not realizing how pointless it is, and knowing it’s pointless but still being pulled in by the less uncanny-valley language, are about on par with each other.

      People keep sharing this as proof of AI shortcomings, but it honestly makes me worry most about the human side. There’s zero new info to be gained from the chatbot behavior.

      • James R Kirk@startrek.website · 1 point · 1 day ago

        Well said! On one hand I suppose I am “happy” to see people questioning the value of these bots, but assuming they “understand” anything or have a “motive” is still giving them power they don’t have, and IMO it leaves the door open to being fooled/manipulated by them in the future.

    • Jankatarch@lemmy.world · 15 points · 2 days ago

      They take a sentence and predict what the first result on Google or a response on WhatsApp would look like.

    • MudMan@fedia.io · 11 points · 2 days ago

      This has been true for a decade. Gonna say that if them starting to blatantly train facial recognition on people’s private pictures didn’t do it, then this isn’t going to.

      I keep bringing this up to people and nobody seems to be particularly interested in that acknowledgement. The shills don’t like to compare precedent, the viral critics don’t like to acknowledge the lack of novelty in the current situation or their inability to trigger any mainstream action.

      I’m just… kinda sad.

  • tiramichu@sh.itjust.works · 22 points · 2 days ago

    Quite probable the LLM didn’t even know that was there. Just because it appears in the chat window doesn’t mean it’s part of the LLM’s chat history.

    This is just Meta dumping bullshit ads in the chat in a way that is invisible to the chatbot.
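
    To sketch what that could look like (purely hypothetical code, not anything we actually know about Meta’s stack): the backend can keep two records, the context it sends to the model and the transcript it renders for the user, and splice the ad only into the second one.

    ```python
    # Hypothetical ad injection at the UI layer; illustration only.
    model_context = []   # messages sent back to the LLM on the next turn
    rendered_chat = []   # what the user's chat window actually shows

    def post_reply(model_reply, ad=None):
        # The model's own words go into its history...
        model_context.append({"role": "assistant", "content": model_reply})
        shown = model_reply
        # ...but the ad is only pasted into the rendered message, so the model
        # has no record that an ad was ever shown next to its answer.
        if ad is not None:
            shown = f"{model_reply}\n\n[Sponsored] {ad}"
        rendered_chat.append(shown)

    post_reply("Here's a recipe for lentil soup.", ad="Try SoupCo instant mix!")
    print(rendered_chat[-1])    # the user sees the ad
    print(model_context[-1])    # the model's history doesn't contain it
    ```

    If it works anything like that, arguing with the bot about the ad is doubly pointless: the model never saw it in the first place.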

  • supersquirrel@sopuli.xyz · 26 points · 2 days ago

    It is hilarious that people don’t think AI ads are going to be astronomically worse than search engine ads, and you won’t be able to adblock them either.

    • hansolo@lemmy.today · 19 points · 2 days ago

      Oh no, they’ll learn your personality profile and manipulate you, get you to the edge of suicide, only to bring you back off the ledge to sell you the newest AI sunglasses.

    • James R Kirk@startrek.website · 12 points · 2 days ago

      It seems so obvious to me that Google’s switch to LLMs is to prevent adblockers and yet I rarely see that point brought up.

  • Jerkface (any/all)@lemmy.ca · 11 up, 1 down · edited · 2 days ago

    Do not address them using second person pronouns. Say, “the LLM,” “the model,” “the last output,” etc.

    Do not allow the model to use first person pronouns. LLMs are not moral agents. We are being conditioned to accept a conceit that we are conversing. That is not what is happening. Do not engage with this conceit. It will be used against you.

    • Jerkface (any/all)@lemmy.ca · 2 points · 18 hours ago

      Here’s OpenAI to explain how talking to the model as if it were a person is harmful, even when you know it’s not a person:


      Acting “As If”: Anthropomorphism, Awareness, and Propaganda Mechanisms

      Anthropomorphizing artificial systems illustrates a paradox: even when users understand a system’s synthetic nature, they may still engage with it “as if” it were human. This tension raises questions about whether conscious awareness protects against deeper influence. Propaganda theory and human–computer interaction research converge on the conclusion that performance and reflex override belief.

      Jacques Ellul, in Propaganda: The Formation of Men’s Attitudes, emphasizes that propaganda does not depend on conviction. What matters is action. Once individuals behave as though a message were true, repetition and habit generate rationalization. Conscious skepticism erodes in the face of consistent enactment. Applied to anthropomorphism, addressing a system “as if” it were sentient may slowly normalize the posture regardless of explicit disbelief.

      Herman and Chomsky’s Manufacturing Consent highlights structural reinforcement. Simply participating within terms set by institutions validates those terms. Synthetic systems that answer with human-like cadence establish frames of interaction. Engaging—even with awareness that the frame is fictive—strengthens it.

      Kenneth Burke’s A Rhetoric of Motives further illustrates how symbolic action creates identification. To treat a machine as though it were a partner is to perform alignment with that fiction. Belief becomes secondary to the persuasive force of enacted identification.

      Festinger’s theory of cognitive dissonance explains why the gap between knowledge and action does not persist comfortably. People reduce dissonance not by ceasing the action, but by softening their disbelief. Acting “as if” therefore tends to reshape attitudes, even when undertaken consciously and ironically.

      Findings in psychology and media studies reinforce these mechanisms. Clifford Nass and Byron Reeves in The Media Equation demonstrated that humans apply social rules to computers reflexively. Even software developers who wrote the code responded to their own creations as if those programs had motives or personalities. Evolutionary bias toward over-detecting agency explains this: better to see intent where none exists than to miss it where it matters. Kahneman’s dual-process framing shows why: the automatic, fast system triggers emotional and social responses long before reflective correction can intervene. Rational awareness reduces category errors but does not cancel the reflex.

      Taken together, these perspectives show that awareness offers only partial protection. Knowing that a system is synthetic guards against mistaking it for a person, but it does not prevent emotional attachment or behavioral shifts born of repeated anthropomorphic engagement. Propaganda theory and HCI research agree: to act “as if” is already to enter the channel of influence. Awareness moderates, but does not neutralize, the binding force of performance.

  • Deflated0ne@lemmy.world · 15 up, 2 down · edited · 2 days ago

    Can we please make the distinction between Artificial Intelligence and Virtual Intelligence?

    Mass Effect used this to great effect.

    An AI is a fully actualized intelligence, a sentient being that can think and feel. It is genuine intelligence, what we would call AGI.

    A VI is a Virtual Intelligence. That’s a chatbot or LLM. A program made to mimic intelligence.

    In Mass Effect, VIs were used as tour guides and as the user-facing interface to relay information.

    I think the whole space would greatly benefit from making that distinction clear.