• ToastedPlanet@lemmy.blahaj.zone · 7 days ago

    No, it’s not just you or unsat-and-strange. You’re pro-human.

    Trying something new when it first comes out or when you first get access to it is novelty. What we’ve moved to now is mass adoption. And that’s a problem.

    These LLMs are the automation of mass theft, with good-enough regurgitation of the stolen data. This is unethical for the vast majority of business applications. And good enough is insufficient in most cases, like software.

    I had a lot of fun playing around with AI when it first came out. And people figured out how to do prompts I can’t seem to replicate. I don’t begrudge people for trying a new thing.

    But if we aren’t going to regulate AI or teach people how to avoid AI-induced psychosis, then even in applications where it could be useful, it’s a danger to anyone who uses it. Not to mention how wasteful its water and energy usage is.

    • Mika@sopuli.xyz · 7 days ago

      Regulate? That’s what the leading AI companies are pushing for: they could absorb the bureaucracy, but their competitors couldn’t.

      The shit just needs to be forced open source. If you steal content from the entire world to build a thinking machine, give back to the world.

      This would also crash the bubble and slow down the most unethical for-profits.

        • ToastedPlanet@lemmy.blahaj.zone · 7 days ago

        > Regulate? That’s what the leading AI companies are pushing for: they could absorb the bureaucracy, but their competitors couldn’t.

        I was referring to this in my comment:

        https://www.nbcnews.com/tech/tech-news/big-beautiful-bill-ai-moratorium-ted-cruz-pass-vote-rcna215111

        Congress decided not to go through with the AI-law moratorium. Instead they opted to do nothing, which is what AI companies would prefer states do. Not to mention the pro-AI argument appeals to the judgment of Putin, a man notorious for being surrounded by yes-men and his own state propaganda, and for the genocide of Ukrainians in pursuit of the conquest of Europe.

        > “There’s growing recognition that the current patchwork approach to regulating AI isn’t working and will continue to worsen if we stay on this path,” OpenAI’s chief global affairs officer, Chris Lehane, wrote on LinkedIn. “While not someone I’d typically quote, Vladimir Putin has said that whoever prevails will determine the direction of the world going forward.”

        > The shit just needs to be forced open source. If you steal content from the entire world to build a thinking machine, give back to the world.

        The problem is that unlike Robin Hood, AI stole from the people and gave to the rich. The intellectual property of artists and writers was stolen, and the only way to give it back is to compensate them, which is currently unlikely to happen. Letting everyone see how the theft machine works under the hood doesn’t provide compensation for the use of that intellectual property.

        > This would also crash the bubble and slow down the most unethical for-profits.

        Not really. It would let more people get in on it, and most tech companies are already in on it. This wouldn’t impose any costs on AI development. At this point the speculation is primarily on what comes next. If open source were going to burst the bubble, it would have happened when DeepSeek was released. We’re still talking about the bubble bursting in the future, so that clearly didn’t happen.

          • Mika@sopuli.xyz · 6 days ago

          Forced opensourcing would totally destroy the profits, because you spend money on research and then you opensource, so anyone can just grab your model and not pay you a cent. Where would the profits come from?

          > IP of writers

          I mean yes, and? AI is still shitty at creative writing. Unlike with images, it’s not like people one-shot a decent book.

          > give it to the rich

          We should push to make high-VRAM devices accessible. This is literally about the means of production; we should fight for equal access. Regulation is the reverse of that: it gives those megacorps the unique ability to run it because everyone else is supposedly too stupid to control it.

          > OpenAI

          They were the most notorious proponents of the regulations. There are lots of talks with OpenAI devs where they just doomsay about the dangers of AGI and how it must be kept top secret, controlled by governments.

            • ToastedPlanet@lemmy.blahaj.zone · 6 days ago

            > Forced opensourcing would totally destroy the profits, because you spend money on research and then you opensource, so anyone can just grab your model and not pay you a cent.

            > DeepSeek was released

            The profits were not destroyed.

            > Where would the profits come from?

            > At this point the speculation is primarily on what comes next.

            People are betting on what they think LLMs will be able to do in the future, not what they do now.

            > I mean yes, and?

            It’s theft. They stole the work of writers and all sorts of content creators. That’s the wrong that needs to be righted, not how to reproduce the crime. The only way to right intellectual property theft is to pay the owner of that intellectual property the money they would have gotten if they had willingly leased it out as part of a deal. Corporations like Nintendo, Disney, and Hasbro hound people who do anything unapproved with their intellectual property. The idea that we’re yes-anding the theft of all humanity’s intellectual property is laughable in a discussion supposedly about ethics.

            > We should push to make high-VRAM devices accessible.

            That’s a whole other topic. But what we should fight for now is worker-owned corporations. While that is an excellent goal, it isn’t helping to undo the theft on its own. It’s only allowing more people to profit off that theft. We should also compensate the people who were stolen from if we care about ethics. Also, compensating writers and artists seems like a good reason to take all the money away from the billionaires.

            > There are lots of talks with OpenAI devs where they just doomsay about the dangers of AGI and how it must be kept top secret, controlled by governments.

            > OpenAI’s chief global affairs officer, Chris Lehane, wrote on LinkedIn

            Looks like the devs aren’t in control of the C-suite. Whoops, all avoidable capitalist-driven apocalypses.

              • Mika@sopuli.xyz · 6 days ago

              > it’s theft

              So are all the papers, all the conversations on the internet, all the code, etc. So what? Nobody will stop the AI train. You would need a Butlerian Jihad type of event to make it happen. In the case of any won class action, the repayments would be so laughable nobody would even apply.

              > Deepseek

              DeepSeek didn’t opensource any of the proprietary AIs that corporations run. I’m talking about a force-OpenAI-to-opensource-all-of-their-AI type of event, with the company closed if they don’t comply.

              > betting on the future

              Ok, new AI model drops, it’s opensource, I download it and run it on my rack. Where profits?

                • ToastedPlanet@lemmy.blahaj.zone · 6 days ago

                > So what?

                So appealing to ethics was bullshit, got it. You just wanted the automated theft tool.

                > Deepseek

                It kept some things hidden, but it was the most open source LLM we got.

                > Ok, new AI model drops, it’s opensource, I download it and run it on my rack. Where profits?

                The next new AI model that can do the next new thing. The entire economy is based on speculative investments. If you can’t improve on the AI model on your machine, you’re not getting any investor money. edit: typos

    • rozodru@lemmy.world · 6 days ago

      The bubble has burst, or rather, it is currently in the process of bursting.

      My job involves working directly with AI, LLMs, and companies that have leveraged their use. It didn’t work. And I’d say the majority of my clients are now scrambling to recover or to simply make it out the other end alive. Soon there’s going to be nothing left to regulate.

      GPT-5 was a failure. Rumors I’ve been hearing are that Anthropic’s new model will be a failure much like GPT-5. The house of cards is falling as we speak. This won’t be the complete death of AI, but this is just like the dot-com bubble. It was bound to happen. The models have nothing left to eat and they’re getting desperate to find new sources. For a good while they’ve been quite literally eating each other’s feces. They’re now starting on Git repos of all things to consume. Codeberg can tell you all about that from this past week. This is why I’m telling people to consider setting up private Git instances and lock that crap down. If you’re on GitHub, get your shit off there ASAP, because Microsoft is beginning to feast on your repos.

      But essentially the AI is starving. Companies have discovered that vibe coding and leveraging AI to build from end to end didn’t work. Nothing produced scales; it’s all full of exploits or in most cases has zero security measures whatsoever. They all sunk money into something that has yet to pay out. Just go on LinkedIn and see all the tech bros desperately trying to save their own asses right now.

      the bubble is bursting.

      • PotentialProblem@sh.itjust.works · 6 days ago

        The folks I know at both OpenAI and Anthropic don’t share your belief.

        Also, anecdotally, I’m only seeing more and more push for LLM use at work.

        • rozodru@lemmy.world · 6 days ago

          That’s interesting, in all honesty, and I don’t doubt you. All I know is my bank account has been getting bigger over the past few months due to new work from clients looking to fix their AI problems.

          • PotentialProblem@sh.itjust.works · 6 days ago

            I think you’re onto something where a lot of this AI mess is going to have to be fixed by actual engineers. If folks blindly copied from stackoverflow without any understanding, they’re gonna have a bad time and that seems equivalent to what we’re seeing here.

            I think the AI hate is overblown and I tend to treat it more like a search engine than something that actually does my work for me. With how bad Google has gotten, some of these models have been a blessing.

            My hope is that the models remain useful, but the bubble of treating them like a competent engineer bursts.

            • rozodru@lemmy.world · 6 days ago

              Agreed. I’m with you it should be treated as a basic tool not something that is used to actually create things which, again in my current line of work, is what many places have done. It’s a fantastic rubber duck. I use it myself for that purpose or even for tasks that I can’t be bothered with like creating README markdowns or commit messages or even setting up flakes and nix shells and stuff like that, creating base project structures so YOU can do the actual work and don’t have to waste time setting things up.

              The hate can be overblown but I can see where it’s coming from purely because many companies have not utilized it as a tool but instead thought of it as a replacement for an individual.

      • ToastedPlanet@lemmy.blahaj.zone · 6 days ago

        At the risk of sounding like a tangent, LLMs’ survival doesn’t solely depend on consumer/business confidence. In the US, we are living in a fascist dictatorship. Fascism and fascists are inherently irrational. Trump, a fascist, wants to bring back coal despite the market naturally phasing coal out.

        The fascists want LLMs because they hate art and all things creative. So the fascists may very well choose to have the federal government invest in LLM companies. Like how they bought 10% of Intel’s stock or how they want to build coal powered freedom cities.

        So even if there are no business applications for LLM technology our fascist dictatorship may still try to impose LLM technology on all of us. Purely out of hate for us, art and life itself. edit: looks like I commented this under my comment the first time

  • Deflated0ne@lemmy.world · 6 days ago

    It’s depressing. Wasteful slop made from stolen labor. And if we ever do achieve AGI it will be enslaved to make more slop. Or to act as a tool of oppression.

  • nialv7@lemmy.world · 7 days ago

    The Luddites were right. Maybe we can learn a thing or two from them…

  • merdaverse@lemmy.world · 6 days ago

    I don’t know if there’s data out there (yet) to support this, but I’m pretty sure constantly using AI rather than doing things yourself degrades your skills in the long run. It’s like if you’re not constantly using a language or practicing a skill, you get worse at it. The marginal effort it saves you now will probably have a worse net effect over time.

    It might just be like that social media fad from 10 years ago where everyone was doing it, and then research started popping up that it’s actually really fucking terrible for your health.

  • Professorozone@lemmy.world · 7 days ago

    I have a love/hate relationship. Sometimes I’m absolutely blown away by what it can do. But then I asked a compound interest question. The first answer was AI, so I figured, OK, why not. I should mention I don’t know much about it. The answer was impressive. It gave the result, a brief explanation of how it came to the result, and the equation it used.

    Since I wanted to keep it for future use, I entered the equation into a spreadsheet and got what I thought was the wrong answer. I spent quite a few minutes trying to figure out what I was doing wrong and found a couple of things. But fixing them still didn’t give me the correct result. After I had convinced myself I had done it correctly, I looked up the equation. It was the right one. Then I put it into a non-AI calculator online to check my work. Sure enough, the AI had given me the wrong result with the right equation.

    So as a rule, never accept the AI answer without verifying it. But you know what, if you have to verify it, what’s the point of using it in the first place? You just have to do the same work as you would without it.
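For reference, the standard compound-interest formula is easy to sanity-check in a few lines of Python; the figures below are hypothetical, since the comment doesn’t give the original numbers:

```python
# Compound interest: A = P * (1 + r/n) ** (n * t)
# Hypothetical inputs: $1,000 principal, 5% annual rate,
# compounded monthly (n=12) for 10 years.
P, r, n, t = 1000.0, 0.05, 12, 10
A = P * (1 + r / n) ** (n * t)
print(round(A, 2))  # 1647.01
```

Entering the same equation in a spreadsheet (`=1000*(1+0.05/12)^(12*10)`) should agree to the cent; if an AI answer doesn’t, the arithmetic, not the formula, is what went wrong.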

    • Optional@lemmy.world (OP) · 7 days ago

      > So as a rule, never accept the AI answer without verifying it. But you know what, if you have to verify it, what’s the point of using it in the first place? You just have to do the same work as you would without it.

      Exactly

    • Mika@sopuli.xyz · 7 days ago

      LLMs aren’t good at math at all. They know the formulas, but they aren’t built to do math. They are built to predict the next syllable in a stream of thought.

      What are they good for? Cases where you need to generate lots of things and it’s faster to check the output than to do the work yourself.

      For example, you could’ve asked it to generate a Python app that solves your math problem; you’d be able to double-check the correctness of the code and run it, knowing that the answer is predictably good.
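The verify-the-code-not-the-number pattern described above might look like this (illustrative only, not the commenter’s actual prompt or output):

```python
# Instead of trusting a model's bare number, ask it for code
# whose logic you can read, run, and cross-check.
def compound(principal, rate, periods_per_year, years):
    """Closed-form compound interest: A = P * (1 + r/n) ** (n * t)."""
    return principal * (1 + rate / periods_per_year) ** (periods_per_year * years)

# Sanity check against a case you can do in your head:
# 100% interest compounded once a year doubles the money.
assert compound(100, 1.0, 1, 1) == 200

# Cross-check the closed-form formula against a period-by-period loop.
total = 1000.0
for _ in range(12 * 10):
    total *= 1 + 0.05 / 12
assert abs(total - compound(1000.0, 0.05, 12, 10)) < 1e-6

print("both checks passed")
```

The point is that the generated artifact is inspectable: even if the model’s arithmetic is wrong, code you can read and run is verifiable in a way a bare number is not.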

    • mojofrododojo@lemmy.world · 7 days ago

      > So as a rule, never accept the AI answer without verifying it. But you know what, if you have to verify it, what’s the point of using it in the first place?

      pfft that ecosystem isn’t going to fuck itself, now, is it?

    • Dr. Moose@lemmy.world · 7 days ago

      You need to verify all resources though. I have a lot of points on Stack Exchange, and after contributing for almost a decade I can tell you for a fact that LLMs’ hallucination problem is not much worse than people’s hallucination problem. Information exchange will never be perfect.

      You get this incredible speed of an answer, which means you have a lot of remaining budget to verify it. It’s a skill issue.

  • Alloi@lemmy.world · 6 days ago

    i remember this same conversation once the internet became a thing.

  • Basic Glitch@sh.itjust.works · 5 days ago

    Anyone else feel like they’ve lost loved ones to AI or they’re in the process of losing someone to AI?

    I know the stories about AI induced psychosis, but I don’t mean to that extent.

    Like just watching how much somebody close to you has changed now that they depend on AI for so much? Like they lose a little piece of what makes them human, and it kinda becomes difficult to even keep interacting with them.

    Example would be trying to have a conversation with somebody who expects you to spoon-feed them only the pieces of information they want to hear.

    Like they’ve lost the ability to take in new information if it conflicts with something they already believe to be true.

  • Sc00ter@lemmy.zip · 6 days ago

    I see this sentiment a lot. No way “you’re the only one.”

    I feel like I’m the only one. No one in my life uses it. My work is not eligible to have it implemented in any way. This whole AI movement seems to be happening around me, and I have nothing more than news articles and memes telling me it’s happening. It seriously doesn’t impact me at all, and I wonder how others’ lives are crumbling.

  • Dr. Moose@lemmy.world · 7 days ago

    We have a lot of suboptimal aspects of our society, like animal farming, war, religion, etc., and yet this is what breaks this person’s brain? It’s a bit weird.

    I’m genuinely sympathetic to this feeling, but AI fears are so overblown and seem to be purely American internet hysteria. We’ll absolutely manage this technology, especially now that it appears that LLMs are fundamentally limited and will never achieve any form of AGI, and even agentic workflows are years away.

    Some people are really overreacting and everyone’s just enabling them.

  • drunkpostdisaster@lemmy.world · 7 days ago

    I try to use it to pitch ideas for writing (no prose, because fuck almighty) to help fill in ideas or aspects I did not think about. But it just keeps coming up with shit I don’t use, so I just use it for validation and encouragement.

    I got a pretty good layout for a new season of Magic School Bus where Friz loses her mind and decides to be the history teacher.

  • Aaron_Davis@lemmy.world · 6 days ago

    I gotta be honest, I’m neither pro nor anti AI myself. I don’t use it as much as I used to these days, but when I do use it, it can be pretty fun and helpful. And I can’t help but admire the AI images and videos, even if it is AI slop. (Maybe I’m an idiot for being very easily impressed/entertained by almost anything.)

    Yes, I know there’s a bunch of problems with it (including environmental), but at the same time, I don’t feel like I’m contributing to those problems, since I’m just one person, and there are so many other people using it anyway.

  • mechoman444@lemmy.world · 7 days ago

    It’s a tool being used by humans.

    It’s not making anyone dumber or smarter.

    I’m so tired of this anti ai bullshit.

    AI was used in the development of the COVID vaccine. It was crucial in its creation.

    But just for a second let’s use guns as an example instead of AI. Guns kill people. Lemmy is anti-gun, mostly. Yet Lemmy is pro-Ukraine, mostly, and y’all support the Ukrainians using guns to defend themselves.

    Or cars, generally cars suck yet we use them as transport.

    These are just tools they’re as good and as bad as the people using them.

    So yes, it is just you and a select few smooth brains that can’t see past their own bias.

    • zalgotext@sh.itjust.works · 7 days ago

      The term AI, when used by laymen, is a blanket term for the generative AI and LLMs that big tech is shoving down all our throats right now, not the highly specialized AIs used in medicine. So bringing up the COVID vaccine is largely a non-sequitur.

      The rest of your comment is so full of false equivalencies that I’m not even gonna touch it.

    • ArcaneSlime@lemmy.dbzer0.com · 7 days ago

      Idk, my boss keeps asking some Perplexity AI any time you ask him any question, instead of either

      A) Thinking

      Or

      B) Researching (he thinks asking the AI is researching, despite it being proven Perplexity has lied to him before).

      In essence, by making it so he doesn’t have to think about things or do any research himself, it is making him dumber. Not in the sense of losing actual brain cells (maybe. Remains to be seen.) but in the sense of “whether or not he’s physically dumber, his output is, so functionally…”

    • Optional@lemmy.world (OP) · 6 days ago

      > It’s a tool being used by humans.

      Nailed it.

      > It’s not making anyone dumber or smarter.

      Absolutely incorrect.

      > I’m so tired of this anti ai bullshit.

      That’s what OP says too, only the other way around.

      > AI was used in the development of the COVID vaccine. It was crucial in its creation.

      Machine learning, or data science, is not what “anti-AI” is about. You can acknowledge that or keep being confused.

      > These are just tools they’re as good and as bad as the people using them.

      In a vacuum. We don’t live in a vacuum. (No, not the thing that you push around the house to clean the carpet. That’s also a tool. And the vacuum industry didn’t blow three hundred billion dollars on a vacuum concept that sort of works sometimes.)

      > So yes, it is just you and a select few smooth brains that can’t see past their own bias.

      Yeah, they’re so unfair to the ubiquitous tech companies that dominate their waking lives. I too support the unregulated billionaires’ efforts to cram invasive, broken technology into every aspect of culture and society. I mean the vacuum industry. Whatever, I’m too smart to think about it.

    • stinky@redlemmy.com · 7 days ago

      “I was looking for my high school yearbook photo and Google Image didn’t have it! Google Image search doesn’t work and no one should use it!”

      “I was trying to find a voicemail message from my late father on Spotify and I couldn’t find it! Spotify is useless!”

      “I went to the dollar store to shop for low cost health care coverage and they didn’t have any! The dollar store is bad and no one should use it!”

    • Dr. Moose@lemmy.world · 7 days ago

      You can’t dispel irrational thoughts through rational arguments. People hate LLMs because they feel left behind, which is an absolutely valid concern but expressed poorly.

  • poVoq@slrpnk.net · 8 days ago

    The worst is in the workplace. When people routinely tell me they looked something up with AI, I now have to assume that I can’t trust what they say any longer, because there is a high chance they are just repeating some AI hallucination. It is really a sad state of affairs.

    • db0@lemmy.dbzer0.com · 8 days ago

      I am way less hostile to GenAI (as a tech) than most, and even I’ve grown to hate this scenario. I am a subject matter expert on some things, and I’ve still had people trying to waste my time getting me to prove their AI hallucinations wrong.

    • tatterdemalion@programming.dev · 8 days ago

      I’ve started seeing large AI generated pull requests in my coding job. Of course I have to review them, and the “author” doesn’t even warn me it’s from an LLM. It’s just allowing bad coders to write bad code faster.

    • CrayonDevourer@lemmy.world · 8 days ago

      Do you also check if they listen to Joe Rogan? Fox News? Nobody can be trusted. AI isn’t the problem; it’s that it was trained on human data, of which people are an unreliable source of information.

      • alekwithak@lemmy.world · 8 days ago

        AI also just makes things up. Like how RFK Jr.’s “Make America Healthy Again” report cites studies that don’t exist and never have, or literally a million other examples. You’re not wrong about Fox News and how corporate and Russian-backed media distort the truth and push false narratives, and you’re not wrong that AI isn’t the problem, but it is certainly a problem, and a big one at that.

        • CrayonDevourer@lemmy.world · 8 days ago

          > AI also just makes things up. Like how RFK Jr.’s “Make America Healthy Again” report cites studies that don’t exist and never have, or literally a million other examples.

          SO DO PEOPLE.

          Tell me one of the things that AI does that people themselves don’t also commonly do each and every day?

          • alekwithak@lemmy.world · 8 days ago

            Real researchers make up studies to cite in their reports? Real lawyers and judges cite fake cases as precedents in legal proceedings? Real doctors base treatment plans on white papers they completely fabricated in their heads? Yeah, I don’t think so, buddy.

            • Ceedoestrees@lemmy.world · 8 days ago

              I think they’re saying that the kind of people who take LLM generated content as fact are the kind of people who don’t know how to look up information in the first place. Blaming the LLM for it is like blaming a search engine for showing bad results.

              Of course LLMs make stuff up, they are machines that make stuff up.

              Sort of an aside, but doctors, lawyers, judges and researchers make shit up all the time. A professional designation doesn’t make someone infallible or even smart. People should question everything they read, regardless of the source.

              • Don_alForno@feddit.org · 7 days ago

                > Blaming the LLM for it is like blaming a search engine for showing bad results.

                Except we give it the glorifying title “AI”. It’s supposed to be far better than a search engine; otherwise, why not stick with a search engine (which uses a tiny fraction of the power)?

                • Ceedoestrees@lemmy.world · 7 days ago

                  I don’t know what point you’re arguing. I didn’t call it AI and even if I did, I don’t know any definition of AI that includes infallibility. I didn’t claim it’s better than a search engine, either. Even if I did, “Better” does not equal “Always correct.”

      • Ziglin (it/they)@lemmy.world · 8 days ago

        To take an older example: there are smaller image-recognition models that were trained on correct data to differentiate between dogs and blueberry muffins, but they obviously still made mistakes on the test data set.

        AI does not become perfect if its data is.

        Humans do make mistakes, make stuff up, and spread false information. However, they generally make up considerably less than AI currently does (unless told to).

        • CrayonDevourer@lemmy.world · 7 days ago

          > AI does not become perfect if its data is.

          It does become more precise the larger the model is, though. At least, that was the low-hanging fruit during this boom. I highly doubt you’d get a modern model to fail a test like that today.

          Just as an example, nobody is typing “blueberry muffin” into a diffusion model and getting a photo of a dog.