Want to wade into the snowy surf of the abyss? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid.

Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.

The post-Xitter web has spawned so many “esoteric” right-wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality-challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be).

Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.

(Credit and/or blame to David Gerard for starting this. If you’re wondering why this went up late, I was doing other shit)

    • fullsquare@awful.systems · 15 hours ago

      there is some reason to think this way. also keep in mind that a segment of that anti-americanism was funded by sales of iranian oil. not all of it, of course, but the houthis wouldn’t be a thing without it, or large parts of hezbollah, for example. of course, what people want and how it shakes down after the bombs drop is a different thing entirely. i guess we’ll see, eventually (i assume that the decision to strike was already made)

      • gerikson@awful.systems · 10 hours ago

        The best thing an unpopular regime can ask for is for the enemy they have been bigging up as literally The Great Satan to start dropping bombs and missiles on the populace that hates it.

        “If we bomb people and show their government can’t protect them, they will turn against the government and we will win” has been tried by the Germans on Londoners, the Allies on Germany and Japan, and the US on Serbia, and it didn’t work.

      • aninjury2all@awful.systems · 15 hours ago

        That’s cute. How about you find me a source that isn’t a spooky blob think tank?

        Or better yet, enlist, and we can rid the world of another Sam Harris fanboy.

        • fullsquare@awful.systems · 13 hours ago

          i don’t give a shit about sam harris. if iranians were broadly fine with theocracy, there wouldn’t be 30k+ dead protesters last month, or major protests every year for a decade. like every other country on earth, you can expect that iran will secularize, except that apostasy or conversion is a capital offense, as is any significant dissent, so any survey unaffected by self-censorship would be hard to conduct

          • YourNetworkIsHaunted@awful.systems · 11 hours ago

            While there is absolutely a large segment of the Iranian population that isn’t satisfied with the theocratic dictatorship, the same could also have been said of Iraqis who didn’t like the baathists or Afghans who hate the Taliban. Once you start dropping bombs on these people - to say nothing of the violence that necessarily follows a boots-on-the-ground occupation - you’re going to start driving them into the waiting arms of factions that oppose you. Especially because the current administration has shown a less-than-comforting attitude towards civilian casualties, war crimes, and genocide.

            Let’s also not lose sight of the role that US and British intervention played in creating the circumstances for the Ayatollahs to come to power in the first place. The Shah wasn’t exactly any kinder to the Iranian people and was a foreign puppet to boot.

            Harris’s take only works if, like him, you assume that the fundamental problem with Iran is Islam, rather than actually bothering to look at the history of the country and how it became what it is today. Because in that case once you get the ayatollah out of the way and introduce the light of Science! to the people they’ll immediately become rational civil libertarians and believe exactly the same things he does. The Irreligious Right is exactly as reductive and stupid as the worst evangelicals, but can better use the language of STEM to hide it.

            • Svante@mastodon.xyz · 10 hours ago

              @YourNetworkIsHaunted @fullsquare Yes, absolutely, civilization (or whatever word you like better here) will not happen automatically or magically.

              And I’m not finding an answer: How do you /properly/ remove an oppressive theocracy, in such a way that the country has good starting conditions to prosper?

              Two things seem clear to me: the theocrats will not go by themselves, and the country will not prosper under them.

              • Charlie Stross@wandering.shop · 10 hours ago

                @Ardubal @YourNetworkIsHaunted @fullsquare This hasn’t happened in Iran, but oppressive theocracies *have* decayed from inside elsewhere—notably Ireland since 1980 (the difference now is as night and day, yet there was no revolution and no shooting, and the country has prospered). Arguably Spain’s clerico-fascist system went the same way in the 1970s. And so on.

                Iran is different, though, in that it faces a violent, powerful external superpower, which indirectly props up the priesthood.

                • Svante@mastodon.xyz · 10 hours ago

                  @cstross @YourNetworkIsHaunted @fullsquare OK, but I don’t see the automatism in that direction either. And just letting them simmer in their own little cosmos doesn’t seem very sustainable when they organize and support e.g. Hamas, Hezbollah, and the Houthis.

                  Starting a war now is not the answer, I’m pretty sure, but the question remains.

  • nfultz@awful.systems · 16 hours ago

    from Rusty https://www.todayintabs.com/p/a-i-isn-t-people

    Imagine you have two machines. One you can open up and examine all of its workings, and if you give it every picture of a cat on the whole internet, it can reliably distinguish cats from non-cats. The other is a black box and it can also reliably distinguish cats from non-cats if you give it half a dozen pictures of cats, some apple sauce, and a hug. These machines sort of do the same thing, but even without knowing how the second one works I am extremely confident in saying it doesn’t work the same way as the first one.

  • samvines@awful.systems · 17 hours ago

    IBM stocks take a tumble after Anthropic releases a COBOL skill - the rational market strikes again.

    I wrote up my take here, but TL;DR - a few markdown files telling Claude it’s an expert at COBOL development aren’t going to unpick decades of risk-averse behaviour from bank and government CIOs. Similar to the SaaSpocalypse, this is pure nonsense. Investors don’t tend to let reality dissuade them, though.
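
    For context, a “skill” of this sort is essentially a markdown file of instructions with a short YAML header. A hypothetical sketch of what such a file might look like (the name, description, and all contents below are invented for illustration, not taken from Anthropic’s actual release):

    ```markdown
    ---
    name: cobol-expert
    description: Guidance for reading, explaining, and refactoring legacy COBOL
    ---

    # COBOL Development Skill

    You are an expert COBOL developer. When given COBOL source:

    1. Identify the DIVISIONs and summarize what the program does.
    2. Flag constructs that complicate changes (ALTER, GO TO, REDEFINES).
    3. Preserve fixed-point decimal semantics (COMP-3 fields) exactly in
       any suggested refactor.
    ```

    Nothing in a file like this touches the decades of process and risk aversion that actually gate changes to production COBOL.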

    • ________@awful.systems · 9 hours ago

      COBOL is old and scary, so a chatbot spitting out COBOL that someone without grey hair can’t fully comprehend is enough for them to deem it fully automated and the dinosaur defeated. In reality you are right, it won’t move the needle.

  • o7___o7@awful.systems · 17 hours ago

    I feel like the story of Cassandra would be much more gratifying if she’d had access to powered armor.

    • lurker@awful.systems · 1 day ago

      this is like the fourth time an AI agent has completely deleted something important (I remember an article about an AI deleting all of a scientist’s research). How many more times does it have to happen before people stop using AI to look after something important???

      • YourNetworkIsHaunted@awful.systems · 11 hours ago

        A computer that both does what you don’t tell it to do and doesn’t do what you tell it to do. I didn’t think we could do it but - I tell you what - it’s been done.

      • Soyweiser@awful.systems · 20 hours ago

        Before they could ask Grok how to stop a process, it was already too late.

        Not that it mattered, as Grok’s advice to become the Reich Chancellor actually didn’t fix this problem.

      • samvines@awful.systems · 24 hours ago

        You assume these people installing experimental non-deterministic software on their computer would know how to purge a process (or, you know, not to hook up vibe-coded slop to their inbox), but here we are. To get a director job in a big company, the main thing you need is an MBA, a willingness to do whatever the CEO asks of you, and either a sociopathy or psychopathy diagnosis (sorry for the repetition, I know I already said MBA). Technical skills: “nice to have”.

    • JFranek@awful.systems · 1 day ago

      The article tries to fact check Asha Sharma’s (the new CEO) claim that

      fertility rates are declining, the average birthrate in the ’90s when we were growing up was, like, 3, and now it’s 2.3, and in 2050 it’s estimated to be below replacement

      Unfortunately, they forgot that countries other than the US exist, and it didn’t occur to them that she could be talking about global fertility rates. In which case the claim is pretty much correct.

      Embarrassing.

      • Architeuthis@awful.systems · 17 hours ago

        I mean, sure, but it’s still the CEO of Xbox, on her second day on the job, throwing her hat into the legendarily sus declining-birthrates discourse in service of AI solutionism. It’s not nothing.

    • CinnasVerses@awful.systems · 1 day ago

      Usually AI boosters are claiming that soon most humans will be economically useless, not that it would be terrible if there were fewer white people. One reason people avoid having children is that they feel economically insecure and doubt there will be respected places in society for their offspring.

      Dwarkesh Patel is the only other Indian American I have seen who is friends with our friends.

  • nfultz@awful.systems · 2 days ago

    From fellow traveler stats consultant John Mount:

    https://johnmount.github.io/mzlabs/JMWriting/WeAreCookedLLMs.html

    Somehow he manages to touch on so many different subplots: a shotgun sneer instead of a snipe.

    if “tech-bro” plus a LLM is a “100x engineer”, then “bro” isn’t needed for much longer as the LLM alone must be a “99x engineer.” However, I don’t think “bro plus” is often really a 100x engineer, and the LLM alone isn’t a 99x engineer. However, “bro plus” may outlast their peers who make the mistake of trying to do the actual work in place of talking LLMs up.

    The above may or may not be the case. But if it is, then it is the LLM-bros (which include non-technologists, con artists, financiers, men and women) that are destroying everything - not the LLMs.

    The problem with this iteration is the full court press of finance and technology. The major players are using financing to dump results at a price way below production costs. This isn’t charity, it is to demoralize and kill competition.

    claiming “after we take over the world we will consider adding Universal Basic Income (UBI)”. The LLM bros already have a lot of the money, and they are not even rehearsing diverting it into basic income now. Why does one believe they would do that when they also have all of the power?

    You don’t have to hand it to Altman, but he did fund the largest UBI experiment through Open Research with his ill-gotten gains. OTOH, one interpretation of that data was that UBI “decreases the labor supply”, which was then used directly as an argument against it.

    Any worry about scope or power of LLMs is fed back as an alignment threat so dire that only the current LLM leaders should be allowed to continue work (inviting regulatory capture). Any claim the LLMs don’t work is fed back as “you are prompting it wrong”.

    Orbital deployment makes all of radiation tolerance, connectivity, power, maintenance, and heat dissipation much harder and much more expensive. We are still at a time where putting an oven or air-frier in space is considered noteworthy (China 2025, NASA 2019 ref).

    air friers IN SPACE ha

    I am more worried about the LLM-bros and their auto-catalytic money doomsday machine than about the LLMs themselves.

    100% - ACMDM is a nice turn of phrase as well.

  • nfultz@awful.systems · 2 days ago

    https://www.adexchanger.com/ai/one-chatbots-journey-to-introducing-ads-that-dont-suck/

    Often, the ad loads before the chatbot’s query response, said Baird, and Koah’s goal is to “deliver such a relevant result to the user that they just click on the ad before the result loads.”

    LLMs’ bad performance and inefficiency are a feature to /someone/. And chatbots are themselves not immune to enshittification.

  • o7___o7@awful.systems · 2 days ago

    Looks like they’re gonna ruin BattleBots with AI somehow. Bright Data appear to be web scraping bastards as a service.

    I’ll never forgive them for what they did to the 80 lb slab of rotating steel.

      • o7___o7@awful.systems · 17 hours ago

        Thanks, this is a nifty read; I appreciate the look into the world of the bastards who are ruining the web with residential proxy/botnet operations. I had kind of (mistakenly) assumed that the scrapers mostly relied on IoT trash and hacked Fire sticks. We really can’t have nice things anymore, huh?

        The company is embroiled in legal action in Israel. After it filed suit against a former employee, he countersued, alleging that Luminati is widely used for click fraud. As part of the suit, it was revealed that the spyware company NSO Group was a Luminati client.

        Well that escalated quickly

        PS: i really really wish my special interests would quit touching

  • Soyweiser@awful.systems · 20 hours ago

    Article on the Ick generated by AI shit, from the perspective of a woman: “They Built Stepford AI and Called It ‘Agentic’”. It talks about how women adopt it less, and gives a reason why this might be so.

    On a personal note (I’m a man, for the record): while I normally get the uncanny valley effect a lot less than most people, I do notice it a lot with AI-generated people. Really odd experience, that.

    (The author does seem to be a pro-AI person, however.)

    E: thanks everybody for being so critical about it; I should have read the whole article (and not ignored the Substack red flag) before posting it here so uncritically.

    • ebu@awful.systems · 2 days ago

      some parts intriguing, but mostly disappointing. several chunks of the text felt AI-generated. no fewer than 34 “it’s not X but Y”s, by my count, and the out-of-nowhere typologies / tables definitely smell of slop. and obviously, the images definitely were. (can’t even be bothered to fix the typos in photoshop? why make a fake poster for The Stepford Wives??)

      some notes:

      • i’m not entirely convinced the revulsion response in women can be explained entirely as a reflective recognition of the subjected female self. maybe it’s also because AI art is entirely bland and/or fuck ugly

      • some reproductive labors, in the Marxist-feminist sense, are getting subsumed by AI, sure, but they’re largely the ones that already got subsumed by the computer. we had pagers with scheduling and appointment reminders in the 80’s. about the only thing an LLM can do that our previous tech couldn’t is the customer service / ā€œemotional laborā€ part, albeit poorly. and the other labors are non-optional – my laundry actually does have to go in the dryer, and no matter how many plastic pictures of clean clothes i generate, they can’t actually go in my closet.

      • speaking of, the article appears to use a mangled paraphrase of that Joanna Maciejewska tweet (“I want AI to do my laundry and dishes so that I can do art and writing, not for AI to do my art and writing so that I can do my laundry and dishes”), and then attributes it to “AI enthusiasts” (ew).

      • the article notes that reproductive labor is coded feminine and that the assistants that (attempt to) do this labor are designed female, with feminine voices and affects, despite being, y’know, robots. and not women. the next step to me would be to note that this isn’t just reflecting the subjectification of the female and the designation of women to a particular labor class, but actually aiding to construct and reproduce the subject of “female” itself too. maybe throw some Butler in there. but we just breeze right past this. no third-wave? i don’t see any feminist arguments past the 80’s in here

      • the typology of wives is total bullshit. “The Open-Source Wife” fuuuuucccckk offfff. but. BUT. i do think there is something correct in there about xAI/Grok/Ani basically being the modern adaptation of Vivian James

      • there’s an argument that obviously used to be about AI art, and got transmogrified into a nonsense concept, bordering on colorless green ideas.

      Women’s labor is being extracted, automated, and sold back without credit.

      • the nonsense below it about “alignment” clearly intends to imply that the machines are only faking being our friends / submissive wives(!!1!).

      • but this is okay because women are uniquely suited to interface with AI! this is because (all) women (innately) communicate with the goal of building relationships (female) instead of the utilitarian (manly) execution of transactions (male). there’s an odd essentialist undercurrent that’s not really being challenged here, despite the fact that that would render “female robots” impossible

      • “outsource-maxxing” fuuuuuucuk youuuuuuu

      • the conclusion of the article is basically “women are uniquely capable of interacting with (female) AI because they’ve BEEN the female AI”, with a call-to-action for women to basically… well. resume that role, except now using the AI as your girlbestfriend.

    • corbin@awful.systems · 2 days ago

      This is ahistorical slop. Previously, on Lobsters, I explained the biggest tell here: the overuse and misuse of em-dashes. There’s also some bad sentence structure and possibly-confabulated citations to unnamed papers. The images can’t be trusted.

      The worst problem here is that the article believes that history starts about halfway through the Industrial Revolution. Computing was not gendered prior to the Harvard Computers in the 1880s. Prior to the Industrial Revolution, women spent most of their time on textiles and were compensated for their time and labor; there is a series from Bret Devereaux on the details in ancient and pre-industrial Europe, and a decent summary on /r/AskHistorians of the industrial transition from about 1760 to 1860. The article suggests that the Victorian way of treating women as nannies and housewives was historically universal. Claude identifies as non-binary (or, rather, Claude’s authors told it to identify as such) but uses male pronouns when pressed into a binary theory. The Creation of Patriarchy is a real book but only describes the origins of masculine Abrahamic beliefs rather than some sort of unifying principle, and is easily disproven in its universality by looking at contemporary ancient societies like Sparta or the Iroquois Confederation; there’s also a Devereaux series on Sparta.

      The author’s gotta be one of the clearest demonstrations of critihype seen yet. She is selling an anthology on Amazon called How Not To Use AI, which presumably she forgot to consult prior to prompting this essay.

    • jaschop@awful.systems · 2 days ago

      I started to raise my eyebrows when the Second Brain got lumped into the AI wife pile.

      Bro, I just write shit down. I am in fact taking responsibility for my schedule and handling my emotions without relying on external support. Am I turning to (checks notes…) the notebook industry for a technological replacement wife?

      I mean, some valid points, and some of it might explain the gendered AI adoption gap, but there’s too much generalization.