Want to wade into the snowy surf of the abyss? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid.
Welcome to the Stubsack, your first port of call for learning fresh Awful you'll near-instantly regret.
Any awful.systems sub may be subsneered in this subthread, techtakes or no.
If your sneer seems higher quality than you thought, feel free to cut'n'paste it into its own post; there's no quota for posting and the bar really isn't that high.
The post Xitter web has spawned so many "esoteric" right wing freaks, but there's no appropriate sneer-space for them. I'm talking redscare-ish, reality challenged "culture critics" who write about everything but understand nothing. I'm talking about reply-guys who make the same 6 tweets about the same 3 subjects. They're inescapable at this point, yet I don't see them mocked (as much as they should be)
Like, there was one dude a while back who insisted that women couldn't be surgeons because they didn't believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can't escape them, I would love to sneer at them.
(Credit and/or blame to David Gerard for starting this. Also, hope you had a wonderful Valentine's Day!)
AI Jobs Apocalypse is Here | UnHerd h/t naked capitalism
feels a bit critihype, idk
So, what happens to American politics when the script is flipped, and we enter a new era of white-collar precarity? We can look back to the recent past and recall that, after the 2008 recession, it was young men who got especially angry. Downwardly mobile urban millennials drifted toward radical Left-wing politics, including the Occupy Wall Street movement and both Sanders campaigns, myself included. In the current decade, the Gen-Z men shut out by elite institutions often join their grandfathers and turn toward MAGA, or worse, into Groypers. But an AI-driven white-collar apocalypse has no equivalent of the American Rescue Plan around the corner, and it will move faster through institutions because the people experiencing it (journalists, lawyers, policy staffers) are the ones who produce political legitimacy itself. When that class loses faith in the system's stability, the political climate may quickly become volatile.
As I get older I am more and more disturbed by the selective memory of the GFC: no mention of the Tea Party or the fallout from the austerity measures they pushed in the middle of the country, no mention of how the bailout saved banks, not homes. The Tea Party won, not Occupy, and the current government is doing things beyond the Kochs' wildest dreams.
If and when there is a crash, these dumbass CEOs deserve /nothing/. Let them lose their vacation houses. And, maybe grow some balls and send the fraudsters to jail where they belong.
sigh
slop "fact checking" is coming to LW:
https://www.lesswrong.com/posts/hhbibJGt2aQqKJLb7/shortform-1?commentId=fE5cg6pmWrChW8Rtu
wonder what model/prompt they will use. Prolly Grok
Chatbots are a cognitive hazard, part infinity: AI Delusions Are Leading to Domestic Abuse, Harassment, and Stalking
I regret to inform you that the promptfans have a new fucked up way to thotpost:

transcript
screenshot from twitter. the search bar has the following search terms in it: "BC" "before claude". a tweet body by @Jason_Dean reads: "I was born in 23 BC (Before"
Not beating the "this is just Christianity with the labels removed" allegations.
We're doing revolutionary calendars, huh?
the only revolutions these people are capable of is spinning their office chairs
…when it's one on a swivel instead of some fixed-position ergo thing they acquired after reading a thotleader blogpost
Semi-OT but a blog post where I'm just kinda gawking at the technology that saved my daughter's life and the absurdity of comparing it to what now first comes to mind when we talk of "tech".
Beautiful. As a dad, thank you for sharing!
I'm pretty sure most of this has already been posted to this thread (I know the "AI published a hit piece on me" thing was) but more Moltbook/Openclaw/whatever-it's-called nonsense
We are in the singularity this is so hard to explain to people.
Describes a normal thing, a thing also talked about in science fiction for several decades now.
Amazing the singularity term has now been downgraded to "ai stuff that is hard to explain to laymen".
more proof that crypto scammers have metastasized to AI scammers
can't believe scammers are losing their jobs to AI
the full paper is here: https://x.com/alexwg/status/2022292731649777723 immediately two references to Nick Bostrom and Scott Alexander
Reads like bad blaseball fanfic
Posting for archival and indexing purposes: u/GorillasAreForEating found an Urbit post titled "Quis cancellat ipsos cancellores?" which complains that Aella takes it on herself to exclude people and movements from the broader LessWrong/Effective Altruist community. The poster says that Aella was the anonymous person who pushed CFAR to finally do something about Brent Dill, because she was roommates with "Persephone." He or she does not quite say that any of the accusations were untrue, just that "an anonymous, unverified report" says that some details were changed by an editor, and that her Medium post was of "dramatically lower fidelity, but higher memetic virulence" than Brent's buddies investigating him behind closed doors (Dill posted about domming a 16-year-old who he met when she was 15). The poster accuses Aella of using substances and BDSM games to blur the line of consent.
The post names Joscha Bach as someone Aella tried to exclude. We recently talked about Bach's attempt to get Jeffrey Epstein to fund an event where our friends would speak.
Often, people in messed-up situations point at a very similar situation and say "at least we are not like that." I hope that all of these people find friends who can give them perspective that none of these communities are healthy or just. Whether you are into bull sessions or polyamory, there are healthy communities to explore in any medium-sized city!
The post names Joscha Bach as someone Aella tried to exclude.
You do not under any circumstances have to hand it to Aella
It's prudent to be skeptical of anonymous Internet posts, but it's also prudent to read a Leverage staffer on how her boss "had three long-term consensual relationships with women employed by Leverage Research or affiliated organizations", close the tab, and make a note to never have anything to do with anyone from that organization in the future.
context: I wanted to know if the open source projects currently being spammed with PRs would be safe from people running slop models on their computer if they weren't able to use claude or whatever. Answer: yes, these things are still terrible
but while I was searching I found this comment and the fact that people hated it is so funny to me. It's literally the person who posted the thread. less thinking and words, more hype links please.
conversation
https://www.reddit.com/r/LocalLLaMA/comments/1qvjonm/first_qwen3codernext_reap_is_out/o3jn5db/
32k context? is that usable for coding?
(OP's response, sitting at a steady -7 points)
LLMs are useless anyway so, okay-ish, depends on your task obviously
If LLMs were actually capable of solving actual hard tasks, you'd want as much context as possible
A good way to think about it is that tokens compress text roughly 1:4. If you have a 4MB codebase, it would need 1M tokens theoretically.
That's one way to start, then we get into the more debatable stuff…
Obviously text repeats a lot and doesn't always encode new information each token. In fact, it's worse than that, as adding tokens can _reduce_ information contained in text, think inserting random stuff into a string representing dna. So to estimate how much ctx you need, think how much compressed information is in your codebase. That includes stuff like decisions (which LLMs are incapable of making), domain knowledge, or even stuff like why does double click have 33ms debounce and not 3ms or 100ms in your codebase which nobody ever wrote down. So take your codebase, compress it as a zip at normal compression level, and then think how large the output problem space is, shrink it down quadratically, and you have a good estimate of how much ctx you need for LLMs to solve the hardest problems in your codebase at any given point during token generation
*emphasis added by me
So take your codebase, compress it as a zip at normal compression level, and then think how large the output problem space is, shrink it down quadratically, and you have a good estimate of how much ctx you need for LLMs to solve the hardest problems in your codebase at any given point during token generation
wat
I can see what they're going for but that seems… wildly guess-y?
Also code helper tools don't even work like that, there's an absurd amount of MCP and RAG based hand holding for the chatbot to even get a grip on what it's supposed to be doing at any given time.
Prompting an LLM with your entire code base isn't really a thing, even though the hype makes it feel like it would be.
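For what it's worth, the one uncontroversial step in the quoted estimate (bytes to tokens at roughly 4:1) is plain arithmetic. A minimal sketch, assuming that rule of thumb from the quote (the real ratio varies a lot by tokenizer and by what's in the files); `estimated_tokens` is just a name for illustration:

```python
def estimated_tokens(size_bytes: int, chars_per_token: float = 4.0) -> int:
    """Rough token count for a text blob, using the ~4 chars-per-token
    rule of thumb (actual ratios depend on the tokenizer and content)."""
    return round(size_bytes / chars_per_token)

codebase_bytes = 4 * 1024 * 1024  # the 4 MB codebase from the comment above
print(estimated_tokens(codebase_bytes))  # prints 1048576, i.e. about 1M tokens
```

Which is exactly why a 32k window can't see more than a small slice of a codebase like that at once, and why tooling leans on retrieval instead of stuffing everything into the prompt.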
Baldur Bjarnason gives his thoughts on the software job market, predicting a collapse regardless of how AI shakes out:
If you model the impact of working LLM coding tools (big increase in productivity, little downside) where the bottlenecks are largely outside of coding, increases in coding automation mostly just reduce the need for labour. I.e. 10x increase means you need 10x fewer coders, collapsing the job market
If you model the impact of working LLM coding tools with no bottlenecks, then the increase in productivity massively increases the supply of undifferentiated software and the prices you can charge for any software drops through the floor, collapsing the job market
If the models increase output but are flawed, as in they produce too many defects or have major quality issues, Akerlofās market for lemons kicks in, bad products drive out good, value of software in the market heads south, collapsing the job market
If the model impact is largely fictitious, meaning this is all a scam and the perceived benefit is just a clusterfuck of cognitive hazards, then the financial bubble pop will be devastating, tech as an industry will largely be destroyed, and trust in software will be zero, collapsing the job market
I can only think of a few major offsetting forces:
- If the EU invests in replacing US software, bolstering the EU job market.
- China might have substantial unfulfilled domestic demand for software, propping up their job market
- Companies might find that declining software quality harms their bottom-line, leading to a Y2K-style investment in fixing their software stacks
But those don't seem likely to do more than partially offset the decline. Kind of hoping I'm missing something
In March 2025, the large language model (LLM) GPT-4.5, developed by OpenAI in San Francisco, California, was judged by humans in a Turing test to be human 73% of the time, more often than actual humans were. Moreover, readers even preferred literary texts generated by LLMs over those written by human experts.
do you know how hard it is to write something that aged poorly months before it was written? it's in the public consciousness that LLMs write like absolute shit in ways that are very easy to pick out once you've been forced to read a bunch of LLM-extruded text. inb4 some asshole with AI psychosis pulls out "technically ChatGPT's more human than you are, look at the statistics" regarding the 73% figure I guess. but you know when statistics don't count!
A March 2025 survey by the Association for the Advancement of Artificial Intelligence in Washington DC found that 76% of leading researchers thought that scaling up current AI approaches would be "unlikely" or "very unlikely" to yield AGI
[…] What explains this disconnect? We suggest that the problem is part conceptual, because definitions of AGI are ambiguous and inconsistent; part emotional, because AGI raises fear of displacement and disruption; and part practical, as the term is entangled with commercial interests that can distort assessments.
no you see it's the leading researchers that are wrong. why are you being so emotional over AGI. we surveyed Some Assholes and they were pretty sure GPT was a human and you were a bot so… so there!
https://old.reddit.com/r/indieheads/comments/1r6x1ix/fresh_failure_the_air_is_on_fire_from_location/
I looked it up, and this one is credited to Glen Wexler, who is an actual artist with a pretty distinct style and yes, he's been incorporating AI into his process lately, and I guess he did use it here (those windows on those buildings are sus as hell, and the overall sharpness of the image just screams AI).
So it's not outright slop, but still pretty disappointing and incongruous coming from this band. Their last two records were examining our society's alienation through technology, at times to the point of "phone bad!" level nagging, but using the most literally destructive technology of them all is fine, as long as it helps keep the costs down, I guess?
And it just doesn't look good, but come to think of it, most of their albums have bad cover art, it's almost like they do it on purpose. Love the music, though.
It's too bad if true, I can't unsee it now. for reference: https://failureband.bandcamp.com/album/location-lost
new episode of odium symposium. we look at rousseau's program for using universal education to turn women into drones
need a word for the sort of tech "innovation" that consists of inventing and monetizing new types of externalities which regulators aren't willing to address. like how bird scooters aren't a scam, but they profit off of littering sidewalk space so that ppl with disabilities can't get around
EDIT: a similar, perhaps the same concept is innovation which functions by capturing or monopolizing resources that aren't as yet understood to be resources. in the bird example, we don't think of sidewalk space as a capturable resource, and yet
In economic terms it's less rent seeking and more rent creation. Like, taking advantage of public sidewalk space may not be a rent in the strictest sense given that the revenue model is still people paying for the service, but the ability to provide that service is absolutely predicated on taking over and monopolizing this public resource to the maximal degree possible.
By historical allegory, harkening back to the original destruction of the Commons, we're looking at Enclosure 2: Frisco Drift.
Let's also not lose sight of the fact that those sidewalks aren't a natural formation, and that it's the city government who ultimately takes on the burden of their construction and maintenance. This kind of neo-enclosure of public resources is then another kind of invisible subsidy.
parasitech?
maybe āparasitic innovationā?
Something like āinnovations in parasitic enclosureā may perhaps be a phrase that can give a handle on it, yeah
I guess that doesn't emphasise the "innovation" aspect much