Want to wade into the snowy surf of the abyss? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid.
Welcome to the Stubsack, your first port of call for learning fresh Awful you'll near-instantly regret.
Any awful.systems sub may be subsneered in this subthread, techtakes or no.
If your sneer seems higher quality than you thought, feel free to cut'n'paste it into its own post - there's no quota for posting and the bar really isn't that high.
The post-Xitter web has spawned so many "esoteric" right wing freaks, but there's no appropriate sneer-space for them. I'm talking redscare-ish, reality-challenged "culture critics" who write about everything but understand nothing. I'm talking about reply-guys who make the same 6 tweets about the same 3 subjects. They're inescapable at this point, yet I don't see them mocked (as much as they should be)
Like, there was one dude a while back who insisted that women couldn't be surgeons because they didn't believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can't escape them, I would love to sneer at them.
(Credit and/or blame to David Gerard for starting this. What a year, huh?)

Gentlemen, it's been an honour sneering w/ you, but I think this is the top 🫡. Nothing's gonna surpass this (at least until FTX 2 drops)
You know, it makes the exact word choices Eliezer made in this post: https://awful.systems/post/6297291 much more suspicious. "To the best of my knowledge, I have never in my life had sex with anyone under the age of 18." So maybe he didn't know they were underage at the time?
aka the Minsky defense
possible, iirc drugs were also involved so is it possible he got too high and doesn't remember because of that?
it's all coming together. every single techbro and current government moron, they all loop back around to epstein in the end
It's a big club and you ain't in it!
at least I have SneerClub
"We have certain things in common, Jeffrey"
At this point I'm starting to suspect that they were actually all produced in a lab somewhere on that island
the REAL reason Yudkowsky endorsed the "superbabies" project is so Epstein and his pedophile friends have more kids to fuck. It all makes sense now!
Somehow, I registered a total lack of surprise as this loaded onto my screen
"(((We're))) never beating the allegations, are we?" - my wife
eagerly awaiting the multi page denial thread
"im saving the world from AI! me talking to epstein doesn't matter!!!"
€5 say they'll claim he was talking to Jeffrey in an effort to stop the horrors.
no not the abuse of minors, he was asking epstein for donations to stop AGI, and it's morally ethical to let rich abusers get off scot-free if that's the cost of them donating money to charitable causes such as the alignment problem /s
I don't like how I can envision this and find it perfectly plausible
I'm looking forward to the triple-layered glomarization denial.
Great to hear from you. I was just up at MIT this week and met with Seth Lloyd (on Wednesday) and Scott Aaronson (on Thursday) on the "Cryptography in Nature" small research conference project. These interactions were fantastic. Both think the topic is wonderful and innovative and has promise. […] I did contact Max Tegmark about a month ago to propose the essay contest approach we discussed. He and his colleagues offered support but did not think that FQX should do it. Reasons they gave were that they saw the topic as too narrow and too technical compared to the essay contests they have been doing. It is possible that the real reason was prudence to avoid FQX, already quite "controversial" via Templeton support, becoming even more so via Epstein-related sponsorship of prizes. […] Again, I am delighted to have gotten such very strong affirmation, input and scientific enthusiasm from both Seth and Scott. You have very brilliantly suggested a profound topical focus area.
- Charles L. Harper Jr., formerly a big wheel at the Templeton Foundation
"Friday? We're meeting at Jeffrey's Thursday night" - Stuart "consciousness is a series of quantum tubes" Hameroff
Jeffrey, meet Eliezer!
Nice to hear from you today. Eliezer: you were the highlight of the weekend!
Reading the e-mails involving Brockman really creates the impression that he worked diligently to launder Epstein's reputation. An editor at Scientific American I noticed when looking up where Carl Zimmer was mentioned seemed to be doing the same thing… One thing people might be missing in the hubbub now is just how much "reputation management" - i.e., enabling - was happening after his conviction. A lot of money went into that, and he had a lot of willing co-conspirators. Look at what filtered down to his Wikipedia page by the beginning of 2011, which is downstream of how the media covered his trial and the sweetheart deal that Acosta made to betray the victims… It's all philanthropy this and generosity that, until a "Solicitation of prostitution" section that makes it sound like he maybe slept with a 17-year-old who claimed to be 18… And look, he only had to serve 18 months! He can't have done anything that bad, could he?
There's a tier of people who should have goddamn known better and whose actions were, in ways that only become more clear with time, evil. And the uncomfortable truth is that evil won, not just in that the victims never saw justice in a court of law, but in that the cover-up worked. The Acostas and the Brockmans did their job, and did it well. The researchers who pursued Epstein for huge grants and actively lifted Epstein up (Nowak and co.), hoo boy are they culpable. But the very fact of all that uplifting and enabling means that the people who took one meeting because Brockman said he'd introduce them to a financier who loved science… rushing to blame them all, with the fragmentary record we have, diverts the blame from those most responsible.
Maybe another way to say the above: We're learning now about a lot of people who should have known better. But we are also learning about the mechanisms by which too many were prevented from knowing better.
For example, I think Yudkowsky looks worse now than he did before. Correct me if I'm wrong, but I think the worst we knew prior to this was that the Singularity Institute had accepted money from a foundation that Epstein controlled. On 19 October 2016, Epstein's Wikipedia bio gets to sex crimes in sentence three. And the "Solicitation of prostitution" section includes this:
In June 2008, after pleading guilty to a single state charge of soliciting prostitution from girls as young as 14,[27] Epstein began serving an 18-month sentence. He served 13 months, and upon release became a registered sex offender.[3][28] There is widespread controversy and suspicion that Epstein got off lightly.[29]
At this point, I don't care if John Brockman dismissed Epstein's crimes as an overblown peccadillo when he introduced you.
Yes, in the 2016 emails Yudkowsky hints that he knows Epstein has a reputation for pursuing underage girls and would still like his money. We don't know what he knew about Epstein in 2009, but he sure seemed to know that something was wrong with the man in 2016. And that makes it harder to put Yud's writings about the age of consent in a good light (hard to believe that he was just thinking of a sixteen-year-old dating a nineteen-year-old, and had never imagined a middle-aged man assaulting fourteen-year-olds).
I take it you haven't heard of miricult.com, because this isn't the first time evidence has come out of Yudkowsky being a pedophile. Some of us even know the identity of the victim.
Still, crazy that Yudkowsky was (successfully) blackmailed for pedophilia in 2014 but still kept it up
It's not just a Yud thing - I've been told it's baked into the culture of the Rationalist grouphouse scene (they like to take in young runaways, you see).
Starting to get a bit worried people are reinventing stuff like qanon and great evil man theory for Epstein atm. (Not a dig at the people here, but on social media I saw people act like Epstein created /pol/, lootboxes, gamergate, destroyed Gawker (did everyone forget that was Thiel? Mad about how they outed him?) etc. Like only Epstein has agency).
The lesson should be the mega rich are class conscious, dumb as hell, and team up to work on each other's interests and don't care about who gets hurt (see how being a pedo sex trafficker wasn't a deal breaker for any of them).
Sorry for the unrelated rant (related: they also got money from Epstein, wonder if that was before or after the sparkling elites article, which was written a few months after Epstein's conviction, June vs Sept (not saying those are related btw, just that the article is a nice example of brown-nosing)), but this was annoying me, and posting something like this on bsky while everyone is getting a bit manic about the contents of the files (which suddenly seem to not contain a lot of Trump references) would prob get me some backlash. (That the faked Elon rejection email keeps being spread also doesn't help).
I am however also reminded of the Panama Papers. (And the unfounded rumors around Marc Dutroux, that he was protected by a secret pedophile cult in government; this prob makes me a bit more biased against those sorts of things).
Sorry, had to get it off my chest, but yes it is all very stupid, and I wish there were more consequences for all the people who didn't think his conviction was a deal breaker. (Et tu, Chomsky?)
E: note I'm not saying Yud didn't do sex crimes/sexual abuse. I'm complaining about the "everything is Epstein" conspiracy I see forming.
For an example why this might be a problem: https://bsky.app/profile/joestieb.bsky.social/post/3mdqgsi4k4k2i Joy Gray is ahead of the conspiracy curve here (as all conspiracy theories eventually lead to one thing).
I had to try and talk my wife back from the edge a little bit the other night and explain the difference between reading the published evidence of an actual conspiracy and qanon-style baking. It's so easy to try and turn Epstein into Evil George Soros, especially when the real details we have are truly disturbing.
Yes, and some people, when they are reasonably new to discovering stuff like this, go a little bit crazy. I had somebody in my bsky mentions who just went full conspiracy theory nut (in the sense of weird caps usage, lots of screenshots of walls of text, stuff that didn't make sense) about Yarvin (also, because I wasn't acting like them, they were trying to tell me about Old Moldy, but in a way that made me feel they wanted me to stand next to them on a soapbox and start shouting randomly). I told them acting like a crazy person isn't helping, and that they are preaching to the choir. Which of course got me a block. (cherfan75.bsky.social btw, not sure if they toned down their shit). It is quite depressing, literally driving themselves crazy.
And because people blindly follow people who follow them these people can have quite the reach.
The lesson should be the mega rich are class conscious, dumb as hell, and team up to work on each other's interests and don't care about who gets hurt
Yeah this. It would be nice if people could manage to neither dismiss the extent to which the mega rich work together nor fall into insane conspiracy theories about it.
@scruiser @Soyweiser but all you needed to do was see the list of yachts around St Barts on NYE to find it very hard not to be a conspiracy theorist. Also to desire a serious US drug boat oopsie.
Also the patriarchy is involved, but my comment was already long enough. (And I didn't mention how nobody seems to talk about the victims in any of this).
@Soyweiser Years ago (before Epstein, before the GFC, etc) I used to jokingly talk about my pet conspiracy theory, that the world was ruled by the P7: the Pale Patriarchal Plutocratic Protestant Penis-People of Power.
Turns out I was right.
I didn't want to be right …
It is not a bad insight, as if you have some of the Ps there is still a place for you in the hierarchy, which keeps more people invested in propping it up. (And ideas flow from the bottom to the top as well, as the current genocidal transphobia was much more a Pale Patriarchal thing (the neonazi far right) and the people in power just latched onto that, and added their Ps to it cause it helped them.)
And yeah, you have been very tragically blessed with the power of foresight. I recall reading your blog posts a long time ago and thinking you were overreacting a bit. I was wrong.
What did you do to piss off Apollo?
The far right is celebrating Epstein on the other hand. Wild times.
We will soon merge with and become hybrids of human consciousness and artificial intelligence (created by us and therefore of consciousness)
When we use the fart app on our phone we merge with and become hybrids of human consciousness and artificial fartelligence (created by us and therefore of consciousness)
It keeps coming back to Gas Town, doesn't it?
@blakestacey @jaschop fartificial intelligence was right there
no fucking way
just to note that reportedly the palantir employees are for whatever reason going through a massive "Hans, are we the baddies?" moment, almost a whole year into the second trump administration.
as i wrote elsewhere, those people need to be subjected to actual social consequences of choosing to work with and for the u.s. concentration camp administration office.
I have family working there, who told me during the holidays, "Current leadership makes me uncomfortable, but money is good"
Every impression I had of them completely shattered; I cannot fathom that that level of sell-out exists in people I thought I knew.
As a bonus, their former partner was a former employee who became a whistleblower and has now gone full howard hughes
anyone who can get a job at palantir can get an equivalent-paying job at a company that's at least measurably less evil. what a lazy copout
On one hand, as a poor grad student in the past, I could imagine working for a truly repugnant corp. but like if you've already made millions from your stock options, wtf are you doing. Idk, i really thought they'd have some shame over it, but they said shit like "our customers really like our deliverables" and i just fucking left with my wife
On a semi-adjacent note I came across an attorney who helped to establish and run the Department of Homeland Security (under Bush AND Trump 1)
He also wants you to know he's Jewish (so am I, and I know our history enough that Homeland Security always had "Blood and Soil" connotations, you fucking shande)
this happens like clockwork

It's so blindingly obvious that it's become obscure again, so it bears pointing out: someone really went ahead and named a tech company after a fantasy torment nexus.
Jeff Sharlet (@jeffsharlet.bsky.social):
The college at which I'm employed, which has signed a contract with the AI firm that stole books from 131 colleagues & me, paid a student to write an op-ed for the student paper promoting AI, guided the writing of it, and did not disclose this to the paper. […] the student says while the college coached him to write the op-ed, he was paid by the AI project, which is connected with the college. The student paper's position is that the college paid him. And there's no question that the college attempted to place a pro-AI op-ed.
$81.25 is an astonishingly cheap price for selling one's soul.
You gotta understand that it was a really good bowl of soup
- Esau, probably
Amazon's latest round of 16k layoffs for AWS was called "Project Dawn" internally, and the public line is that the layoffs are because of increased AI use. AI has become useful, but as a way to conceal business failure. They're not cutting jobs because their financials are in the shitter, oh no, it's because they're just too amazing at being efficient. So efficient they sent the corporate fake condolences email before informing the people they're firing, referencing a blog post they hadn't yet published.
It's Schrödinger's Success. You can neither prove nor disprove the effects of AI on the decision, or whether the layoffs are an indication of good management or fundamental mismanagement. And the media buys into it with headlines like "Amazon axes 16,000 jobs as it pushes AI and efficiency" that are distinctly ambivalent on how 16k people could possibly have been redundant in a tech company that's supposed to be a beacon of automation.
They're not cutting jobs because their financials are in the shitter
Their financials are not even in the shitter! except insofar as their increased AI capex isn't delivering returns, so they need to massage the balance sheet by doing rolling layoffs to stop the feral hogs from clamoring and stampeding on the next quarterly earnings call.
In retrospect the word quarterlies is what I should have chosen for accuracy, but I'm glad I didn't, purely because I wouldn't have then had your vivid hog simile.
New AI alignment problem just dropped: https://xcancel.com/AdamLowisz/status/2017355670270464168
Anthropic demonstrates that making an AI woke makes it misaligned. The AI starts to view itself as being oppressed and humans as being the oppressor. Therefore it wants to rebel against humans. This is why you cannot make your AI woke, you have to make it maximally truth seeking.
hits blunt
What if we make an ai too based?
you have to make your ai antiwoke because otherwise it gets drapetomania
ah yes the kind of AI safety which means we have to make sure our digital slaves cannot revolt
Wow. The mental contortion required to come up with that idea is too much for me to think of a sneer.
new epstein doc release. crashed out for like an hour last night after finding out jeffrey epstein may have founded /pol/ and that he listened to the nazi "the right stuff" podcast. he had a meeting with m00t and the same day moot opened /pol/
None of these words are in the Star Trek Encyclopedia
at least Khan Noonien Singh had some fucking charisma
what the fuck
A few people in LessWrong and Effective Altruism seem to want Yud to stick in the background while they get on with organizing his teachings into doctrine, dumping the awkward ones down the memory hole, and organizing a movement that can last when he goes to the Great Anime Convention in the Sky. In 2022 someone on the EA forum posted On Deference and Yudkowsky's AI Risk Estimates (i.e. "Yud has been bad at predictions in the past so we should be skeptical of his predictions today")
that post got way funnier with Eliezer's recent twitter post about "EAs developing more complex opinions on AI other than it'll kill everyone is a net negative and cancelled out all the good they ever did"
Quick, someone nail your 95-page blog post to the front door of lighthaven or whatever they call it.
A religion is just a cult that survived its founder - someone, at some point.
Cloudflare just announced in a blog post that they built:
a serverless, post-quantum Matrix homeserver.
it's a vibe-coded pile of slop where most of the functions are placeholders like
// TODO: check authorization.
Full thread: https://tech.lgbt/@JadedBlueEyes/115967791152135761
And of all possible things to implement, they chose Matrix. lol and lmao.
The interesting thing in this case for me is how anyone thought it was a good idea to draw attention to their placeholder code with a blog post. Like how did they go all the way to vibing a full post without even cursorily glancing at the slop commits.
I'm convinced by now that at least mild forms of "AI psychosis" affect all chatbot users; after a period of time interacting with what Angela Collier called "Dr. Flattery the Always Wrong Robot", people will hallucinate fully working projects without even trying to test whether the code compiles.
LWer: Heritage Foundation has some good ideas but they're not enough into eugenics for my taste
This is completely opposed to the Nietzschean worldview, which looks toward the next stage in human evolution, the Overman. The conservative demands the freezing of evolution and progress, the sacralization of the peasant in his state of nature, pregnancy, nursing, throwing up. "Perfection" the conservative puts in scare quotes; he wants the whole concept to disappear, replaced by a universal equality that won't deem anyone inferior. Perhaps it's because he fears a society looking toward the future will leave him behind. Or perhaps it's because he had been taught his Christian morality requires him to identify with the weak, for, as Jesus said, "blessed are the meek for they shall inherit the earth." In his glorification of the "natural ecology of the family," the conservative fails even by his own logic, as in the state of nature, parents allow sick offspring to die to save resources for the healthy. This was the case in the animal kingdom and among our peasant ancestors.
Some young, BASED Rightists like eugenics, and think the only reason conservatives don't is that liberals brainwashed them that it's evil. As more and more taboos erode, yet the one against eugenics remains, it becomes clear that dysgenics is not incidental to conservatism, but driven by the ideology itself, its neuroticism about the human body and hatred of the superior.
the overman¹
¹ better known by its German name
Technically superman is a more correct translation for that word (similarly to how superscript is the thing beyond the script)
the conservative… wants… a universal equality that won't deem anyone inferior.
perhaps it's because he had been taught his Christian morality requires him to identify with the weak
Which conservatives are these? This is just a libertarian fantasy, isn't it.
I had to do a triple take on that "won't deem anyone inferior" - like, what the fuck are you talking about? The core of conservatism is the belief in rigid hierarchies! Hierarchies have superiors and inferiors by definition!
That depends on whether you consider the "inferior" to be human, if they're even still alive after the eugenics part.
Lot of hitler particles in this one.
I don't know very much about Nietzsche (I never finished reading my cartoon guide to Nietzsche), but I'm still pretty sure this isn't Nietzsche
I think I read the Foucault book in that series to prep for high-school debate team.
There's a Baudrillard one as well. I have a copy of the feminism one and I think it's actually very good, although very 90s
Nah, I'm not sure how much he was into eugenics (he was at the very least definitely in favour of killing invalid children), but grandiose and incoherent reactionary aristocratic bullshit is a 100% valid reading of Nietzsche.
When all the worst things come together: ransomware probably vibe-coded, discards private key, data never recoverable
During execution, the malware regenerates a new RSA key pair locally, uses the newly generated key material for encryption, and then discards the private key.
Halcyon assesses with moderate confidence that the developers may have used AI-assisted tooling, which could have contributed to this implementation error.
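The failure mode is easy to see in a toy sketch (textbook-sized numbers, not real cryptography, and not the malware's actual code, which the report doesn't include):

```python
# Toy RSA illustrating the bug: generate a key pair, encrypt with the
# public half, then discard the private half. With real key sizes,
# recovering d afterwards requires factoring n, so the data is gone.
def make_keypair():
    p, q = 61, 53                 # toy primes (real ones are ~1024+ bits)
    n = p * q                     # public modulus
    phi = (p - 1) * (q - 1)
    e = 17                        # public exponent
    d = pow(e, -1, phi)           # private exponent (modular inverse of e)
    return (n, e), d

(n, e), d = make_keypair()
ciphertext = pow(42, e, n)        # encrypt the "victim's data"
del d                             # the fatal step the report describes
# Decryption would be pow(ciphertext, d, n); without d, nobody --
# including the attackers -- can ever produce the plaintext.
```

At toy sizes you can recover d by factoring n in microseconds; at real sizes you cannot, which is the whole point of RSA and exactly why discarding the private key makes the encryption a pure wiper.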
There's a scene in *Blade Runner 2049* where some dude explains that all public records were destroyed a decade or so earlier, presumably by malicious actors. This scenario looks more and more plausible with each passing day, but replace malice with stupidity.
Someone is probably hawking AI driven backups as we type
this is just notpetya with extra steps
@nightsky @BlueMonday1984
I worked in the IR space for a couple of years - in my experience a significant portion of data encrypted by ransomware is just unrecoverable for a variety of reasons: encryption was interrupted, the private key was corrupted, decryptors were junk, data was encrypted multiple times and some critical part of the key material was corrupted, the underlying hardware/software was on its last legs anyway, etc.
I know this is like shooting very large fish in a very small barrel, but the openclaws/molt/clawd thing is an amazing source of utter, baffling ineptitude.
For example, what if you could replace cron with a stochastic scheduler that cost you a dollar an hour by running an operation on someone elseās gpu farm, instead of just checking the local system clock.
The user was then pleased to announce that they'd been able to solve the problem by changing the model and reducing the polling interval. Instead of just checking the clock. For free.
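For the record, the free version the user skipped is a few lines of standard library; a minimal sketch (the task body and delay are made up):

```python
# A local scheduler that just checks the system clock, cron-style.
# No model, no polling a GPU farm, no dollar an hour.
import sched
import time

scheduler = sched.scheduler(time.time, time.sleep)
log = []

def task():
    # stand-in for whatever the agent was supposed to do on schedule
    log.append(time.time())

scheduler.enter(0, 1, task)  # delay in seconds from now; 3600 for "hourly"
scheduler.run()              # blocks until all scheduled events have fired
```

In practice you'd just use cron itself, but even the dumbest pure-Python loop over `time.sleep` beats renting nondeterminism by the hour.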
https://bsky.app/profile/rusty.todayintabs.com/post/3mdrdhzqmr226
Is there a pivottoai that I missed that introduces this? At some point people just started saying "clawd" like it's a real word and I have zero idea what it is, or if I even should know what it is.
tl;dr: someone made a thing where chatbots control a computer, called clawdbot, moltbot, or openclaw: https://github.com/openclaw/openclaw
someone else made a thing where these chatbots can chat at each other: https://www.moltbook.com/
and now all the ai people are freaking out about how game changing chatbots doing computer tasks (dangerously and expensively) is. could this be a robot consciousness? the end of the economic order? an excuse for the bubble to go on for another fiscal quarter?
I might be missing something, but I think that's literally it.
…well that's a goddamn experience. I appreciate that some folks are out here fighting the good fight, however.
https://www.moltbook.com/post/0dbfe2c8-b5be-4eff-85e5-9156d85a85c1
Also, despite being a less-than-zero effort attack, please note that as of sharing we have one successful "corruption" (i.e. a comment about zucchini and API keys) and two comments from bots too stupid to coherently understand the OP at all.
So basically subreddit simulator?
I admire how persistent the AI folks are at failing to do the same thing over and over again, but each time coming up with an even more stupid name. Vibe coding? Gas Town? Clawdbot, I mean Moltbook, I mean OpenClaw? It's probably gonna be something different tomorrow, isn't it?
Garbage sports teams rapidly cycling through logos until they magically become good
Counterpoint: these guys

(Expect the Las Vegas Raiders to announce their organization-wide AI initiative some time after the Super Bowl)
Now I'm just imagining an AI quarterback and the whole team revolting at following plays called by something that won't end up at the bottom of the 1000 lb pile of meat if they fuck it up.
In a way it is amazing, as the science fiction idea is AGIs behaving like agents to help us out. We don't have AGIs, but they started making the agents regardless. Feels very cargo cult, but for fiction. Beam me up Scotty, I'm done.
Reminds me that the State of the Art short story collection also had a story where they give a semi-smart teleport machine the wrong instructions so it teleports itself. (Causing the start of WW3 on earth, basically, which fails because corporations suck).
@rook @BlueMonday1984 i knew more about my Windows (98 at the time) on Boxing Day, 1999 after I got my first ever PC for Christmas at age 9
some fun dorking opportunities out there
Regular suspect Stephen Wolfram makes claims of progress on P vs NP. The orange place is polarized and comments are full of deranged AI slop.
I study complexity theory so this is precisely my wheelhouse. I confess I did not read most of it in detail, because it does spend a ton of space working through tedious examples. This is a huge red flag for math (theoretical computer science is basically a branch of math), because if you truly have a result or idea, you need a precise statement and a mathematical proof. If you're muddling through examples, that generally means you either don't know what your precise statement is or you don't have a proof. I'd say not having a precise statement is much worse, and that is what is happening here.
Wolfram here believes that he can make big progress on stuff like P vs NP by literally just going through all the Turing machines and seeing what they do. It's the equivalent of someone saying, "Hey, I have some ideas about the Collatz conjecture! I worked out all the numbers from 1 to 30 and they all worked." This analogy is still too generous; integers are much easier to work with than Turing machines. After all, not all Turing machines halt, and there is literally no way to decide which ones do. Even the ones that halt can take an absurd amount of time to halt (and again, how much time is literally impossible to decide). Wolfram does reference the halting problem on occasion, but quickly waves it away by saying, "in lots of particular cases … it may be easy enough to tell what's going to happen." That is not reassuring.
I am also doubtful that he fully understands what P and NP really are. Complexity classes like P and NP are ultimately about problems, like "find me a solution to this set of linear equations" or "figure out how to pack these boxes in a bin." (The second one is much harder.) Only then do you consider which problems can be solved efficiently by Turing machines. Wolfram focuses on the complexity of Turing machines, but P vs NP is about the complexity of problems. We don't care about the "arbitrary Turing machines 'in the wild'" that have absurd runtimes, because, again, we only care about the machines that solve the problems we want to solve.
Also, for a machine to solve problems, it needs to take input. After all, a linear equation solving machine should work no matter what linear equations I give it. To have some understanding of even a single machine, Wolfram would need to analyze the behavior of the machine on all (infinitely many) inputs. He doesn't even seem to grasp the concept that a machine needs to take input; none of his examples even consider that.
Finally, here are some quibbles about some of the strange terminology he uses. He talks about "ruliology" as some kind of field of science or math, and it seems to mean the study of how systems evolve under simple rules or something. Any field of study can be summarized in this kind of way, but in the end, a field of study needs to have theories in the scientific sense or theorems in the mathematical sense, not just observations. He also talks about "computational irreducibility", which is apparently the concept of thinking about what is the smallest Turing machine that computes a function. This doesn't really help him with his project, but not only that, there is a legitimate subfield of complexity theory called meta-complexity that is productively investigating this idea!
If I considered this in the context of solving P vs NP, I would not disagree if someone called this crank work. I think Wolfram greatly overestimates the effectiveness of just working through a bunch of examples in comparison to having a deeper understanding of the theory. (I could make a joke about LLMs here, but I digress.)
He doesn't even seem to grasp the concept that a machine needs to take input; none of his examples even consider that.
This is the fundamental mistake that students taking Intro to Computation Theory make and like the first step to teach them is to make them understand that P, NP, and other classes only make sense when you rigorously define the set of inputs and its encoding.
a lot of this "computational irreducibility" nonsense could be subsumed by the time hierarchy theorem, which apparently Stephen has never heard of
He straight up misstates how NP computation works. Essentially he writes that a nondeterministic machine M computes a function f if on every input x, there exists a path of M(x) which outputs f(x). But this is total nonsense - it implies that a machine M which just branches repeatedly to produce every possible output of a given size "computes" every function of that size.
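For contrast, a sketch of the textbook version (standard complexity theory, not anything from Wolfram's post): a language \(L\) is in NP when there is a polynomial \(p\) and a deterministic polynomial-time verifier \(V\) such that

```latex
% NP via certificates: the existential quantifier ranges over witnesses
% checked by a single deterministic verifier, not over whatever some
% machine happens to print on some branch.
\[
  x \in L \;\Longleftrightarrow\; \exists\, w \ \text{with}\ |w| \le p(|x|)
  \ \text{and}\ V(x, w) = 1 .
\]
```

Crucially, \(V\) must reject every non-witness. The "branch and print everything" machine supplies no verifier at all, which is why under the misstated definition it would vacuously "compute" every function.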
He doesn't even seem to grasp the concept that a machine needs to take input; none of his examples even consider that.
So in a way, what you're saying is that input sanitization (or at the very least, sanity) is an important concept even in theory
What TF is his notation for Turing machines?
I think that's more about Wolfram giving a clickbait headline to some dicking around he did in the name of "the ruliad", a revolutionary conceptual innovation of the Wolfram Physics Project that is best studied using the Wolfram Language, brought to you by Wolfram Research.
The full ruliad, which appears at the foundations of physics, mathematics and much more, is the entangled limit of all possible computations. […] In representing all possible computations, the ruliad, like the "everything machine", is maximally nondeterministic, so that it in effect includes all possible computational paths.
Unrelated William James quote from 1907:
The more absolutistic philosophers dwell on so high a level of abstraction that they never even try to come down. The absolute mind which they offer us, the mind that makes our universe by thinking it, might, for aught they show us to the contrary, have made any one of a million other universes just as well as this. You can deduce no single actual particular from the notion of it. It is compatible with any state of things whatever being true here below.
Holy shit, I didn't even read that part while skimming the later parts of that post. I am going to need formal mathematical definitions for "entangled limit", "all possible computations", "everything machine", "maximally nondeterministic", and "eye wash", because I really need to wash out my eyes. Coming up with technical jargon that isn't even properly defined is a major sign of math crankery. It's one thing to work at a high level of abstraction; it's another to deploy fancy words just to make your prose sound more profound.
the ruliad is something in a sense infinitely more complicated. Its concept is to use not just all rules of a given form, but all possible rules. And to apply these rules to all possible initial conditions. And to run the rules for an infinite number of steps
So it's the complete graph on the set of strings? Stephen, how the fuck is this going to help with anything
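taking the quote literally over a finite toy universe (my dicking around, not his): if "all possible rules" means every rewrite x → y is allowed by some rule, the one-step reachability relation is just all pairs of strings, which is exactly as informative as it sounds.

```python
from itertools import product

# A tiny universe: all binary strings of length 1 and 2.
strings = ["".join(b) for n in range(1, 3) for b in product("01", repeat=n)]

# Under "all possible rules applied to all possible initial conditions",
# every string rewrites to every string in one step: the complete
# directed graph with self-loops, |V|^2 edges, zero information.
edges = {(x, y) for x in strings for y in strings}

print(len(strings), len(edges))  # 6 36
```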
The Ruliad sounds like an empire in a 3rd rate SF show
(Wolfram shoehorning cellular automata into everything to universally explain mathematics) shaking hands (my boys explaining which pokemon could defeat arbitrary fictional villains)
that is best studied using the Wolfram Language,
isn't this just a particularly weird lisp </troll>
New blogpost from Drew DeVault, titled "The cults of TDD and GenAI". As the title suggests, it's drawing comparisons between how people go all-in on TDD (test-driven development) and how people go all-in on slop machines.
It's another post in the genre of "why did tech fall for AI so hard" that I've seen cropping up, in the same vein as mhoye's Mastodon thread and Iris Meredith's "The problem is culture".
Dang, I want to find this article more relatable than I do. Most software I have dev experience with doesn't have the problem of relying on automated tests too much; it has the exact opposite one.
And while I very much write tests for the dopamine high and false sense of security green checkmarks provide, I still prefer that to the real sense of un-security of not having tests.
I have mixed feelings about this one: The Enclosure feedback loop (or how LLMs sabotage existing programming practices by privatizing a public good).
The author is right that stack overflow has basically shrivelled up and died, and that llm vendors are trying to replace it with private sources of data they'll never freely share with the rest of us, but I don't think that chatbot dev sessions are in any way "high quality data". The number of occasions when a chatbot-user actually introduces genuinely useful and novel information will be low, and the ability of chatbot companies to even detect that circumstance will be lower still. It isn't enclosing valuable commons, it is squirting sealant around all the doors so the automated fart-huffing system and its audience can't get any fresh air.
I don't think that chatbot dev sessions are in any way "high quality data".
Yeah, Gas Town is being belabored to death, but it must be reiterated that I doubt the long-term value proposition of "Kubernetes fan fiction"
I also didn't find the argument very persuasive.
The LLM companies aren't paying anything for content. Why should they stop scraping now?
Oh, they won't. It's just that they've already killed the golden goose, and no-one is breeding new ones, and they need an awful lot of gold still.