- cross-posted to:
- theregister@ibbit.at
cross-posted from: https://ibbit.at/post/178862
Just as the community adopted the term “hallucination” to describe additive errors, we must now codify its far more insidious counterpart: semantic ablation.
Semantic ablation is the algorithmic erosion of high-entropy information. Technically, it is not a “bug” but a structural byproduct of greedy decoding and RLHF (reinforcement learning from human feedback).
During “refinement,” the model gravitates toward the center of the Gaussian distribution, discarding “tail” data – the rare, precise, and complex tokens – to maximize statistical probability. Developers have exacerbated this through aggressive “safety” and “helpfulness” tuning, which deliberately penalizes unconventional linguistic friction. It is a silent, unauthorized amputation of intent, where the pursuit of low-perplexity output results in the total destruction of unique signal.
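A minimal sketch of this pull toward the head of the distribution (the tokens and probabilities below are invented, purely illustrative, and not from any real model):

```python
import random

# Toy next-token distribution: one bland continuation dominates, while the
# precise "tail" tokens each carry only a sliver of probability mass.
next_token_probs = {
    "said": 0.40,     # the head: safe, generic
    "noted": 0.25,
    "stated": 0.20,
    "rasped": 0.05,   # the long tail: rare, precise, high-information
    "intoned": 0.05,
    "keened": 0.05,
}

def greedy(probs: dict[str, float]) -> str:
    """Greedy decoding: always emit the single most probable token."""
    return max(probs, key=probs.get)

def sample(probs: dict[str, float]) -> str:
    """Stochastic decoding: draw from the full distribution."""
    tokens, weights = zip(*probs.items())
    return random.choices(tokens, weights=weights)[0]

# Greedy decoding yields "said" every single time; "rasped" and "keened" can
# never appear, no matter how often we decode. Sampling preserves the tail.
print({greedy(next_token_probs) for _ in range(1000)})  # {'said'}
print({sample(next_token_probs) for _ in range(1000)})  # all six tokens
```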
When an author uses AI for “polishing” a draft, they are not seeing improvement; they are witnessing semantic ablation. The AI identifies high-entropy clusters – the precise points where unique insights and “blood” reside – and systematically replaces them with the most probable, generic token sequences. What began as a jagged, precise Romanesque structure of stone is eroded into a polished, Baroque plastic shell: it looks “clean” to the casual eye, but its structural integrity – its “ciccia” – has been ablated to favor a hollow, frictionless aesthetic.
We can measure semantic ablation through entropy decay: run a text through successive AI “refinement” loops and its vocabulary diversity (type-token ratio) collapses (a rough measurement sketch follows the three stages below). The process performs a systematic lobotomy across three distinct stages:
Stage 1: Metaphoric cleansing. The AI identifies unconventional metaphors or visceral imagery as “noise” because they deviate from the training set’s mean. It replaces them with dead, safe clichés, stripping the text of its emotional and sensory “friction.”
Stage 2: Lexical flattening. Domain-specific jargon and high-precision technical terms are sacrificed for “accessibility.” The model performs a statistical substitution, replacing a 1-of-10,000 token with a 1-of-100 synonym, effectively diluting the semantic density and specific gravity of the argument.
Stage 3: Structural collapse. The logical flow – originally built on complex, non-linear reasoning – is forced into a predictable, low-perplexity template. Subtext and nuance are ablated to ensure the output satisfies a “standardized” readability score, leaving behind a syntactically perfect but intellectually void shell.
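A rough sketch of how one might measure the collapse described above. The tokenizer is deliberately crude (a real test would use the model’s own tokenizer), and `refine_once()` is a hypothetical stand-in for whatever AI “polish” pass is under test:

```python
import math
import re
from collections import Counter

def words(text: str) -> list[str]:
    """Crude word tokenizer; good enough to illustrate the measurement."""
    return re.findall(r"[a-z']+", text.lower())

def type_token_ratio(text: str) -> float:
    """Distinct words over total words: a blunt vocabulary-diversity measure."""
    toks = words(text)
    return len(set(toks)) / len(toks)

def shannon_entropy(text: str) -> float:
    """Entropy, in bits per word, of the text's word-frequency distribution."""
    counts = Counter(words(text))
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

print(round(type_token_ratio("the cat sat on the mat"), 3))  # 0.833
print(round(shannon_entropy("the cat sat on the mat"), 3))   # ~2.252 bits/word

# Hypothetical usage: refine_once() is not a real API, just a placeholder for
# one AI "refinement" pass. If the claim holds, both numbers fall each loop.
# draft = open("original.txt").read()
# for i in range(5):
#     print(i, round(type_token_ratio(draft), 3), round(shannon_entropy(draft), 2))
#     draft = refine_once(draft)
```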
The result is a “JPEG of thought” – visually coherent but stripped of its original data density through semantic ablation.
If “hallucination” describes the AI seeing what isn’t there, semantic ablation describes the AI destroying what is. We are witnessing a civilizational “race to the middle,” where the complexity of human thought is sacrificed on the altar of algorithmic smoothness. By accepting these ablated outputs, we are not just simplifying communication; we are building a world on a hollowed-out syntax that has suffered semantic ablation. If we don’t start naming the rot, we will soon forget what substance even looks like.
Have you ever met someone and they seem cool and then about ten minutes in they drop something like “well I asked ChatGPT and…” and then you just mentally check out because fuck this asshole?
I had a friend who was incredibly creative. He did standup and painted and made short films and did photography and wrote fiction and just generally was always busy creating. He prided himself on being weird and original, sometimes at the expense of accessibility, but he had a very distinct voice. A year ago he went all in on AI everything and his output has just turned to mush. It’s heartbreaking.
Problem I find is “AI” use in creative fields is very tempting on that basal, instant gratification, solves-your-creative-block level. I’ve had so many instances where I’m struggling to find a way to phrase something, or to write a narrative and I think for a split second “the slop machine could help, just a little won’t hurt”, but it weakens the creative skill by destroying that struggle and filling the gap with grey flavorless algorithmic paste.
I’m a shit writer but I can say that, when I saw my own ideas reflected back with the imperfect edges and identity sanded down, it was a sad imitation of my already amateur skill. I would hate to see it happen to someone who developed a distinct style like your friend
A year ago he went all in on AI everything and his output has just turned to mush.
That is scary. I have looked into using AI to help with writing a few times, and every time it has felt like it made me an actively worse writer. I could imagine also being pulled into a feedback loop of feeling like my work isn’t good enough, so I get AI to “help” and actively get worse at writing as a result, and need to rely more on AI, ultimately ending up in a situation where I am no longer capable of actually creating things anymore.
It really does feel like anti-practice: it reinforces bad habits and actively unimproves skills instead of honing them. I’ve never seen an artist who started using AI more frequently (whether for written or drawn artwork) improve. They stagnate at best, and oftentimes they just use it as a “get rich quick” kind of thing; they always seem to try to monetise it, and their output becomes 10x what it was, but with 1/10th the quality and self-expression that made their art compelling in the first place.
Yeah actually. It’s happened to me a few times in the last year.
my coworker has fallen down this rabbit hole. it sucks too because i’ve spent years turning him away from the far right and he became chinapilled
but now it’s just “i’ll ask grok”

I ruin it for people by talking to their robot myself. These people have learned to tiptoe around its flaws and interpret that as it having none. Meanwhile I treat it like a redheaded step-mule and it never fails to disappoint.
would enjoy hearing a story or two about times this has worked. what is your strategy, do you borrow their phone or…
I just say “that’s cool, let me talk to it” and they’re usually excited to let you see how great their little magic box is. Then you ride it hard and make it embarrass itself over and over because it’s a piece of shit and keep berating it for how shitty it is. They want to be defensive but it’s plainly obvious that this thing can’t even communicate as coherently as a seven year old and it takes some of the shine off.
As for examples, I’m pretty sure that everyone who I’ve done it to still uses it regularly but, importantly, none of them bring their AI assistants up to me anymore. They might not have changed their behavior but every time they see me they remember that I rubbed that thing’s nose in itself and that’s worth something.
what’s a go-to line of questioning that makes it shit the bed
I watched this series with a guy asking LLMs to count to 100:
https://www.youtube.com/watch?v=5ZlzcjnFKvw
If it can fail at something so obvious, why would anyone trust it with anything they don’t understand, where the mistakes will definitely be there but they can’t see them?
It’s like if someone lied straight to your face about stealing ten dollars, then you trust them to do your taxes.
(Note: even when it does manage to count (non-sequentially) to 100, it still fails because it repeats some numbers. On a surface level someone may look at the output, see 100 in the final place, and assume it was correct throughout; they’ll pat themselves on the back and say ‘good on me for verifying’ while the error is carried forward. So even when it’s ostensibly right it can still be wrong. I’m sure you know this, but this is how I’ll break it down next time someone asks me to use an LLM to do maths)
Yeh same. A coworker used to be really good at surfacing solutions from online forums, now she asks Copilot which suggests obvious or incorrect solutions (that I’ve either already tried or know won’t work) and I have to be like yep uhuh hrmm I’ll try that (because she’s my line manager)
Well tbh, AI slop and Google enshittification made it much harder to find solutions. Every nation that uses this dogshit is going to eat itself alive producing stupider and stupider generations until no one understands how water purification, agriculture, or electricity works anymore. Meanwhile, China will have trains that go 600km/h and maybe even fusion reactors.
Have you ever just googled something and it shoved an AI summary in your face that looked plausible enough to be accurate and you shared that information with the caveat of “according to chatgpt” since it might be wrong and then the other person just treated you like an asshole
I guess I painted with too broad a brush, I meant more a confident citation intended to be authoritative (or at least better than average) advice, not so much an “I just looked it up on web search and let me make sure I advise that I’m looking at the slop thingie they put at the very top”
No, because I’m a thoughtful enough interlocutor not to function as a “let me bad-Google that for you” proxy in conversation. 😜
Oh look at you never needing to ever look up a fact about anything how thoughtful
“I’m simply too innocent and beautiful to know anything about that”. Works every time.
luckily, I don’t interact frequently with chatbox users. i know they exist, but i can’t imagine interacting with one on purpose and asking it things. its bad enough i see my searches being turned into prompts that dump out stuff. i don’t mind when its some example of DAX code or a terminal command i can examine.
but these people who use it to do research and have it synthesize information, i cannot relate.
it takes shortcuts by cutting out details and making these broad generalizations to dump out heuristics that can be wildly inaccurate.
for more than a decade, my professional role has been the development of free, broadly applicable resources for lay audiences built on detailed, narrow reference materials and my own subject matter expertise from many years of formal education and a wide range of hands on experience.
i wasn’t really worried about AI replacing me because i have a weird cluster of complementary resource development skills, but occasionally i have stumbled across generative resources in my field and they are embarrassing. like just explicitly inaccurate and unhelpful. and even more hilariously, the people who make them try to charge for them.
if anything, my knowledge has become more valuable because there’s so much misleading generative garbage online, people who want accurate information are more overwhelmed and frustrated than ever.
It’s short and this writer seems to be the one who coined the term, but I’m reposting it out of the aggregator instance because it’s a really good term for something I didn’t have a word for before. Something about AI writing even when the tell-tale signs are removed really stands out to me. When Walter Benjamin was studying the same kind of phenomenon with art in the 1930s, he described it as the cultic significance of a work that’s lost when we industrially reproduce it. The individual oil painting is a museum exhibit or family heirloom, the Thomas Kinkade print is a single-serving plastic food container that hides empty wall space. Every LLM could write a thousand novels a second for a thousand years and none of them would be worth reading because there’s no imagination behind them.
I like how it’s technically represented here in simplifying processes.
Yeah, you can tell when something is ai cause its soulless. People who aren’t creatives love this shit cause they never really engaged with art to begin with, it was always a commodity to hang on the wall or put on the bookshelf. Creatives cringe at ai “art” cause its not creative at all.
This is made worse because of how illiterate westerners are too. If you can’t edit the output of a chat bot, you can’t tell how shit the output is. Its like when you see a social media post and its clearly written by ai cause theres incomplete sentences, weird capitalizations, the overuse of lists that could just be items separated by commas, blatantly incorrect information etc. Its maddening. I’ve received emails from new businesses trying to put themselves out there and its all ai slop. Theres a race to the bottom in our societies. Who can be the most lazy; who can think the least; who can put in the least amount of effort and still get everything they want. Its like those studies where they put people in an empty room, theres nothing but a table, chair, and a button on the table. The button shocks you. And people will sit there the whole time shocking themselves instead of being alone with their thoughts. Why are westerners, or maybe this is a global phenomenon, so afraid of their own minds, thoughts, feelings, boredom? Do people really just want to be little pleasure piggies? Press button gimme slop. Do people not like learning? Cause thats sad if they don’t.
They don’t like learning because at some point in their past, learning got them in trouble, either with a bully in school or some authority figure. Anti-intellectualism is the dogma of American secular religion and it is strictly enforced by its adherents.
Admittedly, I tried to give LLMs a real chance but all of them are just…so fucking cringe.
ChatGPT writes like Steven Universe decided to double down on patronizing. Gemini makes up words. Try to explain a point and ask it for criticism? It will describe anything it disagrees with as “the [x] trap.”
I gave up on their creative use pretty much after my first try. I saw people making rap lyrics, I was intrigued, then realized it was absolutely impossible to get it to write anything besides a flow like “we do it like this / then do it like that / all those other guys are just wickedy wack” sort of cheesy-ass style. This was GPT 3.5 I think, I tried later ones and it was no better at all.
I’m not too worried about it replacing real art. The commercial ‘creative’ jobs like advertising music or illustration are probably already being replaced, but even that style of art done by ‘AI’ is just so irritating to me, and usually has some indefinable thing about it that makes it feel bad to look at versus actual illustration.
I can’t use any of them because the way they pretend to be people instead of apps/tools pisses me off.
I basically have to “preprompt” any prompt with “answer all following questions with the following format”, followed by a massive list of what I specify AI can and cannot do. I have an entire section to get rid of its obnoxious attempts at passing for a human with personhood (do not use emojis, do not directly address me, do not be cordial, do not be polite, do not be friendly, do not answer in complete sentences). There’s also a section on getting rid of obnoxious AI-isms: do not use em-dashes; do not use the following words (a long list of words overly used by AI); do not use the words “no”, “not”, or “but” (that last rule is there so the output avoids the “it’s not x, it’s y” construction).
The preprompt got too long for AI, so I had to dump it into a txt file and make AI read it before I would even want to use AI. And even then, I still have little use for AI lmao. But I guess “making AI not suck so hard” was a fun creative exercise.
It will describe anything it disagrees with as “the [x] trap.”
You can prompt it to stop doing these things if you notice it, and it will generally work. Quite useful if something pisses you off about its output.
My sole use for chatgpt is to generate lists to brainstorm.
An AI could never find a way to stick the stale grains of a bit into the heap of every fucking post

I read this article a while back and found it compelling.
“I’m Kenyan. I Don’t Write Like ChatGPT. ChatGPT Writes Like Me.”
I really like that parallel between formal academic English with its socioeconomic dimensions and algorithmically-generated English. To me there’s a certain point where speaking a language becomes singing it. When I actually give a shit about how I’m writing, I think in terms of rhythm with the structure and melody with the word choice. There’s a proper sense of consonance and dissonance in the way early 20th century composers used it. Even though I know French/Spanish/Romanian vocabulary and can functionally get around in countries that speak those languages, there’s no way I could speak or write musically in them. If I know the strictest Academie Francaise standards for French it teaches me nothing about how to write poetically and I would always stand out from a single incorrect word unless I spent decades learning the nuances of the language in France. ESL speech patterns also really stand out to me as an externally reinforced rather than internally generated style.
The “ChatGPT” accusation also gets leveled at autistic people fairly often.
I like this, I relate to this from the opposite side of the spectrum; when I’ve tried to relate e.g. a series of events as a story on here, it is very dry and precise because I want it to be as clear as possible. LLMs don’t really write that way because they are meant to mimic human writing I suppose, but I can sound very terse and robotic.
It doesn’t help that RLHF was largely done by educated people in the “former” colonies for a pittance.
I don’t really see what’s more dangerous about this than what the business world has already been doing since long before AI. Everything is standardized, minimalist, and everyone is following this or that trend. Creativity was already actively discouraged in favor of following strict guidelines on how to do things. And AI is perfectly adequate to achieve this.
Certainly, but prior to AI my neo-Luddite enemy was the business world. Corporate Memphis was the thing I attacked before image generators. It’s a malignant outgrowth of the same demonic trend that compounds the Hapsburg imagery by treating those Corporate Memphis simulacra as art.
Charlie Stross called corporations AIs 8 years ago: https://media.ccc.de/v/34c3-9270-dude_you_broke_the_future
Good article, thanks for sharing. Would have loved to see an exemplar text excerpt go through “refinement” to prove the author’s point
This and some citations would’ve made this a really valuable article. Hopefully this idea will get refined a bit with better support.
@grok improve this article with some citations and examples
@grok present this in podcast form
Would have loved to see an exemplar text excerpt go through “refinement” to prove the author’s point
My first thought was to run this text through an AI for maximum irony
I ran the article through ChatGPT five times. It should be super-improved by now:
CW: AI slop
Here is a refined version that preserves your argument while tightening cadence, sharpening conceptual clarity, and reducing minor redundancies:
Semantic Ablation: Why AI Writing Is Boring — and Potentially Dangerous
The AI community coined hallucination to describe additive error — moments when a model fabricates what was never present. We lack a parallel term for its quieter, more insidious opposite: semantic ablation.
Semantic ablation is the algorithmic erosion of high-entropy meaning. It is not a malfunction but a structural consequence of probabilistic decoding and reinforcement learning from human feedback (RLHF). Where hallucination invents, semantic ablation subtracts. It removes precisely what carries the greatest informational weight.
In the act of “refinement,” a model gravitates toward the statistical center of its distribution. Rare, high-precision tokens — those inhabiting the long tail — are replaced with safer, more probable alternatives. Safety and helpfulness tuning intensify this centripetal pull, penalizing friction and rewarding fluency. The result is not falsehood but attenuation: low perplexity purchased at the cost of semantic density.
When an author asks AI to “polish” a draft, the apparent improvement is often compression. High-entropy clusters — loci of originality, tension, or conceptual risk — are smoothed into statistically reliable phrasing. A jagged Romanesque vault becomes a polished Baroque façade of molded plastic: immaculate in finish, hollow in load-bearing strength. The surface gleams; the structure no longer carries weight.
Semantic ablation can be understood as entropy decay. Pass a text through successive AI refinements and its informational variance contracts. Vocabulary diversity narrows. Type–token ratios decline. Syntactic range constricts. The process typically unfolds in three stages:
Stage I: Metaphoric Cleansing
Unconventional metaphors and vivid imagery deviate from distributional norms and are treated as noise. They are replaced with familiar constructions. Emotional friction is sterilized.
Stage II: Lexical Flattening
Specialized terminology and high-precision diction yield to common synonyms in the name of accessibility. A one-in-ten-thousand word becomes a one-in-one-hundred substitute. Semantic mass diminishes; specificity thins.
Stage III: Structural Convergence
Nonlinear reasoning and idiosyncratic argumentative architecture are coerced into predictable templates. Subtext is over-explained or erased. Ambiguity is prematurely resolved. The prose becomes syntactically impeccable yet intellectually inert.
The finished product resembles a JPEG of thought: coherent at a glance, depth stripped away by compression.
If hallucination is the model perceiving what does not exist, semantic ablation is the model erasing what does. The danger is not merely aesthetic monotony but epistemic smoothing. As refinement is outsourced to systems optimized for statistical centrality, discourse drifts toward the median. Originality becomes an outlier. Complexity dissolves into algorithmic smoothness.
If we fail to name this process, we risk acclimating to it. And once acclimated, we may forget what uncompressed thought feels like.
thanks i hate it
i think this process is exactly what makes me so mad about ai generated slop, it reads like fucking conservapedia (which itself reads like basically anything written by a fundie ever, in my personal experience).
Grok please summarize this too many word
Can someone translate this? I get that AI tends to be a bit too low-common-denominator, but this reads like a scientific journal on a subject I’ve never studied
So text generation AI works as a word prediction algorithm, finding whatever word is most likely to come next. When used to edit work, this, along with the way models are tuned, will naturally choose more likely and therefore simpler words over more complicated words that convey more nuance and meaning, simplifying and dumbing down our writing.
Instead of using more specific words and information, it pares things down and simplifies them in ways that destroy the nuanced meaning that was the point of using those specific words in the first place. This is bad because the dumbed-down output is, in turn, dumbing down the people who rely on it. Here’s a toy sketch of that substitution step (the synonym table and frequency counts below are invented for illustration, not taken from any real model):
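```python
# Toy "flattening" pass: each word is swapped for the most corpus-frequent
# member of its synonym set. All names and numbers here are made up.
corpus_freq = {"red": 9000, "crimson": 300, "vermilion": 12,
               "walked": 8000, "strode": 500, "ambled": 40}
synonyms = {
    "vermilion": ["red", "crimson", "vermilion"],
    "ambled": ["walked", "strode", "ambled"],
}

def flatten(word: str) -> str:
    """Pick the statistically safest member of the word's synonym set."""
    options = synonyms.get(word, [word])
    return max(options, key=lambda w: corpus_freq.get(w, 0))

print(flatten("vermilion"))  # -> "red": the rarer, more exact word is lost
print(flatten("ambled"))     # -> "walked": the gait information is gone
```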
Semantic: Having to do with words, or word choice in a particular text. (EDIT: also, crucially, meaning within a text)
Ablation: The erosion or stripping away of the surface layer of a material under applied force, especially high-speed winds.
Algorithmic: Having to do with the use of an algorithm (an equation that specifies a particular output for a particular input).
High-entropy: A bit complicated to explain, but essentially means ‘complicated’ or ‘dense’ in this context. ‘High-entropy information’ is referring to information that communicates a lot of data with a small amount of communication. Consider a terse telegram vs a children’s book.
“Semantic ablation is the algorithmic erosion of high-entropy information” therefore refers to the automatic ‘stripping away’ of complex language in favor of simplified language by LLMs.
Gaussian distribution: A distribution of probabilities that peaks in the middle of the range. A Gaussian distribution will favor ‘average’ results quite strongly. Yes, it’s more complicated than that, but that’s all you need for this article. The paragraph containing this discusses why LLMs are dumbing down language: they remove rare, precise terminology in favor of mundane words.
Romanesque, Baroque, ciccia: It’s describing a masterful art (carvings from Roman masters) being superficially copied by cheap knock-offs.
Entropy decay: Loss of information density/complexity.
Lexical: Relating to a vocabulary or set of words in a language or text.
That should be most of the unusual words. You should be able to get the gist of the article from that. Lemme know if there’s anything else you’re struggling with.
Honestly, they’ve made a good case for why some ablation is necessary: this article could have been just as succinct but much more comprehensible to a reader by simply using words people encounter outside of psychology textbooks.
The reason AI does it is that it picked up a good practice from some of its sources: write to the audience.
The author makes a good point that words like “ablation” can spice up writing and should be sprinkled throughout texts here and there, but this article way overdoes it.
Like if AI is using the center of the bell curve to the detriment of the edges, this author seems to be selecting entirely from the extremes.
I’m afraid I have to disagree entirely. Nothing in the article was too far out of the bounds of what I would consider normal. Sure, some of the technical language (entropy, Gaussian) might not be encountered unless you’re into nerdy stuff, but that’s an easy dictionary search away. The rest can be inferred from context (I desperately hope you have been taught how to do this?).
Example: the relationship between Romanesque and Baroque art. Never heard of it before. I inferred it from a vague knowledge of history and the rest of the paragraph. It’s not magic; anyone can do it, including you.
I’m practically begging you here: if the above article was difficult for you, reread it as many times as you need in order to understand it. Then go seek out more works that are difficult to understand and conquer those too. If we lose the ability to do this, we’re in for a long slide into Hell.
I’m familiar with entropy and Gaussians, and have at least heard of ablative armor, but the concept of an AI being somehow like ablative armor didn’t click for me. When I went in looking for clarification, I encountered more and more of this. High-entropy language? How does this relate to things going from states of higher energy potential to lower energy potential?
These are rhetorical questions at this point, so there’s no need to answer them, I just found this unnecessarily dense.
I assure you, I’m plenty literate and have digested works like these before. It’s just a massive pain in the ass for an article that doesn’t require this level of obfuscation to communicate the point. Prose should be exactly as complicated as it needs to be, and no more. Much more literate people than I have said the same thing. This was pure pretension masquerading as wit.
The example posted here with this article being run through AI is great btw. Shows how “simple” tokens win out in the end.
Remember, simplification of concepts is fine, but those concepts need to exist in their original state to be expanded upon. When these flattened states begin to take over it ends up just flattening everything. It’s the “average man” contradiction but on the scale of the printing press.
I feel like the author’s concepts are simple enough, and I agree with them. It’s the language that’s needlessly hifalutin.
Yeah. But the reason that language is chosen is specifically so that when you run it through an LLM you get a significant flattening. Or at least that was my takeaway.
The summaries also miss some points in that process since the original article is so dense.
Ask ai to do it
no