Thanks for writing this. A refreshing, in-depth, and well-studied take on it.
I’ve long been disappointed in the reaction I saw on this subject, including from people who I thought were “on the same side.” It was a learning experience in a way, because while I was watching people pop off with vague anti-AI talking points, I was trying to grapple with it through direct experience. It hadn’t really been my intention for it to be some communist investigation thing, but I had sort of stumbled into engaging with it and wanted to take it seriously as something to consider, even if informally in how I went about it.
Where I’m going with this is: it became a sort of hobby, but one considered through a communist worldview. I consider myself far from an expert on AI or communist theory, but even at the level I was at, it was painfully obvious that something was wrong with the anti-AI messaging. It’s that moment people talk about, where you’re familiar with a subject and others aren’t; normally you might trust them and take them at their word because you’re ignorant on the topic, but this time you know some things and you’re thinking, “Wait a minute, this sounds like ignorant bullshit. If it’s like that with this subject, what else are they bullshitting about in ignorance?”
So it’s nice to see some in-depth pushback. I know it’s not always the easiest thing for people to engage with as reading time and length go, but we need this degree of depth, no matter the subject matter. Twitter-length “hot takes” don’t suffice for dissecting what’s going on. We need to be more than reactive (which is what anti-AI messaging largely seems to be). The trope about “tankies” being right isn’t due to magic, but due to them analyzing systems and conditions more accurately than other modes of analysis do, and adjusting that analysis when new information contradicts it. We have to get to a place where we can plan, as AES states do. Otherwise we’re stuck in survival mode, hoping that raging against the machine will be enough to stave off full barbarism.
Great essay, read the entire thing.
But this won’t happen while we are still stuck in “you used AI? I instantly think less of you as a person.”
This part here, and the fact that people refuse to critically engage with it, is what bothers me the most about the anti-AI crowd, especially people who proclaim to be communists. Most of the arguments against AI are either moralistic (you’re stealing) or subjective (it looks bad), and while environmental concerns are obviously something to be accounted for, the problem there is once again capitalism. They think they need these huge resource-draining data centers because they’re focused on raw computational power rather than on efficiency, the way DeepSeek did with their R1 model.
At the end of the day I can only hope that we can utilize this technology to our advantage as communists.
This encapsulates my views pretty perfectly. Consumer AI (whether it’s LLMs, diffusion models, audio models, etc.) has been here for a while now, and is used by tens, if not hundreds of millions of people daily. Certainly, you could argue it’s overvalued or overhyped, but just as certainly it’s not valueless. And the open models coming out of China show there is a way to put control of AI in the people’s hands.
To just cede the entire field of consumer AI to liberal or right-wing groups would be a terrible strategic error, especially as Musk and co have gleefully weaponised AI for their political purposes. At the very least, we must figure out how to counter this, and we cannot do so by shunning the technology entirely.
That’s exactly my thinking here as well. The usefulness of LLMs is now a material fact, and their widespread adoption makes the future direction of this tech a matter of strategic importance. I’d also argue that this is precisely where we see the fundamental divergence between liberal and communist mindsets.
The liberal tendency often defaults to a form of procedural opposition such as voting against, boycotting, or attempting to regulate a problem out of existence, without seizing the means to effect meaningful change. Their idealist mindset mistakes symbolic resistance for material change. Many anarchists fall into the exact same cognitive trap, incidentally.
On the other hand, communists understand that real change is a product of our collective labor, which is what praxis is. If we do not want the future of AI to be dictated by corporate interests, then the only effective response is to do the work ourselves. We must build our own tools that work the way we want them to. Chinese companies have already done a lot of the legwork for us by publishing high-quality open-source models we can build upon. We don’t even have to start from scratch here.
We actually just finished uploading all of ProleWiki FR, translated from ProleWiki EN. Total time was 7 days. There’s a total of 3750 pages uploaded already, and we’ll bring the Library books over soon too (they’re still translating).
Total LLM involvement was writing the scripts and handling the translation. And it’s a pretty nice script too: it cuts each page into chunks, sends them to the LLM with a system prompt, saves progress after each chunk, and automatically retries if the API fails. If I’d learned Python specifically for this, I would still be trying to write that code.
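For the curious, here’s roughly the shape of such a script (a minimal sketch, not our actual code; the endpoint, model name, chunk size, and prompt are placeholders you’d swap for your own):

```python
import json
import time
from pathlib import Path

import requests

# Placeholders: point these at whatever OpenAI-compatible endpoint you use.
API_URL = "https://api.example.com/v1/chat/completions"
API_KEY = "YOUR_KEY_HERE"
MODEL = "some-open-model"
CHUNK_SIZE = 3000  # rough characters per chunk

SYSTEM_PROMPT = (
    "You are a translator. Translate the following MediaWiki markup "
    "from English to French. Preserve all wiki syntax exactly."
)


def chunk_text(text: str, size: int = CHUNK_SIZE) -> list[str]:
    """Split a page into chunks, breaking on paragraph boundaries."""
    chunks, current = [], ""
    for para in text.split("\n\n"):
        if current and len(current) + len(para) > size:
            chunks.append(current.strip())
            current = ""
        current += para + "\n\n"
    if current.strip():
        chunks.append(current.strip())
    return chunks


def translate_chunk(chunk: str, retries: int = 5) -> str:
    """Send one chunk to the LLM, backing off and retrying on API failure."""
    for attempt in range(retries):
        try:
            resp = requests.post(
                API_URL,
                headers={"Authorization": f"Bearer {API_KEY}"},
                json={
                    "model": MODEL,
                    "messages": [
                        {"role": "system", "content": SYSTEM_PROMPT},
                        {"role": "user", "content": chunk},
                    ],
                },
                timeout=120,
            )
            resp.raise_for_status()
            return resp.json()["choices"][0]["message"]["content"]
        except (requests.RequestException, KeyError):
            time.sleep(2 ** attempt)  # back off, then try again
    raise RuntimeError("chunk failed after all retries")


def translate_page(src: Path, dst: Path, progress: Path) -> None:
    """Translate one page chunk by chunk, checkpointing after each chunk."""
    done = json.loads(progress.read_text()) if progress.exists() else []
    chunks = chunk_text(src.read_text())
    for i, chunk in enumerate(chunks):
        if i < len(done):
            continue  # already translated on a previous run, skip it
        done.append(translate_chunk(chunk))
        progress.write_text(json.dumps(done))  # crash-safe checkpoint
    dst.write_text("\n\n".join(done))
```

The checkpoint file is what lets a multi-day job survive crashes and API outages: rerunning the script just picks up at the first untranslated chunk.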
“But could you have translated them yourself”, some might say, and yes but we’re 3 sleep-deprived tankies and this is a machine that runs 24/7 haha
It’s such a great real-world example of how LLMs are a practical tool in the class struggle. What would have been a Herculean, if not impossible, task for a small team is now achievable, in automated fashion, within a week. By automating the grunt work of translation and content creation, these tools allow us to break the cultural hegemony of the ruling class and rapidly build our own ideological infrastructure. We can now contest the bourgeois narrative on a scale that was previously simply not possible.
Also, since the yellow tint on GPT images bothers me:

I believe it’s intentional, because GPT is perfectly able to produce non-yellow-tinted images, and it only happens with their model. It gives them an instantly recognizable look.
To undo it, I use Photoshop: create a new layer, fill it with a yellow tone lifted from the image (use the color picker tool), then set that layer’s blend mode to ‘Divide’. Then use the layer opacity slider to remove more or less of the yellow. I find that removing 100% of it makes for a very harsh, bright picture, and this is probably another reason they use the yellow tint: it looks warmer.
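If you don’t have Photoshop, the same ‘Divide’ math is easy to reproduce in Python with PIL and numpy (a rough sketch; the tint value below is just an example, you’d sample it from your own image with a color picker):

```python
import numpy as np
from PIL import Image


def remove_tint(path: str, tint_rgb: tuple, strength: float = 0.8) -> Image.Image:
    """Divide the image by a flat tint color (Photoshop's 'Divide' blend
    mode), then mix with the original; `strength` acts as layer opacity."""
    img = np.asarray(Image.open(path).convert("RGB"), dtype=np.float64) / 255.0
    tint = np.array(tint_rgb, dtype=np.float64) / 255.0
    divided = np.clip(img / tint, 0.0, 1.0)            # the 'Divide' layer result
    out = (1.0 - strength) * img + strength * divided  # the opacity slider
    return Image.fromarray((out * 255).round().astype(np.uint8))


# Example with a hypothetical yellow sampled from a tinted image:
remove_tint("gpt_image.png", tint_rgb=(250, 235, 190), strength=0.8).save("fixed.png")
```

Keeping the strength below 1.0 leaves a little of the warmth in, exactly like lowering the layer opacity.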
Almost certainly intentional, given that neither their DALL-E models nor their Sora models have a yellow tint like this. My guess is that it functions as a watermark equivalent or something.
Maybe, but it could also be a feedback loop from the studio ghibli generations. Those are quite yellow so I think that’s started to seep into the training data [I think…? At least that’s what I’ve heard]
I think OpenAI goes by the “any publicity is good publicity” adage, and while the data contamination theory makes sense on the surface, the model is perfectly able to produce non-yellow images; in fact, even smaller local image-gen models are able to produce specific colors. When you send GPT a prompt for image creation, it rewrites it into something the image tool understands, and I think they inject “yellow tint” or some similar keyword into that prompt. It would also be very easy to auto-add a negative prompt to remove it if they wanted.