I am at the edge of my seat in anticipation!
Don’t worry, Nvidia will give them a billion so they can pay Nvidia a billion for more GPUs. It’s the circle of grift.
Sounds more like money laundering.
If we could cause a mass wave of subscription cancellations we could finish them off. I know no one here is subscribing to them, but if you know anyone who is, urge them to cancel their ChatGPT/OpenAI subscriptions. If you get the chance to give feedback on why you canceled, be sure to mention you’re unhappy OpenAI donated $25 million to Trump and unhappy about the ads.
Please please please let this be true, pleaseeee.
If there were such a thing as a free market it would go bankrupt… unfortunately we have fascist idiots at the helm, so I’m sure they’ll bail it out by cutting the last shred of Medicaid or scrapping school lunches for the poor.

God I love this gif lol
I just wish they had done the sequel while Carl Weathers was still alive.
Burst, you damn bubble. Burst!
I work with AI daily as a programmer, and let me tell you something people aren’t talking about.
People are myopic about the way they’re looking at this whole data centre rush. It’s not about speeding up ChatGPT or anything so prosaic. What they’re doing is building the compute infrastructure to integrate AI into everything.
So what people need to realise is AI != ChatGPT. There are a plethora of different "AI"s that aren’t LLMs. And a lot of those non-LLMs are VERRRRY good at what they do.
They want to deploy a horde of AIs and apply them to all data sets, real-time or stored. They’ll have AIs watching you drive, and fining you, they’ll have marketing designed at warp speed tailored to fuck with you specifically. Recording conversations on the street and processing and aggregating them. They’ll start to tie together government DBs and use AIs to mine them for information.
They are automating control and power, and I’m not seeing it talked about anywhere. They haven’t even begun their fuckery yet.
I’d like to add that this doesn’t even necessarily have to be intentional.
I’m certain the current bubble will pop sooner rather than later, but by that point (and today already as well), all the infrastructure, the data centers etc. will already have been built.
And they will not just disappear.
Some might be scavenged for parts to be sold off, but far more likely, I unfortunately think, is that governments will (be lobbied to) step in, and prevent the loss of hardware, of companies, and especially of jobs. They obviously won’t start the money-burning, consumer-facing ChatGPTs & Co again, so what else can we do with all that hardware that’s sitting there, looking for a purpose?
Exactly the thing the person above me said. Implement the surveillance state at an unprecedented scale and speed, because those GPUs need SOMETHING to do, lest all that capital be wasted.
There will be a bailout, and we’ll all suffer for it.
I think this is the lines they want people to think along. But I think it’s a lot worse. These data centres have very little to do with AI as the public currently imagine it.
I genuinely believe a lot of them aren’t being built to make money. The same way police don’t buy riot gear to make money.
They are building the hardware for a final-stage-boss-level surveillance and control project, and dropping a smoke screen about AI bubbles and ChatGPT while they do it.
Palantir is already pretty integrated in targeting protesters, and I suspect another form of AI is already being used for “catching red light violations” at stops.
There might be some people thinking along those lines, but I’d be careful not to tread into conspiracy-theory territory. To me this seems a lot more like a case of “the bad implications are pretty obvious when you think about it, but we can’t let that stand in the way of making a profit!”.
Yeah it’s definitely verging on conspiratorial. The data centre rush doesn’t really make sense in any other context imo.
I feel like the mask western govs have been wearing for the last 50 years is slipping; the old cunts are still there, and now they have this new tool.
Even before LLMs people were ringing the alarm bells about the inevitable tying together of disparate GOV DBs, AI just gave them a way of utilising the data that didn’t exist before.
Unfortunately I feel like I will be completely vindicated on this one, and I’m not a conspiracy nut at all.
Social scores too. On all the video, on streets, in buildings, on every word captured, every web page visited, every comment made. Half-baked AI from the worst people, like Thiel, will be assigning social credit scores that secretly affect your job prospects, your treatment by police, courts, and government, down to what results search engines show you, or the prices that flash for you specifically on digital price tags or online.
Europe is doing this now too, under age controls, chat control, and “preventing child abuse”; by child abuse they really mean preventing dissent on Israel first and foremost, but everything else too. Their politicians are surrendering their populations to tech for a cut of the info. All while the far right is the only real reform choice, making them inevitable, and then the dissolution of the EU and the fixing of elections too. All because of BS like this, and the cynical far right will just retool it for themselves, make no mistake.
Your point about the far right being the only alternative is so true it hurt me a little inside to read. It’s happening in the UK.
Worse than Orwellian. It’s the Multivac from The Last Question embedded in Big Brother, and only for Big Brother.
It’s like a 90s era episode of The Outer Limits
I want off Mr. Bones’ Wild Ride
OpenAI has been the weakest financial link for a while. Once it falls though… the whole thing implodes.
It’s difficult to know what it’ll look like on the “other side” of the bubble popping. It’ll be very bad though. Maybe afterward we’ll be able to heal. Better be ready to fight… or hunker down.
It’s definitely going to hurt MS a lot, maybe Google, and any company that relies heavily on it.
I don’t see the bubble popping at all.
As a software engineer at a big tech org, there’s no way we’ll ever go back to the world before LLMs. It’s just too good to ignore. Does it replace software engineers? No, not all of them, but some. What previously required 70 engineers might now require 60. Five years from now, you might get by on even fewer engineers.
What could cause the bubble to pop? We’re rolling out AI code at scale, and we’re not seeing an increase in incidents or key metrics going down. Instead, we are shipping more and faster.
So maybe it’s too expensive? This could be the case, but even so, it’s just a matter of time before the cost goes down or a company figures out a workflow to use tokens more conservatively.
“We’re rolling out AI code at scale, and we’re not seeing an increase in incidents or key metrics going down. Instead, we are shipping more and faster.”
Of course you’re seeing nothing but good reports and rising numbers. That’s what bubbles are. Nothing in reality is as good as they’re making the AI market look; it’s all wash trading. No one is actually using these products, so there won’t be many complaints or bug reports, will there? Yeah, it must look really good from the inside looking out.
The reality is that real people hate your shitty broken AI products and want nothing to do with them.
I don’t want AI crammed into all the nooks and crannies either, but companies are using AI to advance productivity in very real ways, not just writing software. Just data analysis alone where you throw a bunch of sales data at an AI and have it spit out some less-intuitive trends that it’d take a team of people to suss out is an actual cost savings that can make-line-go-up.
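To make that concrete, here’s a toy sketch (entirely made-up data and column names, nothing from the thread) of the kind of segment-level cut an analyst, or an AI handed a sales export, might surface: a region whose healthy total hides a weak channel inside it.

```python
import pandas as pd

# Entirely hypothetical sales records; in practice this would be a real export.
sales = pd.DataFrame({
    "region":  ["north", "north", "south", "south", "north", "south"],
    "channel": ["web", "retail", "web", "retail", "web", "retail"],
    "revenue": [1200, 800, 400, 1500, 1300, 1600],
})

# Totals per region look fine on their own...
per_region = sales.groupby("region")["revenue"].sum()

# ...but the region x channel breakdown shows "south" barely sells on the web.
by_segment = sales.groupby(["region", "channel"])["revenue"].sum()

print(per_region)
print(by_segment)
```

At six rows this is trivial to eyeball; the point is that the same groupby pattern across dozens of dimensions is the grunt work being handed to these tools.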
I do agree that it’s a bubble for sure but just like the housing bubble, there is still a lot of underlying value that will stick around after the burst.
I get the AI hate around art. But it’s quite a naïve view (and frankly shows just how little you understand about AI) to call these “broken AI products” because I use AI to write some unit tests for me.
I won’t go into details, but I’m pretty sure you use our product every day without reflecting on whether the code was written with the help of AI or not.
Art is one thing, and I agree. But you make it sound like you’d hate mathematicians who decided to use calculators, or programmers who used the first programming languages. Real programs are built with machine code!!
“We’re rolling out AI code at scale, and we’re not seeing an increase in incidents or key metrics going down. Instead, we are shipping more and faster.”
Anecdotal, but I’ve had exactly the opposite experience as an engineer.
Interesting!
I have gone through my ups and downs. Lately I’ve been more and more convinced. I use Claude Code (Opus 4.5) hooked up to our internal Atlassian and Google Drive MCPs. I of course still have to do a lot of writing (gathering requirements, writing context, etc.), but instead of spending two days coding, I’ll spend half a day on this and then kick off a CC agent to carry it out.
I then do a self review when it’s done and a colleague reviews as well before merge.
And not for architectural work… Rather for features, fixing tech debt, etc.
This also has the benefit of jira tickets being 1000x better than in the pre-LLM era.
I’m primarily using Opus 4.5 as well (via Cursor). We’ve tried pointing it at JIRA/Confluence via MCP and just letting the agent do its thing, but we always get terrible results (even when starting with solid requirements and good documentation). Letting an agent run unsupervised just always makes a mess.
We never get code that conforms to the existing style and architecture patterns of our application, no matter how much we fuss with rules files or MCP context. We also frequently end up with solutions that compromise security, performance or both. Code reviews take longer than they used to (even with CodeRabbit doing a first pass review of every PR), and critical issues are still sneaking through the review process and out to prod.
My team has been diligent enough to avoid any major outages so far, but other teams in the organization have had major production outages that have all been traced back to AI generated code.
I’ve managed to carve out a workflow that does at least produce production-ready code, but it’s hardly efficient:
- Start in plan mode. Define what I need, provide context, and answer any qualifying questions from the model. Once I’m happy with the ‘plan’, I tell Cursor to save a hardcopy to my local machine. This is important, because it will serve as a rolling checkpoint for when Cursor inevitably crashes.
- Have the agent generate any unit tests we’ll need to validate this feature when it’s done.
- Review the generated unit tests and inevitably rewrite them. Tell Cursor to update the plan based on the changes I’ve made to the tests.
- Put the AI in “Ask” (so it doesn’t touch the code just yet) and tell it to summarize the first step of the plan. This makes sure that the step I care about is in the model’s context window so it doesn’t get confused or over-extend.
- Pop back to agent mode and tell the model to proceed with step 1 and then STOP.
- Review the model’s output for any issues. At this stage I’ll frequently point out flaws in the output and have the model correct them.
- Back to “Ask” mode, summarize the next step of the plan.
- Execute the next step, review the output, ask for changes, etc
- Repeat until all steps are complete.
- Run the unit tests, then, if there are failures, have the model try to fix those. 50% of the time it fixes any issues encountered here. The other 50% it just makes an enormous mess and I have to fix it myself.
- Once the unit tests are all passing, I need to review all of the generated code together to further check for any issues I missed (of which there are usually several)
- When I’m finally satisfied, I tell the agent to create the PR and the rest of the team very carefully reviews it.
- PR is approved and off we go to QA.
This is almost always slower than if I’d just written the code myself and hadn’t spent all that extra time babysitting the LLM. It’s also slower to debug if QA comes back with issues, because my understanding of the code is now worse than if I’d written it myself.
I’ve spoken about this in other comments, but I’m going to repeat it again here because I don’t see anyone else talking about it: When you write code yourself, your understanding of that code is always better. Think of it like taking notes. Studies have shown over and over that humans retain information better when they take notes — not because they refer back to those notes later (although that obviously helps), but because by actively engaging with the material while they’re absorbing it, they build more connections in the brain than they would by just passively listening. This is a fundamental feature in how we learn (active is better than passive), and with the rise of code generation, we’re creating a major learning gap.
There was a time when I could create a new feature and then six months later still remember all of the intimate details of the requirements I followed, the approach I took, and the compromises I had to make. Now? I’m lucky to retain that same information for 3 weeks, and I’m seeing the same in my coworkers.
When the dot com bubble popped it’s not like the internet went away. Everything you’re saying also applies to the internet, we didn’t go back to the way the world was before the internet.
Yet the dot com bubble popped.
The long-term viability of a technology does not indicate whether there’s “irrational exuberance” in the short term. Buying up GPUs that’ll depreciate in a few years, when there won’t even be power to run them in that time frame? Yup, it’s a bubble, and it will pop. That doesn’t mean the tech will go away. It will just be used in more reasonable ways and developed over the next decade, instead of “it will replace all jobs in field X within six months” while wasting cycles jamming it into everything to create numbers about its usage constantly rising by huge amounts.
Yeah that’s a very good point! Thanks!
The Dot Com bubble popping did not mean the end of the internet, just the end of “we’ll invest in any company that’s doing something on the internet.” It’ll be like that.
The pop won’t be so much about effectiveness as about profitability. AI costs much more than it makes. When it pops, only those firms willing to run it at a loss will be able to offer it.
Look to the dotcom bubble, my friend. As someone who was there: this bubble is gonna pop.
Finally some good news!
You say that, but then the government will bail them out, and the Fed will be induced to underwrite their bonds and/or buy them up in monetary easing, or the like.
Not for nothing either, the admin would extract concessions and payouts from OpenAI for it.
You’re almost certainly spot on, I was just trying to be optimistic.
The problem with this theory is that the government is gutted, broke, and already reaching its limits on borrowing and printing thanks to an economy so fragile it’s made of glass. They might actually not be able to afford a meaningful bailout for the AI companies.
This means they either try to anyway and everything collapses into an economic black hole. Or they hold off and let the AI companies collapse… and we still fall into an economic black hole but just a differently shaped one.
While I believe this party will reach a borrowing limit of sorts before they lose power, and de facto default by printing new money to pay off their debts, thereby watering down the value of the dollar, we still have a ways to go before that.
Because while we are untrustworthy, every other currency is also untrustworthy, and it’s not yet clear to investors how this is going to end up. They think things are going great; we are embracing lies to keep the money flowing. Only too late will most realize anything.
This administration will take each and every opportunity to borrow as much money as they can. Personally benefiting from every dollar they spend. We have a ways to go before trust is dead with investors I believe.
OpenAI all of a sudden

1 like = 1 dollar removed from OpenAI
And yeah, this should be enough likes for them to completely run out of actual real money.
They will put out a nice financial report this year and IPO next year to return money to investors and screw the people who buy stock from them.
Eighteen months? They have that much lifeline? Wow.
“This is wework on steroids”
The entire industry is wework, not just OpenAI.
I know eighteen hours is a little out of reach. Can it be eighteen days instead?
The sooner, the better.