

This. Masayoshi Son is selling furniture to YOLO more money into OpenAI.




I have no opinion, but I have to note that I keep reading "KeepAssXC …"


Pivot to bio-computing


Beff Jezos and friends have produced something other than tweets. Possibly. Maybe.


Looks like a hobby project of someone who has very particular views about computers.
I'm not sure what kind of neural network they're planning to run on a custom FPGA-based GPU with 4 GB of RAM shared with the CPU.


"How dare you suggest that we pivoted to SlopTok and smut because of money, if something that we totally cannot do right now is more lucrative?"


If you use your business to "do business", it's nice to have good catering.


Simon Willison writes a fawning blog post about the new "Claude skills" (which are basically files with additional instructions for specific tasks for the bot to use).
How does he decide to demonstrate these awesome new capabilities?
By making a completely trash, seizure-inducing GIF…
https://simonwillison.net/2025/Oct/16/claude-skills/
He even admits it's garbage. How do you even get to the point where you think that's something you want to advertise? Even the big slop-monger companies manage to cherry-pick their demos.
Just felt like I got an aneurysm there.
(in unrelated things, first)


They will all be just this one:


The Slack CEO responded there that it was all a "billing mistake" and that they'll do better in the future, and people are having none of it.
A rare orange site W, surprisingly heartwarming.


I call it "scientific" racism. To me, that gets the point across best.


I wondered if this should be called a shitpost or an effortpost, then I wondered what something that is both would be called, and I came up with "constipationpost".
So, great constipationpost?


Was jumpscared on my YouTube recommendations page by a video from AI safety peddler Rob Miles and decided to take a look.
It talked about how it's almost impossible to detect whether a model was deliberately trained to produce some "bad" output (like vulnerable code) for some specific set of inputs.
Pretty mild as cult stuff goes, mostly anthropomorphizing and referring to such an LLM as a "sleeper agent". But maybe some of y'all will find it interesting.


It's two guys in London and one guy in San Francisco. In London there's presumably no OpenAI office; in SF, you can't be in two places at once, and Anthropic has more true believers/does more critihype.
Unrelated: a few minutes before writing this, a bona fide cultist replied to the programming dev post. A cultist with the handle "BussyGyatt @feddit.org". Truly the dumbest timeline.


Yeah, it didn't even cross their mind that it could be wrong, because it looked ok.


Shamelessly posting a link to my skeet thread (skeet trail?) about my experience with a (mandatory) AI chatbot workshop. Nothing that will surprise regulars here too much, but if you want to share the pain…
https://bsky.app/profile/jfranek.bsky.social/post/3lxtdvr4xyc2q


That's how you get a codebase that kinda sorta works, in a way, but is more evolved than designed: full of security holes, slow as heck, and disorganized to the point where it's impossible to fix bugs, add features, or understand what's going on.
Well, one of the ways. *glancing at the code I'm responsible for, sweating profusely*


resulting in one person getting bacon added to their ice cream in error
At first, I couldn't believe that the staff didn't catch that. But thinking about it, no, I totally can.


Here's hoping that the more time he spends gooning, the more he'll leave the rest of us alone. *crosses fingers*
(emphasis mine)
Look at this goober, trying to be THE cult leader.