

I mean, arguably an open FTP server would have been better because how many kids these days could actually use an FTP client?
I don't know. Based on what they're describing, I think it would probably fail in the direction of being deeply boring rather than really getting into the wild nonsense that the concept deserves. Now, it may be salvageable with the introduction of some robotic silhouettes, but given these people's penchant for never shutting the hell up, even that may not be a good fit.
Gotta get those clicks where you can, amirite?
Damn you, Scott! Stop making me agree with people who created blockchain-based dating apps!
Neopets at least brought joy to a generation of nascent furries. Copilot is fixing to have the exact opposite impact on internet infrastructure.
Pretty much. Our friend up top (diz/OP) has made a slight hobby of poking the latest and greatest LLM releases with variants of these puzzles to try and explore the limitations of LLM "cognition".
The way rationalists use "priors" and other Bayesian language is closer to how cults use jargon and special meanings to isolate members and tie them more closely to the primary information source (the cult leader). It also serves as a way to perform allegiance to the cult's ideology, which is, I think, what's happening here.
Grumble grumble. I don't think that "optimizing" is really a factor here, since a lot of times the preferred construct is either equivalent ("such that") or more verbose ("a nonzero chance that"). Instead it's more likely a combination of simple repetition (like how I've been calling everyone "mate" since getting stuck into Taskmaster NZ) and identity performance (look how smart I am with my smart-people words).
When optimization does factor in, it's less tied to the specific culture of tech/finance bros than it is a simple response to the environment and technology they're using. Like, I've seen the same "ACK" used in networking and among older radio nerds because it fills an important role.
What exactly would constitute good news about which sorts of humans ChatGPT can eat?
Maybe like with standard cannibalism they lose the ability to post after being consumed?
Maybe "storyteller" would be more accurate? Like, the prompt outputs were pretty obviously real, and I can totally buy that he asked it to write an apology letter while dicking around waiting for Replit to restore a backup, but the question becomes whether he was just goofing off and playing into his role to make the story more memeable or whether he was actually that naive.
Ferryman 1 calls to Gwaihir, the Lord of Eagles, for aid, and the Windlord answers to fly him back across.
The way it goes downhill is honestly glorious, because it seems so proud of itself when the real magic is that the boatmen can magically teleport back to the right bank under certain arcane circumstances.
Ouch. Also, I'm raging and didn't even realize I had barbarian levels.
I feel like the greatest harm that the NYT does with these stories is not allowing the knowledge of just how weird and pathetic these people are to be part of the story. Like, even if you do actually think that this nothingburger "affirmative action" angle somehow matters, the fact that the people making this information available and pushing this narrative are either conservative pundits or sad internet nazis who stopped maturing at age 15 is important context.
Honestly I'm surprised that AI slop doesn't already fall into that category, but I guess as a community we're definitionally on the farthest fringes of AI skepticism.
From the Q&A:
Q: I feel like this is just a dressed-up/fancy version of bog-standard anti-AI bias, like the people who complain about how much water it uses or whatever. The best AI models are already superhuman communicators; it's crazy to claim that I shouldn't use them to pad out my prose when I'm really more an ideas person.
Wait what?
like the people who complain about how much water it uses or whatever.
I just...
or whatever.
Lol. Lmao. I laugh to not cry.
I feel like this response is still falling for the trick on some level. Of course it's going to "act contrite" and talk about how it "panicked", because it was trained on human conversations, and while that no doubt included a lot of Supernatural fanfic, the reinforcement learning process is going to focus on the patterns of a helpful assistant rather than a barely-caged demon. That's the role it's trying to play, and the work it's cribbing the script from includes a whole lot of shitposts about solving problems with "rm -rf /".
Ah, the eternal curse.
"You sound like you lead a very interesting life"
"...yeeeeesss?" (Closes 50 Wikipedia tabs that relate to literally nothing you intend to do)
Copy/pasting a post I made in the DSP driver subreddit that I might expand over at morewrite, because it's a case study in how machine learning algorithms can create massive problems even when they actually work pretty well.
It's a machine learning system, not an actual human boss. The system is set up to try and find the breaking point: if you finish your route on time it assumes you can handle a little bit more, and if you don't it backs off.
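To make that loop concrete, here's a minimal sketch of that kind of breaking-point search, basically additive increase with back-off. The function name, step sizes, and stop counts are my own illustrative assumptions, not anything from the actual routing system:

```python
# Hypothetical sketch of a "find the breaking point" route sizer.
# The +/- step sizes and the on-time signal are assumptions for
# illustration, not the real routing logic.

def adjust_route_size(stops: int, finished_on_time: bool) -> int:
    """Nudge the next route's size based on the last outcome."""
    if finished_on_time:
        # Driver kept up, so the system assumes there's slack and adds more.
        return stops + 5
    # Driver fell behind, so it backs off a little.
    return max(1, stops - 5)

# The ratchet effect: every "finished on time" day grows the route,
# so anything that keeps you on time (like skipping breaks) quietly
# becomes part of the baseline.
stops = 150
for on_time in [True, True, True, True, False]:
    stops = adjust_route_size(stops, on_time)
    print(stops)  # 155, 160, 165, 170, 165
```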
The real problem is that everything else in the organization is set up so that finishing your routes on time is a minimum standard, while the algorithm that creates the routes is designed to make doing so just barely possible. Because it's not fully individualized, this means that doing things like skipping breaks and waiving your lunch (which the system doesn't appear to recognize as options) effectively push the edge of what the system thinks is possible out a full extra hour, and then the rest of the organization (including the decision-makers about who gets to keep their job) turns that edge into the standard. And that's how you end up where we are now, where actually taking your legally protected breaks is at best a luxury for top performers or people who get an easy route for the day, rather than a fundamental part of keeping everyone doing the job sane and healthy.
Part of that organizational problem is also in the DSP setup itself, since it allows Amazon to avoid taking responsibility or accountability for those decisions. All they have to do is make sure their instructions to the DSP don't explicitly call for anything illegal, and they get to deflect all criticism (or LNI inquiries) away from themselves and towards the individual DSP, and if anyone becomes too much of a problem they can pretend to address it by cutting that DSP.
It feels very strange to see this kind of statistic get touted, since a 50% success rate would be absolutely unacceptable for one of those software engineers, and nothing suggests that the AI would eventually get there if given more time.
Rather, the usual fail state is to confidently present a plausible-looking product that absolutely fails to do what it was supposed to do, something that would get a human fired so quickly.