GitHub could ban a lot of bots if they took a close look at that repo and who starred it. Bet they will. Mmhmm. Yup. Any time now.
“In the fast-paced world of artificial intelligence, I’m constantly finding new ways to harness the power of technology. One of the most captivating aspects of AI models like GPT is their ability to “hallucinate” – generating completely new ideas and concepts that go beyond mere data processing. This capability underscores AI’s potential to create, not just analyze.”
First time I am seeing someone sell hallucination as a “feature not a bug”
~24,000 stars as of now. The stars might be bots too, or techbros on AI overdrive.
No project legitimately gets that many stars while having so few issues open
That holds when the repo is of consequence: repos with ~20K stars generally have ~1,000 open issues. But even if they're not bots, the people starring this could be trigger-happy from AI coding and star everything related to vibe coding.
Genius

Closed as not planned
so much brainrot they burned themselves by accident
For those wondering, yes, that was a real issue submitted. There are other issues, and they are great.
Pure gold
Owner does not plan on getting their brain to function properly
I’m fairly certain that the account is run by a bot and every single repo and line of code and text (readme, comments, etc.) are LLM slop. It’s also probably part of a much larger bot network. Possibly an AI company experimenting with their latest slop-bots.
Privacy-First: No cameras required - uses WiFi signals for pose detection
That’s not how privacy works.
We won’t film you having sex, we will just know that you are having sex and the poses you prefer but that is fine.
I have a visceral “AI” sensor that triggers when I see these:
“Rust Implementation (v2)”
“Performance Benchmarks (Validated)”
Human beings don’t self-validate explicitly like that. AI loves doing it.
You generate code, there’s a bug, you ask for a fix, and your AI of choice will always output:
*** Fix build issue ***
*** End fix ***
and then call it “Version 2 (Validated)”.
Sometimes it’s more subtle, but you can feel it; it loves adding “confirmed”, “working”, “validated”.
👉: mission acquired
👊: bugs squashed
👍: code validated
👏: congratulations on this exquisite piece of software
✍️: ready to do more!
My sensor is much simpler. If I see emoji in headings or bulleted lists, I assume it’s shit. It might be AI slop, or it might just be kids getting overexcited with the little pictures, but both deserve suspicion and scrutiny.
If a bunch of the emoji don’t even make sense it can get in the bin.
Ahhh idk, I saw a lot of genuine repos do emojis, at least for headings. Even before LLMs.
I like them 'cause with the right amount, it makes a README easier to parse when quickly scrolling over it.
My changelog generation tools output emojis because our lives are too short to not use 🚀
As an ancient husk of a person, it all looks crack-addled to me. I don’t really see how you can parse out headings from emoji because their usage isn’t consistent.
This comment is so true 🚀🚀🚀
💪
I like putting the little pictures in my readmes sometimes. In my biologically generated repositories. Please don’t discriminate against neat little pictures you can just put in text 🐑.
me too!
admittedly imo they are being overused now though
Your whimsy hurts me
This comment has been confirmed and validated by an actual human being 👍
“I’m confident in my solution.”
Alarm bells.
I have a project with a bunch of compose files that define the services I self host. I “deploy” the project by sshing into my server and doing “git pull” which means I’m often making changes that don’t get tested before committing to source control. As a result I have long chains of commits like:
- refactor the sproingy widget
- refactor the sproingy widget v2
- refactor the sproingy widget working
- maybe the sproingy widget works this time?
- ok finally found the issue with refactor sproingy widget
- fix formatting of sproingy widget
And now I’m wondering if I’ve been an llm this whole time
No the AI would have called it fixed, “production-ready,” committed, and pushed after the first refactor.
Let me introduce you to Ansible
Make your changes in a new branch and rebase/squash when you push it to main.
This also means modifying your “git pull” command to pull the correct branch. A small change perhaps, but maybe harder than just committing to main lol. I had a similar problem with GitHub Actions; it was hard to test without messing up the main repo history.
Why not just edit the YAML directly on the server via a command-line text editor or SSHFS and then push from there when it works?
Also the repo image
Forgot to put “make sure the project compiles” in his .md files. What an amateur.
anyone gonna cop the $1500/hour session for agentic engineering

I’ll charge you some beer and a pizza. You can donate the remaining $1480
My impostor syndrome suddenly vanished :)
I am no programmer and understand almost nothing of the documentation and yet somehow I can tell it’s all bullshit.
It reads like a kid making up words in an attempt to sound smart mixed up with the description for a shady Amazon product.
I guess it’s reading comprehension. Utter bullshit reeks the same regardless of the field.
That’s absolutely awesome!
I’m gonna start referring to this as ‘smelling AI slop’
You got the sense to sniff it out, even without programming experience. And that’s a damn good sense to have these days 👍
All of YC got bamboozled by this slop.
Apparently, these stars indicate that this is a good joke.