AI Bros were really like “Reddit is one of our very few sources of usable data. What if we poisoned it too? 🤪”
Way to go, guys! Have fun with your degenerate data sets, and the resulting consanguine models that are 100% unusable as a result 😘
Reddit is not dying fast enough
Same goes for OpenAI, Oracle, Microsoft, Meta, X, USA, etc.
I sometimes wonder how prevalent bots are on Lemmy. On one hand, the barrier for entry might be lower and the effectiveness of bans harder to gauge. On the other, I’d think we’re a smaller, less attractive target.
Either way, the potential to accuse dissenters of being bots or paid actors is a symptom of the general toxicity and slop spilling all over the internet these days. A (comparatively) few people can erode fundamental assumptions and trust. Ten years ago, I would’ve been repulsed by the idea of dehumanising conversational opponents that way (which may have been just me being more naive), but today I can’t really fault anyone.
In terms of risk assessment (value÷effort), I’m inclined to think something with the reach of Ex-Twitter or reddit would be a more lucrative target, and most people here actually are people—people I disagree with, maybe, but still a human on the other side of the screen. Given the niche appeal, the audience here may overall be more eccentric and argumentative, so it’s easy to mistake genuine users for propaganda bots instead of just people with strong convictions.
But I hate that the question is a relevant one in the first place.
We are the web. There is no web without the we.
It is ultimately humans who add value to the internet. We can make decisions, take action, have bank accounts; bots, for the most part, still can’t. If we keep growing, there will come a time when swaying opinions, pushing advertisements or driving dissent will cross that value/effort threshold, especially with the effort term shrinking more every day.
I think that we are genuinely witnessing the end of the internet as we know it, and if we want meaningful online contact to persist after this death, then we should come up with ways that communities can weather the storm.
I don’t know what the solution is, but I want to talk and think about it with others that care.
On the individual level we can maybe fortify against the reasons that might make someone want to extract that value.
- Being a principled conscious consumer makes you a less likely target for advertisement
- Avoid ragebait and clickbait, and develop a good epistemic bullshit filter along with media literacy; this makes it more difficult to lie to you or to provoke outrage.
- Unfortunately, be selective with your trust. How old is the user account? Are the posting hours normal? Does the user come across as a genuine human being that values discussion and meaningful online contact?
- Be authentic and genuine. I don’t know how else to signify that I am real (shoutout to the þorn users)
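Out of curiosity, the trust heuristics in that list could even be sketched as a toy scoring function. To be clear, everything below is made up for illustration: the field names, weights, and thresholds are my own assumptions, not any real detection system.

```python
from dataclasses import dataclass

@dataclass
class Account:
    age_days: int           # how old is the account?
    active_hours: set[int]  # hours of the day (0-23) the account posts in
    replies_engaged: int    # comments that actually respond to other people
    total_comments: int

def trust_score(a: Account) -> float:
    """Toy heuristic combining the checklist above into a 0..1 score.
    The weights are arbitrary illustrations, not a real detector."""
    score = 0.0
    # Older accounts are (weakly) more trustworthy.
    score += min(a.age_days / 365, 1.0) * 0.4
    # Humans sleep: posting in all 24 hours of the day is suspicious.
    score += (1.0 - len(a.active_hours) / 24) * 0.3
    # Genuine discussion: what fraction of comments engage with others?
    if a.total_comments:
        score += (a.replies_engaged / a.total_comments) * 0.3
    return round(score, 3)

human = Account(age_days=800, active_hours=set(range(8, 23)),
                replies_engaged=90, total_comments=120)
bot = Account(age_days=10, active_hours=set(range(24)),
              replies_engaged=5, total_comments=500)
print(trust_score(human) > trust_score(bot))  # the human scores higher
```

Obviously any bot author could game a fixed formula like this, which is rather the point: heuristics help, but they’re no substitute for judgment.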
I would love to hear what others think.
are the posting hours normal?
Hey, no judging my sleep ~~schedule~~ arbitrary times when biological necessity triumphs over all the fun things I could do while awake!
Serious reply:
On the individual level we can maybe fortify against the reasons that might make someone want to extract that value.
On the collective level, we should do something about the mechanisms that incentivise that malicious extraction of value in the first place, but that’s a whole different beast…
Being a principled conscious consumer makes you a less likely target for advertisement
Agreed, though we should also stress that “less likely” or “unlikely” doesn’t mean “never” and that we’re not immune against being influenced by ads. That’s a point I’ve seen people in my social circles overlook or blatantly ignore when pointed out, hence me emphasising it.
media literacy
This is probably one of the most critical deficits in general. Even with the best intentions, people make mistakes, and it’s critical to be aware of that and able to compensate for it.
Unfortunately, be selective with your trust.
Same as media literacy, I feel like this is a point that would apply even in a world where we’re all humans arguing in good faith: Others may have a different, perhaps limited or flawed perspective, or just make mistakes — just as you yourself may overlook things or genuinely have blind spots — so we should consider whose voice we give weight in any given matter.
On the flipside, we may need to accept that our own voice might not be the ideal one to comment on something. And finally, we need to separate those issues of perspective and error from our worth as persons, so that admitting error isn’t a shame, but a mark of wisdom.
Be authentic and genuine
That’s the arms race we’re currently running, isn’t it? Developers of bots put effort into making them appear authentic—I overheard someone mention that their newest model included an extra filter to “screw up” some things people have come to consider indicators of machine-generated texts, such as these dashes that are mostly used in particular kinds of formal writing and look out of place elsewhere.
If at all, people tend to just use a hyphen instead - it’s usually more convenient to type (unless you’ve got a typographic compulsion to go that extra step because that just looks wrong). And so the dev in question made their model use fewer dashes and replace the rest with hyphens to make the text look more authentic.
I wanted to spew when I heard that, but that’s beside the point.
So basically, we’d have to constantly be running away from the bots’ writing style to set ourselves apart, even as they constantly chase our style to blend in. Our best weapon would be the creative intuition to find a way of phrasing things other humans will understand but bots won’t (immediately) be able to imitate.
Being creative on demand isn’t exactly a viable solution, at least not individually, and coordinating on the internet is like herding lolcats, but maybe we can work together to carve out some space for humanity.
Thanks for your comments. I agree with everything you said, especially that these traits are desirable for broader life IRL. In a way, web culture is a reflection of our own cultures, just more mixed, extreme, amplified, and with a good dose of parasociality. I desperately want people to break free of their cycles. Think, talk, discuss, empathize and form communities; use your free will for good, dammit. These are the real antidotes that will enable the cultural shift that will allow us to reject the smothering of the human spirit in the current way of life.
Anyway, it is a terrible thing that there is an arms race to be authentic. This really ought to be solved on the user registration side. And also yes, saying something profound with hidden meaning through creative intuition is great; I write poems sometimes. But it’s not the solution to authenticity online.
Reddit has shown through its actions that it’s more interested in banning real users than bots, and wants to protect bots from being identified and called out by users, so it’s not that surprising they’ve been able to do this.
Everyone is cooked, you are all cooked
Thanks for making the problem worse, fuck you too man.
And yet I get constantly shadowbanned there just for using a VPN…
I think reddit likes bots more than it likes real users.
Well why not, bots inflate their numbers more.
And the content they produce is easy to control.
🛜💀🛜
Bots nowadays use residential proxy networks. When people use a free VPN or other shady software, they might become part of the network, and bad actors can route traffic through their devices.
It’s quite hard to buy residential proxies, though. Almost every company selling them cheats and lies about the product, and the IPs they have are absolute garbage because so many people have already ruined their reputation.
This guy just openly admitted to shitting in the global punch bowl.
It would really be a shame if ~~someone~~ everyone sent an army of bots to antagonize him at every waking moment of the day.

Can “AI” slop really be so much worse than endless capitalist/state propaganda? Probably an improvement.
“Reddit is just you, me, and /u/Karmanaut”
I never thought I’d see the day when this adage would become true again, let alone in this way 😂
What is Clawdbot farm?
Clawdbot is an AI that takes full control of a PC: it can open browsers, read pages, send emails, delete files, operate the CLI, install programs; anything you can do on a PC, it can do. A farm is a group of PCs/servers.
So this is a group of AI-run computers being used for content manipulation on Reddit.
Why doesn’t Reddit ban them? On Lemmy, even vote manipulation is grounds for a ban.
Money. Reddit makes money by selling advertising.
They don’t care whether it’s quality or not; more comments/posts equal more money. Capitalism 101.
Reddit’s advertising clients would see that the click-through rate is shit, which dictates the price they’re willing to pay for the advertisement.
Have you ever clicked on a Reddit ad? Many users have shit stats, and if those were used for detecting bots, then bots would just click ads, Reddit would lose advertiser trust, and that would damage them even more.
This has nothing to do with users’ stats. This has to do with how advertisers judge whether it’s worth placing ads on a particular platform.
They’re not making money off these bots though.
They don’t care, to a certain extent. But there is a threshold: if you get too many bots, companies advertising on your platform will notice a fall in sales. And if sales drop on your platform, the money stops. Because bots don’t buy things; real humans do.
What is the threshold? I don’t know, I didn’t stay at a Holiday Inn last night.
There’s a significant overlap between the most advanced AI bots and the most dull humans.
I guess because it’s difficult to detect.

Every*
Everyone is cooked because a bot can write a comment on a bot platform and not get banned? LOL
What does this asshole do with these bots, run influence operations? For whom? What do we know about which influence operations are hired for which interests? Forget the government ones for a moment, what about commercial interests?
Toxic chemicals, for instance: if you mention one, it gets flagged and sent to agents who cycle through fake accounts, backed up by bots that vote with them, to argue endlessly.
It’s like mentioning Voldemort. Try it: talk trash about aspartame on Reddit, or Roundup, or atrazine, or god forbid nuclear energy. They’ve got lots of real-world dupes on the latter; the decades-long influence operations have borne fruit. But they have influence agents for all of those topics, triggered on keywords. The worst are on fluff pieces, propaganda their PR firms or whoever make and then post; those will be overrun, and the agents will mass-flag you if you strongly argue against their bullshit.
Try it, talk trash about aspartame on reddit, or roundup, or atrazine, or god forbid nuclear energy,
LOL I’ve been there! But I feel like the average redditor is too brainwashed to notice. People shill for free.
What does this asshole do with these bots, run influence operations? For whom? What do we know about which influence operations are hired for which interests? Forget the government ones for a moment, what about commercial interests?
If it’s the same guy that was posted about a while ago, they do it for (blackhat) marketing. The bots mostly post comments in threads that look like a normal discussion, but where the goal is to move people away from one solution to another one. Imagine if somebody asks whether a piece of software is good, a bot then replies that they have not heard good things about it, and another bot chimes in and says “yeah I have been using <other software> instead”.
They’ll probably also make manual posts claiming how good something is, put it in their control panel and a single bot will post it, while the other bots chime in with upvotes and discussion.
Add a bit of logic to chime in on unrelated posts to make the account look more legit and you got yourself an army.
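The playbook described above (one bot seeds a talking point, others chime in to fake a consensus) is simple enough to mock up. Purely as an illustration of the coordination pattern: every name, phrase, and function here is invented for the sketch, and nothing talks to a real site.

```python
# Hypothetical sketch of the coordination pattern described above:
# a seeder bot disparages the target product, supporter bots reply
# recommending the alternative. "Posting" is just building a list.

SHILL_LINES = [
    "I have not heard good things about {target}.",
    "Yeah, I switched to {alt} and never looked back.",
    "+1 for {alt}, way more reliable in my experience.",
]

def run_campaign(target: str, alt: str, bots: list[str]) -> list[tuple[str, str]]:
    """Return the (bot, comment) thread the farm would produce."""
    thread = []
    seeder, *supporters = bots
    # First bot seeds the doubt about the target product.
    thread.append((seeder, SHILL_LINES[0].format(target=target, alt=alt)))
    # Remaining bots chime in, steering toward the alternative.
    for bot, line in zip(supporters, SHILL_LINES[1:]):
        thread.append((bot, line.format(target=target, alt=alt)))
    return thread

thread = run_campaign("SomeSoftware", "OtherSoftware", ["bot_a", "bot_b", "bot_c"])
for author, comment in thread:
    print(f"{author}: {comment}")
```

The depressing part is how little logic this takes; the “chime in on unrelated posts to look legit” step is just more canned lines on a timer.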
This gives me so many ideas
It may be 0.5% now, but Reddit will catch up and ban all the bots eventually; they find bots pretty easily. The ones they don’t hard-ban are the propaganda ones.
Yeah it’s kinda strange complaining about “AI” slop when the entire website is overrun with disinformation, propaganda, etc.
No, they don’t. They’re pretty good at gatekeeping humans, though.