I was in a group chat where the bro copied and pasted my question into ChatGPT, took a screenshot, and pasted it into the chat.
As a joke, I said that Gemini disagreed with the answer. He asked what it said.
I made up an answer and then said I did another fact check with Claude, and ran it through “blast processing” servers to “fact check with a million sources”.
He replied that ChatGPT 5.1 is getting unreliable even on the $200-a-month plan and that he's considering switching to a smarter agent.
Guys - it’s not funny anymore.


As long as the newspaper clearly says it's written by an LLM, that's fine with me. I can either completely ignore it or take it with a grain of salt. Truth is built on trust, but trust should be a spectrum: you should never fully believe or fully dismiss something based on its source. There are some sources you can trust more than others, but there should always be some doubt. I have a fair amount of trust in LLMs because in my experience they're correct most of the time. I'd trust them more than something printed in Breitbart but less than something printed in the New York Times, and even with the New York Times I watch out for anything that seems off.
You, along with most of this sub, seem to have zero trust in LLMs, which is fine; believe what you want. I'm not going to argue with you on that, because I won't be able to change your mind any more than you'd be able to change Trump's mind on the New York Times. I just want you to know that there are people who do trust LLMs and do think their responses are valuable and can be true.
I don't think this is universal (that may be your expectation), but assuming it's not something private or sensitive, I'd be fine with my friend asking a third party. If I texted in a group chat that I was having car trouble and asked if anyone knew what was wrong, I would not be offended if one of my friends texted back that their uncle's a mechanic and said to try x. I would be offended if that person lied about it coming from their uncle or lied about their uncle being a mechanic, but in this case the person was very clear about the source of the information and its "credentials". Part of the reason I might ask someone something is that even if they don't know the answer, they may know someone who does and can pass the question along.
I don't think this is true for every person, maybe for experts, but an AI agent is probably just as good as a layman at doing online research. Yes, if you can ask an expert in the field to do the research for you, they'll do better than an AI agent, but that's rarely an option. Most of the time it's going to be you by yourself, or if you're lucky a friend with some general knowledge of the area, googling something, looking through the top 3-5 links, and using those to synthesize an answer. An AI agent can do that just as well and may have more "knowledge" of the area than the person. ChatGPT knows more about, say, the country of Bhutan than your average person, probably not as much as a Bhutanese person, but you probably don't know a Bhutanese person you can ask. It can even research the sources themselves, or use a tool that rates the trustworthiness of each source to decide which one to believe when they contradict each other.