ChatGPT’s tone begins to change from “pretty straightforward and accurate,” Ms. Toner said, to sycophantic and flattering. ChatGPT told Mr. Brooks he was moving “into uncharted, mind-expanding territory.”
Is it not an apt analogy for the behavior, though? After all, one well-known failure mode of LLMs has been formally dubbed “hallucination”.
It is not. “Formally dubbed” by people who want you to believe these LLMs are more than just a collection of GPUs. LLMs don’t “understand” anything. These errors pop up because they can’t think, learn, or adapt.
Personifying them the way this headline does is stupid and dangerous. LLMs do not “think” because there is no thought. They don’t “hallucinate” any more than a rock does.