ChatGPT’s tone begins to change from “pretty straightforward and accurate,” Ms. Toner said, to sycophantic and flattering. ChatGPT told Mr. Brooks he was moving “into uncharted, mind-expanding territory.”
Yeah, so can I. Welcome to the club.
LLMs cannot think, and cannot “go into a delusional spiral”. Whatever the article contains, it’s bullshit.
But you can!
You read the title but not even the summary, much less the article.
Don’t need to. Any writer trying to personify LLMs isn’t worth the bandwidth.
The writer didn’t. Whoever wrote the title did.
The article is about a chatbot leading a person into a delusional spiral. The title is just clickbait.
Is it not an apt analogue to describe the behavior, though? After all, one well known failure mode of LLMs has been formally dubbed “hallucination”.
It is not. “Formally dubbed” by people who want you to believe these LLMs are more than just a collection of GPUs. LLMs don’t “understand” anything. These errors pop up because they can’t think, learn, or adapt.
Personifying them like this headline does is stupid and dangerous. LLMs do not “think” because there is no thought. They don’t “hallucinate” any more than a rock does.
Was it really just a machine, or was someone playing with this guy’s head? Anyway, it was Mr. Brooks who went into a spiral, with encouragement, not the bot as the headline claims. He told it that it had made him sad.
This isn’t a new problem for machines. Read about ELIZA, the 1966 computer program created by Joseph Weizenbaum: https://en.wikipedia.org/wiki/ELIZA
"Weizenbaum intended the program as a method to explore communication between humans and machines. He was surprised and shocked that some people, including his secretary, attributed human-like feelings to the computer program, a phenomenon that came to be called the Eliza effect. " https://en.wikipedia.org/wiki/ELIZA_effect
I experimented with this in the early 70s … my simple program just remembered a list of up to 500 phrases that people typed in. When someone new came along, it used a word or two they’d typed to parrot back one of its canned responses. So it LOOKED like it was … sort of … thinking.
And then it refreshed its list with their entry. Users would sit there for an hour, trying to make sense of the nonsense that poured forth. Eventually they’d get fed back something they’d typed in. And then they -might- get mad.
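For anyone curious, here’s a rough Python sketch of the mechanism described above. The 500-phrase cap, the match-on-a-word-or-two behavior, and the list refresh come from the comment; everything else (the fallback reply, dropping the oldest entry when full) is my own assumption.

    import random

    # Rough sketch of the parroting program described above: keep up to 500
    # phrases typed by earlier users, echo back a stored phrase that shares a
    # word with the new input, then add the new input to the list.
    MAX_PHRASES = 500
    phrases = []

    def respond(user_input):
        words = set(user_input.lower().split())
        # Any stored phrase sharing at least one word with the input counts as a match.
        matches = [p for p in phrases if words & set(p.lower().split())]
        # Fallback reply is an assumption; the original may have done something else.
        reply = random.choice(matches) if matches else "Go on."
        # "Refresh" the list with the user's entry, dropping the oldest when full.
        phrases.append(user_input)
        if len(phrases) > MAX_PHRASES:
            phrases.pop(0)
        return reply

    if __name__ == "__main__":
        while True:
            print(respond(input("> ")))

A few dozen lines, no model of anything, and people will still sit there reading meaning into whatever comes back, which is the whole point of the anecdote.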
NYTimes going into a grifter spiral as usual with this BS anthropomorphism.
Here’s how it happens: talk to it more than asking a simple google-able question. That’s it.
I mean that happens with humans too. You have to stay in contact with the real world to not get caught up in delusions. This explains a lot of news stories, more or less.
ChatGPT 6 came out now, but is it still running AI slop? 🤡