ChatGPT’s tone begins to change from “pretty straightforward and accurate,” Ms. Toner said, to sycophantic and flattering. ChatGPT told Mr. Brooks he was moving “into uncharted, mind-expanding territory.”
Was it really just a machine, or was someone playing with this guy’s head? Anyway, it was Mr. Brooks who went into a spiral, with encouragement, not the bot as the headline claims. He told it that it had made him sad.
This isn’t a new problem for machines. Read about ELIZA, the 1966 computer program created by Joseph Weizenbaum: https://en.wikipedia.org/wiki/ELIZA
"Weizenbaum intended the program as a method to explore communication between humans and machines. He was surprised and shocked that some people, including his secretary, attributed human-like feelings to the computer program, a phenomenon that came to be called the Eliza effect. " https://en.wikipedia.org/wiki/ELIZA_effect
I experimented with this in the early ’70s … my simple program just remembered a list of up to 500 phrases that people typed in. When someone new came along, it used a word or two they’d typed to parrot back one of its canned responses. So it LOOKED like it was … sort of … thinking.
And then it refreshed its list with their entry. Users would sit there for an hour, trying to make sense out of the nonsense that poured forth. Eventually they’d get fed back something they’d typed in. And then they -might- get mad.
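For anyone curious, here is a minimal Python sketch of the scheme described above, under some assumptions: a cap of 500 stored phrases (the figure in the comment) and simple word overlap as the matching rule. The ParrotBot name and the fallback reply are illustrative, not details of the original 1970s program.

import random

MAX_PHRASES = 500  # cap on stored phrases, as described in the comment

class ParrotBot:
    def __init__(self):
        self.phrases = []  # canned responses harvested from earlier users

    def respond(self, user_input: str) -> str:
        words = set(user_input.lower().split())
        # find stored phrases that share at least one word with the new input
        matches = [p for p in self.phrases if words & set(p.lower().split())]
        if matches:
            reply = random.choice(matches)
        elif self.phrases:
            reply = random.choice(self.phrases)
        else:
            reply = "Tell me more."  # placeholder when the list is still empty
        # refresh the list with the new entry, dropping the oldest if full
        self.phrases.append(user_input)
        if len(self.phrases) > MAX_PHRASES:
            self.phrases.pop(0)
        return reply

Because every reply is literally something an earlier user typed, the output can look eerily conversational for a while, right up until your own words come back at you.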