I assume the prompt is, go fuck yourself
The framing of ChatGPT as a mind-reader that needs to be coaxed into competence ignores that the model was trained to infer intent from text alone. When results disappoint, the model is doing exactly what it was designed to do. The real issue is prompts that ask for output without specifying constraints, not some mystical failure to read minds.
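To make the point concrete, here's a minimal sketch of what "specifying constraints" amounts to in practice. The helper name and the example constraints are mine, not any library's API; the idea is just that unstated requirements have to become stated ones, because text is all the model gets.

```python
def build_prompt(task: str, constraints: list[str]) -> str:
    """Wrap a bare task with an explicit list of constraints.

    Hypothetical helper for illustration only. The model can't read
    minds, so requirements we hold implicitly must appear in the text.
    """
    lines = [task, "", "Constraints:"]
    lines += [f"- {c}" for c in constraints]
    return "\n".join(lines)


# A vague prompt leaves length, tone, and format to the model's guess:
vague = "Summarize this article."

# A constrained prompt states the same intent explicitly:
constrained = build_prompt(
    "Summarize this article.",
    [
        "at most 3 sentences",
        "plain language, no jargon",
        "include one direct quote",
    ],
)
print(constrained)
```

Nothing magical happens inside the model here; the second prompt simply gives the intent-inference machinery something to infer from.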