For reference, this happens because LLMs aren’t “next word predictors” but rather “next token predictors”. Each word is broken into tokens, probably ‘blue’ and ‘berry’ in this case. The LLM has no access to information below the token level, which means it can’t count letters directly; it has to rely on the “proximity” of tokens in its training data. Because there’s a lot on the Internet about the letters in ‘strawberry’, it counts the r’s instead of the b’s in ‘berry’. Chain-of-Thought (CoT) models like Deepseek-reasoner or ChatGPT-o3 feed their output back into themselves and are more likely to output the text ‘b l u e b e r r y’, which is the trick here: once the word is spelled out, each letter gets its own token. The lack of sub-token information isn’t a critical flaw and doesn’t come up often in real-world use cases, so there isn’t much energy dedicated to fixing it.
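To make the token-blindness concrete, here’s a minimal sketch with a toy two-entry vocabulary (the real BPE vocabulary and the IDs are made up for illustration): the model only ever sees the integer IDs, from which the letter counts are unrecoverable, while spelling the word out restores one token per letter.

```python
# Toy stand-in for a BPE tokenizer. The vocabulary and IDs are
# invented for illustration; real vocabularies have ~100k entries.
vocab = {"blue": 4027, "berry": 8159}

def tokenize(word):
    # Greedy longest-match split, a crude approximation of BPE merging.
    tokens, i = [], 0
    while i < len(word):
        for j in range(len(word), i, -1):
            if word[i:j] in vocab:
                tokens.append(word[i:j])
                i = j
                break
        else:
            raise ValueError(f"no token covers {word[i:]!r}")
    return tokens

tokens = tokenize("blueberry")
ids = [vocab[t] for t in tokens]
print(tokens, ids)  # the model receives only the IDs

# From the ID sequence alone, "how many b's?" is unanswerable:
# nothing in 4027 or 8159 encodes the letters inside them.
# Spelling the word out gives each letter its own token, which is
# why the 'b l u e b e r r y' trick lets a CoT model count:
spelled = " ".join("blueberry")
print(spelled, "->", spelled.count("b"), "b's")
```

The point of the sketch is the gap between `"blueberry".count("b")` (trivial at the character level) and anything computable from `[4027, 8159]` (impossible without memorized spelling knowledge).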