I really can’t understand this LLM hype (note: I think models used for finding cures for diseases and other scientific work are a good thing. I’m referring to the general-populace LLM hype).
It’s not interesting. To me, computers were so cool and interesting because of what you can do yourself, with just the hardware and learning to code. It’s awesome. What I don’t find interesting in any way is typing a prompt. “But bro, prompt engineer!” That is about the stupidest fucking thing I’ve ever heard.
How anyone thinks it’s anything beyond a parlor trick baffles me. Plus, you’re literally just playing with a toy made by billionaires to fuck over the planet and the rest of us even more.
And yes, to a point I realize “coding” is similar to “prompting” the computer’s hardware…if that’s even an argument someone would try to make. I think we can agree it’s nowhere near the same thing.
I would like to see if there’s a correlation between TikTok addicts and LLM believers. I’d guarantee it’s very high.
Internal consistency is also usually considered a good thing. Any individual sentence an LLMbecile generates is usually grammatically correct and internally consistent (though I have caught sentences whose endings contradicted their beginnings here and there), but as soon as you reach a second sentence the odds of finding a direct contradiction mount.
LLMbeciles are just not very good for anything.
Some models are better than others at holding context, but they all wander at some point if you push them. Ironically, the newer versions with a “thinking mode” are worse at this: the thinking stretches the context out and they start second-guessing even correct answers.
Indeed. The reasoning models can get incredibly funny to watch. I had one (DeepSeek) spin around for over 850 seconds, only for it to come up with the wrong answer to a simple maths question.