Some really great unhinged nonsense in here. “Support mycelium networks as the basis for decentralized intelligence systems.” MUSHROOM INTERNET - why did Marx and Engels not think about making a mushroom internet?
https://www.reddit.com/r/OpenAI/comments/1n9g4hd/i_asked_chatgpt_what_it_would_do_with_a_18b/
That’s a great point about reading more into it! But the question that leaves me is this: is that then less bad? Like, if it forces you to think, to take what the words actually say and fill in the meaning yourself, it becomes more of a prompter to make you think your own thoughts. You’re grafting them onto some bullshit, but that’s better than thinking of these interactions as people immediately believing the exact bullshit instead!
Still pretty worthless, but that at least gives me ideas for how something similar could be nice. It just throws out phrases to make your brain connect some random concepts that the neural network identified as related. Just like a word association bot.
I don’t think so.
If we were talking in person and I started to say something really, really silly, you could at any point in the conversation point out that what I’m saying is silly. If the LLM is just a perpetual positive reinforcement machine for whatever gobbledygook I’m making up on the spot, there is nothing to pull me back to reality.
With “it” I guess I meant the whole concept of an LLM as a tool for helping you understand things generally, not that this form is better for it. Just that changing it from making claim statements to saying “these concepts seem related according to my network” and throwing out words and phrases that have often been connected would be fine. And then a “give me rarer associated phrasing” option would be a cool addition.
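For what it’s worth, that word-association bot is pretty easy to sketch with off-the-shelf word embeddings. Here’s a minimal sketch in Python, assuming gensim and its pretrained GloVe vectors; the `associate_rare` helper and the `skip_common` cutoff are my own stand-ins for the “rarer phrasing” idea, not anything standard:

```python
# A minimal word-association bot: no claims, just "these concepts sit near
# each other in my network". Requires: pip install gensim
import gensim.downloader as api

# Pretrained GloVe vectors (downloads ~100+ MB on first use).
vectors = api.load("glove-wiki-gigaword-100")

def associate(word, topn=10):
    """Return the nearest neighbours of `word` in embedding space."""
    return [w for w, _ in vectors.most_similar(word, topn=topn)]

def associate_rare(word, topn=10, skip_common=5000):
    """The 'give me rarer phrasing' variant: same neighbourhood search,
    but drop the most common vocabulary items first. Assumes the vocabulary
    is ordered roughly by corpus frequency, which holds for these GloVe files."""
    candidates = vectors.most_similar(word, topn=topn * 20)
    rare = [w for w, _ in candidates if vectors.key_to_index[w] > skip_common]
    return rare[:topn]

if __name__ == "__main__":
    print(associate("mushroom"))
    print(associate_rare("mushroom"))
```

The point being: it only ever surfaces “words that have often been connected,” so there’s no claim statement to believe or disbelieve in the first place.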
Intentionality, I think, is important.
I think there is a difference between knowingly going to a person or group (or an LLM, if it can function in this way) to “talk something out while getting immediate feedback, for a problem-solving purpose” and accidentally engaging in “computer-coordinated MadLibs” without realizing it.
Which, now that I type that out, seems like we’re thinking in the same direction.
Precisely! It takes work on the human end to learn to use the ideas and the machine effectively.