By definition, LLMs need massive external input in order to improve, so they can't really be disconnected. Top that off with the fact that they're only useful when you can interact with them from many or remote locations, and there's just no way to really keep them secure. They need massive communication to accomplish anything useful, and there's no real way to keep massive communication secure.
You don't have to improve them out in the field. Just collect metrics on their behavior, train a central model on that data, then upgrade the local model on each unit when it's brought in for maintenance. I'm simplifying, of course. And terrified.
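Roughly the loop being described, as a minimal sketch: units run a frozen model and stay offline, logs only leave the device at the depot, and retrained weights are flashed before the unit goes back out. Everything here is hypothetical and illustrative (the FieldUnit class, offload_metrics, retrain_central_model); it isn't any real fleet-management API.

```python
import hashlib
import json
from dataclasses import dataclass, field

@dataclass
class FieldUnit:
    """A deployed unit running a frozen local model; never network-connected."""
    unit_id: str
    model_version: str
    metrics: list = field(default_factory=list)

    def log_metric(self, record: dict) -> None:
        # Behavior is only recorded locally; nothing is transmitted in the field.
        self.metrics.append(record)

    def offload_metrics(self) -> list:
        # Called at the maintenance depot: hand over the logs, then clear them.
        batch, self.metrics = self.metrics, []
        return batch

def retrain_central_model(all_metrics: list) -> dict:
    """Stand-in for the real training job: returns a new model artifact."""
    digest = hashlib.sha256(json.dumps(all_metrics, sort_keys=True).encode()).hexdigest()
    return {"version": digest[:8], "weights": f"weights-{digest[:8]}.bin"}

def maintenance_cycle(units: list) -> None:
    # 1. Collect behavior metrics from every unit brought in.
    collected = [m for u in units for m in u.offload_metrics()]
    # 2. Train (or fine-tune) the central model on the pooled data.
    artifact = retrain_central_model(collected)
    # 3. Flash the new model onto each unit before it goes back out.
    for u in units:
        u.model_version = artifact["version"]

if __name__ == "__main__":
    fleet = [FieldUnit("unit-1", "v1"), FieldUnit("unit-2", "v1")]
    fleet[0].log_metric({"event": "task_success", "latency_ms": 420})
    fleet[1].log_metric({"event": "task_failure", "latency_ms": 900})
    maintenance_cycle(fleet)
    print([u.model_version for u in fleet])  # both units now run the new version
```

The point of the design is that improvement never requires a live link: the only data channel is the physical maintenance visit.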