That’s a very toxic attitude.
Inference is, in essence, the process of generating the AI's response. So when you run an LLM locally, you are using your GPU only for inference.
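To make that concrete, here is a toy sketch of what inference means: the model repeatedly predicts the next token from the tokens so far. A real LLM does this with a neural network forward pass on the GPU; the lookup table below is a hypothetical stand-in just to show the loop.

```python
def next_token(context):
    # Hypothetical stand-in for the neural network's forward pass.
    table = {
        ("What", "is"): "inference",
        ("is", "inference"): "?",
    }
    return table.get(tuple(context[-2:]), "<eos>")

def generate(prompt_tokens, max_new_tokens=10):
    tokens = list(prompt_tokens)
    for _ in range(max_new_tokens):
        tok = next_token(tokens)  # one "inference step" per new token
        if tok == "<eos>":
            break
        tokens.append(tok)
    return tokens

print(generate(["What", "is"]))  # ['What', 'is', 'inference', '?']
```

Each appended token costs one forward pass, which is why generation speed is bound by how fast your GPU can run the model.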
The whole startup industry relies on investors to cover its costs for years while operating at a loss, in order to gain a bigger market share. Look at Netflix, Facebook, WhatsApp, etc.
So by buying an account you are increasing their market share.
But feel free to use Mistral, DeepSeek, etc.; that would be better.
I mean, you could swap the network card of your Intel computer for one with at least 2.5 Gbps, install Linux, create a share, and use the Ryzen mini PC for managing and transcoding media files, but that would complicate your setup and wouldn't be very energy efficient.
No, by "the CPU is an odd choice" I meant the i7-6700K. The Ryzen CPU is quite recent, very powerful, and energy efficient. Again, for a multimedia system you need a big case like the Define one, with lots of SATA ports and drive bays. The Intel one checks those boxes, but you need a better processor, or alternatively a discrete GPU like the Intel Battlemage, to get hardware encoding acceleration.
The mini PC is nice but not suitable as a multimedia machine, as it lacks SATA ports and drive bays. You could use it as a router instead, e.g. running OPNsense.
With friends like the US who needs enemies?
Why are you buying a rack when neither of your machines is rack-mountable?
What is your use case? What are you using the big PC and the small one for? And why Unraid and not, for example, TrueNAS Scale?
If you are planning on using the big machine for multimedia, be aware that the i7-6700K doesn't support hardware encoding/decoding of 10-bit HEVC (which most 4K HDR content uses), and the CPU is not powerful enough for live 4K software transcoding.
This CPU is quite an odd choice, though: it is 10 years old and, in my opinion, extremely outdated. Get at least an 8th- or 9th-gen chip, which at least supports this.
Please show me an LLM that is truly open source. My understanding is that most "open" models are only open-weights. For the record, Mistral also releases open-weights models.
Let the witch hunt begin.
What is amazing in this case is that they managed to spend only a fraction of what OpenAI pays for inference.
Plus, they are a lot cheaper. But I am pretty sure the American government will ban them in no time, citing national security concerns, etc.
Nevertheless, I think we need more open source models.
Not to mention that NVIDIA also needs to be brought to earth.
You do realize that Biden wasn't much better, right? I am not an American, but with your duopoly of a voting system and the level of AIPAC influence over your politicians, there isn't much of a choice.
Oh and for the record, I think Trump is worse than him in a lot of areas, but Biden wasn’t great either.
That would be mine, but I am afraid all the stock will be snatched up, even at inflated prices.
Great choice mate. Enjoy it.
Additionally, for machine learning you need as much VRAM as you can possibly get. So consider the 5060 Ti, 4070 Ti, or 5070 Ti if 16 GB is enough; otherwise maybe a 3090 Ti, 4090, or 5090.
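As a rough rule of thumb (an approximation, not an exact figure), the weights alone need roughly one byte-per-parameter times the parameter count, plus some overhead for the KV cache and activations. The overhead factor below is an assumption for illustration:

```python
def vram_gb(params_billion, bytes_per_param=2, overhead=1.2):
    """Ballpark VRAM (in GB) needed to run inference on a model.

    bytes_per_param: 2 for FP16, 1 for 8-bit, 0.5 for 4-bit quantization.
    overhead: assumed fudge factor for KV cache / activations.
    """
    return params_billion * bytes_per_param * overhead

# A 7B model in FP16 comes out around 7 * 2 * 1.2 = 16.8 GB -- already
# tight on a 16 GB card -- while 4-bit quantization drops it to ~4.2 GB.
print(round(vram_gb(7), 1))       # FP16
print(round(vram_gb(7, 0.5), 1))  # 4-bit
```

This is why 16 GB cards are comfortable only for smaller or quantized models, and why the 24 GB+ cards matter for anything bigger.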
https://openvpn.net/connect-docs/how-to-use-tasker.html maybe something similar can work for your use case
No human being deserves this, and it pains me that he is not the only one. There are so many kids who were robbed of their childhood dreams, their innocence, and even their future.
Maybe if Reddit weren't saving the IP addresses of its users, film companies wouldn't be able to retrieve that information even if they wanted to.
And where is the proof that something similar has happened? Didn’t TikTok comply with every request of the US government?
And what about the lack of fact-checking on Meta and X? Isn't it a concern for you that they will spread misinformation even faster now? Where are their fines?
At least they increased the VRAM of this card, while the VRAM of all the other cards stayed the same, which is certainly a pity and a subtle way for NVIDIA to upsell its stack.
The same can be said for Meta and X. They even stopped moderating them. So why should we ban TikTok while allowing Meta and X to keep operating? I am all for greater scrutiny of social media, but it should be applied equally across all platforms.
The problem is that NVIDIA is consistently gimping the mid-range, making it a very unattractive proposition.