Mmm, my local Aldi was dirty with sad looking produce and really cheap tasting boxed stuff last time I went. Not even close to Costco TBH, and not even that cheap either.
This is in the southern US though, could be totally flipped elsewhere.
Vaccines are gigantic cost savers. Even assuming most “tough it out,” vaccines are massively cheaper than treating those who show up to the ER.
+1
Don’t feed the trolls.
Knee-jerk reacting to Trump’s stupidity by finding something ancillary to blame on him only feeds the troll in chief. It’s disrespectful.
Ignore the rage bait, apply Occam’s razor based on what’s public, and let the investigators do their jobs.
And once again, Trump controlled the conversation, and all the actual evidence went out the window for most people.
It would be awesome if there was an unspoken “don’t feed the troll” understanding among journalists, influencers, mods, forum commenters, everyone. When Trump says something outrageously stupid, just… briefly acknowledge it, and then ignore it and proceed as usual. Like he doesn’t exist.
That’s a narcissist’s worst fear.
Sure, he’d bounce around in the conservative echo chamber, but at least he wouldn’t pull more people in.
Yes! Try this model: https://huggingface.co/arcee-ai/Virtuoso-Small-v2
Or the 14B thinking model: https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-14B
But for speed and coherence, instead of ollama, I’d recommend running it through Aphrodite or TabbyAPI as a backend, depending on whether you prioritize speed or long inputs. They both act as generic OpenAI endpoints.
I’ll even step you through it and upload a quantization for your card, if you want, as it looks like there’s not a good-sized exl2 on huggingface.
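In the meantime, here’s roughly what talking to either backend looks like once it’s running, since they both speak the OpenAI API. A minimal sketch in Python; the port and model name are placeholders that depend on your config:

```python
# pip install openai — works against TabbyAPI/Aphrodite since they
# mimic the OpenAI API. Port 5000 and the model name are assumptions;
# match them to whatever your backend actually serves.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:5000/v1", api_key="unused")

resp = client.chat.completions.create(
    model="Virtuoso-Small-v2",
    messages=[{"role": "user", "content": "Summarize what an exl2 quantization is."}],
    max_tokens=256,
)
print(resp.choices[0].message.content)
```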
Oh, now I remember. His press conferences during his first term were like rallies, him just blurting out random shower thoughts.
…If the Dems had just advertised his first term conferences verbatim, no extra context, no picking and choosing, just shown them, they would have easily won.
I mean, if you have a huge GPU, sure. Or at least 12GB of free VRAM, or a big Mac.
Local LLMs for coding are kind of a niche, because most people don’t have a 3090 or 7900 lying around, and you really need 12GB+ of free VRAM before the models start being “smart” and even worth using over free LLM APIs, much less cheap paid ones.
But if you do have the hardware and the time to set a server up, the Deepseek R1 models or the FuseAI merges are great for “slow” answers where the model thinks things out before replying. Qwen 2.5 Coder 32B is great for quick answers on 24GB of VRAM. Arcee 14B is great for 12GB of VRAM.
Sometimes running a small model on a “fast” but less VRAM-efficient backend is better for stuff like Cursor code completion.
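For completion-style use like that (rather than chat), the same backends expose the raw completions route, which is what editor integrations tend to hit. A rough sketch; the base URL and model name are placeholders again:

```python
# Hypothetical local setup: adjust base_url/model to your server.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:5000/v1", api_key="unused")

resp = client.completions.create(
    model="Qwen2.5-Coder-32B",
    prompt="def parse_config(path: str) -> dict:\n",
    max_tokens=64,
    temperature=0.2,  # low temperature keeps completions focused
)
print(resp.choices[0].text)
```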
I’m not going to mock and belittle people who were losing their loved ones as a result of Democrat policies for choosing not to partake in the election.
I am.
Many seemed to think that Trump and Biden/Harris had roughly “equivalent” Gaza policy, as evidenced by their surprise at Trump’s actions. That is dangerous misinformation, and it deserves to be called out as a mistake, no matter how tragic the consequences are.
I view our election system as a rigged game, and not participating in swing states as a protest is all but choosing a side, in my eyes. Especially when the consequences are so personally important. The opportunity cost is Democrat votes, and Trump’s gain. In that spirit, I actually registered as a Republican in my home state, only so I could vote against Trumpist politicians wherever I can, as there’s basically no hope for Democrats and my vote would be “thrown away,” relatively speaking. And I can still vote against Republicans in the general election and some other offices that have a shot.
Friend, I vote in primaries, I tout third parties or wings of the Democrats or even Republicans, and I point out how rigged the primary system is, every chance I get. I am not a Biden apologist.
But November 2024 was too freaking late. When your country is getting Hitler or half Hitler, you vote for half Hitler, not whine about them and try to get other half-Hitler voters to abstain when it’s not going to change a thing.
Does it entrench the system? Sure.
More whatabouts.
For all their sins, Harris (or Biden) would not have specifically tried to deport Palestine protestors, or so specifically and openly supported ending Palestine.
And this is just the beginning. That language from the WH sounds like a good setup for classifying Palestine protestors as terrorists.
https://en.wikipedia.org/wiki/Gaza_war_protest_vote_movements#Withdrawal_of_Joe_Biden
On the other hand, Abandon Harris endorsed Green Party candidate Jill Stein, who said she would end all military support to Israel if elected, and the group said that it was “confronting two destructive forces: one currently overseeing a genocide and another equally committed to continuing it.”
Following the loss of Harris, many in the movement felt vindication. Significant portions of the electorate in Dearborn, Michigan, an Arab American majority city, did not vote for Harris.[77] Muslims who voted for Trump, and were thus pivotal in helping him win the three key states of the Rust Belt (Michigan, Pennsylvania, and Wisconsin being Harris’s clearer path for a narrow win in the Electoral College), were subsequently upset that Trump nominated pro-Israel cabinet picks…
Oof… Thanks. I appreciate the history lesson, as they did not teach that little detail in my schools.
This really is reminiscent of early Nazi Germany, with an obsession over trans people (like Jews), and the idea that they’re the root of so much evil, and the constant implication that things would be better if they just go away…
That’s the whole point. Deflect real controversy with stupid sound bites.
My friend, the Chinese have been releasing amazing models all last year, it just didn’t make headlines.
Tencent’s Hunyuan Video is incredible. Alibaba’s Qwen is still a go-to local model. I’ve used InternLM pretty regularly… Heck, Yi 32B was awesome in 2023, as the first decent long-context local model.
…The Janus models are actually kind of meh, unless you’re captioning images, and FLUX/Hunyuan Video are still kings of the diffusion world.
Basically, the world is waiting for the Nvidia monopoly to break and training costs to come down; then we will see…
Depends what you mean by “AI”
Generative models as you know them are pretty much all transformers, and there are already many hacks to let them ingest images, video, sound/music, and even other formats. I believe there are some dedicated 3D models out there, as well as some experiments with “byte-level” LLMs that can theoretically take any data format.
But there are fundamental limitations, like the long context you’d need for 3D model ingestion being inefficient. The entities that can afford to train the best models are “conservative” and tend to shy away from testing exotic implementations, presumably because they might fail.
Some seemingly “solvable” problems, like the repetition issues you encounter with programming, have not had potential solutions adopted either, and the fix they do use (literally randomizing the output) makes them fundamentally unreliable. LLMs are great assistants, but you can never fully trust them as is.
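For the curious, “randomizing the output” means temperature sampling. A toy sketch of the idea, not any particular library’s internals:

```python
import numpy as np

def sample_token(logits: np.ndarray, temperature: float = 0.8) -> int:
    # Scale logits by temperature, then softmax into probabilities.
    # Lower temperature -> sharper, more deterministic; higher -> more random.
    z = logits / temperature
    z -= z.max()  # numerical stability
    probs = np.exp(z)
    probs /= probs.sum()
    # Draw the next token at random: the same prompt can yield
    # different outputs on every run, hence the unreliability.
    return int(np.random.choice(len(logits), p=probs))

logits = np.array([2.0, 1.0, 0.5, -1.0])
print([sample_token(logits) for _ in range(5)])  # e.g. [0, 1, 0, 0, 2]
```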
What I’m getting at is that everything you said is theoretically possible, but the entities with the purse strings are relatively conservative and tend to pursue profitable pure text performance instead. So I bet they will remain as “interns” and “assistants” until there’s a more fundamental architecture shift, maybe something that learns and error corrects during usage instead of being so static.
And as stupid as this sounds, another problem is packaging. There are some incredible models that take media or even 3D as input, for instance… but they are all janky, half-functional Python repos researchers threw up before moving on. There isn’t much integration and user-friendliness in AI land.
It’s literally impossible. I tried to explain it here: https://lemmy.world/comment/14763233
But the short version is OpenAI doesn’t even offer access to the data you need for a “distillation,” as the term is used in the LLM community.
Of course there’s some OpenAI data in the base model, but that’s partially because it’s splattered all over the internet now.
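To be concrete about why the API isn’t enough: distillation, in the classic sense, trains the student to match the teacher’s full probability distribution over the vocabulary, which means you need the teacher’s logits. A simplified sketch of that loss in PyTorch:

```python
import torch.nn.functional as F

def distill_loss(student_logits, teacher_logits, T=2.0):
    # Soften both distributions with temperature T, then minimize KL divergence.
    # This needs the teacher's logits over the ENTIRE vocabulary, per token.
    # OpenAI's API only returns a handful of top logprobs, so you simply
    # can't compute this against their models.
    s = F.log_softmax(student_logits / T, dim=-1)
    t = F.softmax(teacher_logits / T, dim=-1)
    return F.kl_div(s, t, reduction="batchmean") * (T * T)
```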
Almost all of OpenAI’s statements are unfounded. Just watch how the research community reacts whenever Altman opens his mouth.
TSMC allegedly calling him a “podcast bro” is the most accurate descriptor I’ve seen: https://www.nytimes.com/2024/09/25/business/openai-plan-electricity.html
You can’t buy, like, one lime or apple, but they have packages of fresh stuff that are usable for smaller households, especially if you just pick a few things.
And even if you get, like, a bag of onions and use 2/3 of them, the price per used onion is good.
Again, things may be different for different regions. Maybe it’s more like a restaurant supply store there.