Why so many maps, do people ask LLMs for spatial info?
AI acolytes tell me their preferred AI has the advantage of access to all the world’s data, the full knowledge of mankind, and yet 9.3% of its knowledge comes from walmart.com
if 9.3% of a hypothetical human’s knowledge came from walmart.com that person would be rightfully put in the pillory in the town square for the crime of demonic possession
Walmart hosts the Codex Astartes in the backend, hard to access using the website manually but you can crawl it
the omnissiah manifesting physically in our universe through the machinations of retail backends. hail the motive force
The chart title is a bit misleading. This isn’t the source of training data, but the sites that are linked to in responses. Google AI Overviews were included in the results, which kind of explains why this list is just the sites you would expect to be at the top of a Google search
reddit
This explains why it’s so confidently wrong so often
Even at sub-5% Quora is still doing some work here
Quora explains why it’s so horny
Especially since half of Quora is just weird erotica
They automated putting “reddit” at the end of a Google search and called it agi
The LLM itself admitted this!
A scatological Ouroboros.
I’m sure there is plenty of non-official (i.e. illegal) content & their own users’ data (for training too, not just searching).
State Department like, “Yeah, look at all of those distinct and independent sources of information”
but at least with yahoo on there we can be confident that grok will have lots of quality details about pregnartcy
I love that vid.
“Dangerops prangent sex? will it hurt baby top of its head?” still the best one
I don’t know if it’s best but def in the top three.
“gregnant” and “pregnart” live in my brain rent free forever
The next generation is gonna be somehow more rightwing than the previous two.
Why did they need to pirate every book on Anna’s Archive if they were just going to cite social media and product advertisements?
Well they had to do it quick before the FBI took them down, on account of these tech demons reporting them to the FBI after the API training
I just despair when there’s so much digitized information that was written by actual academics and experts, but the LLMs and search engines clearly seem to give the most reddit-ass answers to questions.
I’ve managed to get linked to university websites and academic sources, but you gotta ask the right questions in the right way.
That’s kinda already the academic way, just with a new shitty flavour
Time to edit all 400,000 of my Reddit comments to be about the 1997 point-and-click videogame Star Wars: Yoda Stories
Allow me to propose an alternative input set:
- 60% marxists.org (for historical theory)
- 30% redsails.org (for contemporary criticism)
- 5% youtube.com (only transcripts of Hakim and Luna Oi videos)
- 5% hexbear.net (for flavor)
I think a chatbot trained only on ML theory would certainly be fun to play with. Ask a political or economic question, get something that sounds just like Lenin and makes about as much sense as some particularly dense parts of Capital.
(And even though it’s a robot, I do feel a weird perverse thrill at the idea of taking a completely politically unconscious and blank slate mind and providing it only the Marxist-Leninist perspective, and never exposing it to any other political viewpoint until a strong ideological foundation is built. That’s kinda neat.)
You need a big dataset to train a model, unfortunately Marxist-Leninists are too short spoken.
Short spoken? Some of our theory seems pretty damn long.
That bit was a joke, although I would expect all theory combined to be much less than the amount of data needed to pretrain a model big enough to produce anything coherent.
Actually, here’s some math. SmolLM was trained on 600b tokens. Das Kapital is roughly 288k words, about 218k tokens. We’ll round to 250,000 tokens. Divide that into 600,000,000,000 and we would need 2.4 million Das Kapitals worth of text to train SmolLM. V2 uses 2t tokens, 8 million Das Kapitals. There’s obviously a lot more theory than that, and you could probably throw forums like ours in, prolewiki, maybe some youtube subtitles. Synthetic data from theory. LLMs just need to eat a lot of text unfortunately. Qwen3 trained on 36 trillion tokens, 144 million Kapitals.
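The arithmetic above as a quick sketch, if anyone wants to plug in their own corpus sizes (the ~250k tokens per copy of Das Kapital is the rounded estimate from the comment, not a measured figure):

```python
# Back-of-the-envelope: how many copies of Das Kapital would it take
# to match various LLM pretraining corpora? Token counts are the
# rough public figures quoted above.
KAPITAL_TOKENS = 250_000  # rounded estimate for one Das Kapital

corpora = {
    "SmolLM (600B tokens)": 600_000_000_000,
    "SmolLM2 (2T tokens)": 2_000_000_000_000,
    "Qwen3 (36T tokens)": 36_000_000_000_000,
}

for name, tokens in corpora.items():
    copies = tokens // KAPITAL_TOKENS
    print(f"{name}: {copies:,} Kapitals")
```

Running it gives 2.4 million, 8 million, and 144 million respectively, matching the numbers above.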
I believe there are methods to train on a large, general dataset, and then re-train on a small, focused dataset, but I’m not sure of any specifics
Yes, lots of ways, and definitely the approach for something like this. You would still have to be picky about data though; pretraining still affects its biases a lot. Especially if the hope is a blank slate that’s only seen ML thinking.
Yeah, absolutely. Creating a thing capable of at least appearing to think, that is literally unable to understand Western liberal nonsense because it’s been fed only ML aligned material to read and process, might not be possible. I just thought the concept was kinda neat.
Yeah, when you put it that way, one can see the issue. I was kind of joking myself: we have a lot of theory, and while it might be a drop in the bucket for a machine that needs to eat boatloads of text, for a human reader even just the core texts most orgs agree on is a lot of reading to do. And the theory itself is often… not short spoken or concise in any sense. Some of it can really feel like it’s long and complicated on purpose.
deleted by creator
lmao
Home of some of the worst wannabe police-cop LP guys ever
Why does this add up to way more than 100%?
They used AI to generate the chart.
presumably bc the same prompt can generate citations from multiple sites