

I guess those scientist guys all working on A.I. never gave cocaine and Monster Energy a try.


If you’re looking for AI-generated anti-AI music, we’ve got that (mildly NSFW).


♫I couldn’t wait for you to come and clear the cupboards…
Fun Fact: Artax can speak in the novel.


Ctrl+f “attractor state” to find the section. They named it “spiritual bliss.”


DeepMind keeps trying to build a model architecture that can continue to learn after training, first with the Titans paper and most recently with Nested Learning. It’s promising research, but they have yet to scale their “HOPE” model to larger sizes. And with as much incentive as there is to hype this stuff, I’ll believe it when I see it.


Everyone seems to be tracking the main cause, overlap in training data, so I’ll offer a few other factors. System prompts converge in a similar way: once a section has proven useful for post-training alignment, some version of it ends up in every model’s system prompt.
Another possibility is that there are features of the semantic space of language itself that act as attractors. Anthropic demonstrated (and poorly named) one such ontological attractor state in the Claude model card, and it’s commonly reported in other models.
A low sampling temperature makes the model less likely to generate a nonsense response, but also less likely to come up with an interesting or original name. Models tend to run at mid/low temperature by default, though there’s some work being done on dynamic temperature.
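A minimal sketch of what temperature does at the sampling step (toy logits and made-up candidate names, not output from any real model):

```python
import numpy as np

def sample(logits, temperature, rng):
    """Temperature-scaled softmax sampling over a toy vocabulary."""
    scaled = np.asarray(logits) / temperature
    probs = np.exp(scaled - scaled.max())  # shift for numerical stability
    probs /= probs.sum()
    return rng.choice(len(probs), p=probs)

rng = np.random.default_rng(0)
logits = [4.0, 3.5, 1.0]  # hypothetical scores for "Jennifer", "Sarah", "Zephyrine"
for t in (0.2, 1.0, 1.5):
    picks = [sample(logits, t, rng) for _ in range(1000)]
    print(f"T={t}:", np.bincount(picks, minlength=3) / 1000)
# Low T piles nearly all the mass on the top token; higher T lets the rare name through.
```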
The tokenization process probably has some effect for cases like naming in particular: a common name like Jennifer is a single token, something like Anderson is two, and a more unique name has to be assembled from more tokens in combinations the model is less likely to produce.
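This is easy enough to check with a tokenizer. A quick sketch using OpenAI’s tiktoken, assuming it’s installed (the exact splits are whatever cl100k_base happens to do, so treat the counts as illustrative):

```python
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")
for name in ("Jennifer", "Anderson", "Zephyrine"):
    ids = enc.encode(name)
    # Print the token count and the string piece each token decodes to.
    print(name, "->", len(ids), "token(s):", [enc.decode([i]) for i in ids])
```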
Quantization decreases lexical diversity, and it’s applied in fairly uniform ways across models (though not all models are quantized). Similarities in RLHF implementation probably also have an effect.
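A toy illustration of the mechanism, quantizing logits to int8 for simplicity (real schemes quantize weights and activations, usually per-channel with calibrated scales, but the bucketing effect is the same idea):

```python
import numpy as np

def quantize_int8(x):
    """Symmetric int8 quantization: snap values to 255 evenly spaced buckets."""
    scale = np.abs(x).max() / 127.0
    return np.round(x / scale).astype(np.int8) * scale

logits = np.array([4.01, 4.00, 1.00])  # two near-tied candidates and a long shot
print(logits, "->", quantize_int8(logits))
# The 0.01 gap between the top two falls inside one bucket, so they come out
# exactly equal: the small distinctions that occasionally favored the rarer
# choice are erased.
```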
And then there’s prompt variety. There may be enough similarity in the way a question/prompt is usually worded that the range of responses is constrained. Some models will give more interesting responses if the prompt barely makes sense or is in 31337 5P34K, a common method to get around alignment.

Good observation. One of the protections from this kind of appropriation is peer review (in every sense). As the amount of information available for review has shot up, the available time of peers capable of reviewing it has not risen to compensate. Additionally, we’re developing more and more ways to compromise peer review itself. I think this is (in part) to blame for the rise of fascist elements in the world. This pressure (combined with the ratcheting up of climate change realities) has put humanity in a precarious position.
I didn’t say they were. I said A.I. is. The delay makes remote control difficult at long distances. Hence, AutoNav.


I’m super curious about the small text underneath.


I’m embarrassed I skimmed right over that on a first glance. Looks like the original was hematite. I’m going to pretend I was making a bad LotR joke.
Ah, I’d never heard that. Found the image and it appears to have originally been hematite.
I’m confused by the matted-out area. Is it there to anonymize the ring? Something written on it, maybe?
Aliens


Empathize as in understand motivations and perspectives: 8
With some effort to communicate, I can usually understand how someone got where they are. It’s important to me to understand as many ways of being as possible. It’s my job to understand people, but the bigger motivation is that it bugs me if I don’t understand the root of a disagreement. Of course, this doesn’t mean I condone their perspective, believe it’s healthy/logical, or would recommend it wholesale to others.
Just pointing out the definition of A.I. that I am using in this context.
GPTs are based on a deep learning architecture called the transformer. Deep learning is a subset of machine learning, which is itself a subset of artificial intelligence. -Wikipedia
claude.ai for text, krea.ai for media, and huggingface.co for everything else. Until the tides change again, anyway. The wave of enshittification is ever swelling.