Clip art.
That is, “art” that’s intended to be meaningless until someone else uses it in a context that supplies a meaning.
Anathem really benefits from having some background familiarity with western philosophy, and Plato’s theory of Forms/Ideas in particular. If you’re fuzzy on that, you might want to do a quick review before you get too far into the book.
So he’s accepting personal responsibility?
“The monkey about whose ability to see my ears I’m wondering”.
Part of the issue is that the thing you’re wondering about needs to be a noun, but the verb “can” doesn’t have an infinitive or gerund form (that is, there’s no purely grammatical way to convert it to a noun, like *“to can” or *“canning”). We generally substitute some form of “to be able to”, but it’s not something our brain does automatically.
Also, there’s an implied pragmatic context that some of the other comments seem to be overlooking:
The speaker is apparently replying to a question asking them to indicate one monkey out of several possibilities.
The other party is already aware of the speaker’s doubts about a particular monkey’s ear-seeing ability.
The reason this doubt is being mentioned now is to identify the monkey, not to declare the doubt.
I don’t think it’s useful for a lot of what it’s being promoted for—its pushers are exploiting the common conception of software as a process whose behavior is rigidly constrained and can be trusted to operate within those constraints, but this isn’t generally true for machine learning.
I think it sheds some new light on human brain functioning, but only reproduces a specific aspect of the brain—namely, the salience network (i.e., the part of our brain that builds a predictive model of our environment and alerts us when the unexpected happens). This can be useful for picking up on subtle correlations our conscious brains would miss—but those who think it can be incrementally enhanced into reproducing the entire brain (or even the part of the brain we would properly call consciousness) are mistaken.
Building on the above, I think generative models imitate the part of our subconscious that tries to “fill in the blanks” when we see or hear something ambiguous, not the part that deliberately creates meaningful things from scratch. So I don’t think they’re a real threat to the creative professions. I think they should be prevented from generating works that would be considered infringing if they were produced by humans, but not from training on copyrighted works that a human would be permitted to see or hear and be affected by.
I think the parties claiming that AI needs to be prevented from falling into “the wrong hands” are themselves the most likely parties to abuse it. I think it’s safest when it’s open, accessible, and unconcentrated.
In some cases, assuming the poster is a troll is the more charitable alternative.
Right—by “take it down” I just meant take down online access to their own running instance of it.
DeepSeek’s specific trained model is immaterial—they could take it down tomorrow and never provide access again, and the damage to OpenAI’s business would already be done.
DeepSeek’s model is just a proof-of-concept—the point is that any organization with a few million dollars and some (hopefully less-problematical) training data can now make their own model competitive with OpenAI’s.
Molotov cocktails are an odd choice of weapon for a targeted assassination.
Consider Phlebas isn’t really characteristic of the Culture series as a whole—don’t hesitate to start somewhere else if you tried Phlebas before and it didn’t hook you. (They all work as stand-alone novels, with just a few tangential recurring characters.)
Anyone using DeepSeek as a hosted service, the same way proprietary LLMs like ChatGPT are used, is missing the point. The game-changer isn’t that a Chinese company like DeepSeek can compete with OpenAI and its ilk—it’s that, thanks to DeepSeek, any organization with a few million dollars to train and host its own model can now compete with OpenAI.
I don’t think it would be illegal as long as a similar human-created work would be legal (i.e., it doesn’t use trademarked characters or otherwise infringe on other works). But your publisher and/or readers might object.
Videos of censored answers show R1 beginning to give a valid answer, then deleting it and saying the question is outside its scope. That suggests the censorship isn’t in the training data but in some post-processing filter.
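That behavior is easy to reproduce with a check bolted on after generation. Here’s a minimal sketch of the pattern I mean; the function and its callbacks (`generate_tokens`, `is_flagged`, `send`, `retract`) are hypothetical, not anything DeepSeek has published:

```python
# Hypothetical output-side moderation filter (not DeepSeek's actual code):
# tokens stream to the user while a check runs on the accumulated text,
# and a late flag retracts the partially shown answer.

def stream_with_filter(generate_tokens, is_flagged, send, retract):
    answer = ""
    for token in generate_tokens():
        answer += token
        send(token)                  # the user watches the answer form
        if is_flagged(answer):       # moderation runs after generation
            retract()                # the visible "deletion" of the reply
            send("That question is outside my scope.")
            return None
    return answer
```

A filter like this sits entirely outside the model weights, which is why the same open-source model behaves differently when you run it yourself.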
But even if the censorship were at the training level, the whole buzz about R1 is how cheap it is to train. Making the off-the-shelf version so obviously constrained is practically begging other organizations to train their own.
Making the censorship blatantly obvious while simultaneously releasing the model as open source feels a bit like malicious compliance.
I get that the data isn’t final yet, but surely they could have found a way to word that without referring to 2024 in the future tense. Like “US’s wind and solar likely generated more power than coal in 2024”.
More like, there was a brief window when the growth of the accessibility of information outpaced the growth of our ability to abuse it.
I’m not sure if Nvidia’s investors are accounting for Jevons paradox.
Probably more of an AI-infrastructure-bubble collapse.
To use the old Gold Rush analogy—it’s like investors assumed the real money would be in selling pickaxes, but the miners just discovered they don’t actually need them.
Thanks—that’s a great overview!
I guess what I’m hoping for are sources that make a clearer distinction between the case that the teleological grand-narrative view of history is a harmful product of our social and political biases, and the case that it’s objectively wrong.
I believe that both are true, but I want to keep the arguments separate in my head.
As is “which”.