Ultimately every answer will only ever be an approximation, and there will never be any certainty about its correctness.
tbh that kinda sounds like it’s “thinking” though, just that it’s not very good at it at all
That’s the easiest way to describe it to people, but it isn’t. It’s just math doing this.
The undefeated argument for explaining it to laypeople is to show just how “linear” the process for an LLM is compared to human thought. When you prompt the LLM, all it ever does is take your input, turn it into a sequence of mathematical objects, and push them through a really long chain of matrix multiplications that lands on an output, which gets converted back into language. At no point does it branch off to introspect, consider, recall, or reflect on anything the way a human does when we’re asked a question. It’s not thinking.
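If it helps to picture just how straight that line is, here’s a toy sketch in numpy (a made-up five-word vocabulary, random weights, nothing remotely like a real model) of the whole “text in, matrix multiplications, text out” pipeline, with no branching anywhere:

```python
# Toy sketch only: pretend vocabulary, random weights, one "layer".
# The point is the shape of the pipeline, not the quality of the output.
import numpy as np

rng = np.random.default_rng(0)

vocab = ["the", "cat", "sat", "on", "mat"]                    # made-up vocabulary
token_ids = [vocab.index(w) for w in "the cat sat".split()]   # "tokenize" the prompt

d = 8                                                         # arbitrary embedding size
embeddings = rng.normal(size=(len(vocab), d))                 # lookup table: token -> vector
W1 = rng.normal(size=(d, d))                                  # "layer" weights
W2 = rng.normal(size=(d, len(vocab)))                         # projection back onto the vocabulary

x = embeddings[token_ids].mean(axis=0)                        # squash the prompt into one vector
h = np.tanh(x @ W1)                                           # matrix multiply + squashing function
logits = h @ W2                                               # another matrix multiply
probs = np.exp(logits) / np.exp(logits).sum()                 # softmax: scores -> probabilities

print(vocab[int(np.argmax(probs))])                           # most likely "next word" (garbage here,
                                                              # since the weights are random)
```

The real thing is the same deal end to end, just with billions of weights and many more layers: numbers in, one long fixed chain of arithmetic, numbers out.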
I’ve taken to calling them “synths” because what is it doing that’s fundamentally different from a 1980s CASIO? A simple input returning a complex output? waow
Honestly I think if the term “cybernetics” had won over “artificial intelligence” there’d be less of this obfuscation. But “AI” is more marketable, and of course that’s all that matters.
Gippity, the technical term is gippity.
i don’t want to argue w/ people all day but it was a joke
sounds like pretty much any and all thinking to me, people don’t “know” things, they think they know things. usually they’re right, but memory is weird shit and doesn’t always work properly and there are ten billion and one factors that can influence a person’s recollection of some bit of information. i was like “woah the magic conch is just like me fr fr”
p.s. I do wanna argue though that while i don’t think chatgpt thinks, I do think that consciousness is an emergent property and with enough things like chatgpt all jumbled together you might see something resembling consciousness or thought, at least in a way that if you really interrogate it closely enough you might not be able to meaningfully differentiate it from biological consciousness or thought (which if you really wanna argue could also be reduced to “it’s just math” as well, just math that is way beyond the ability of people to determine. I mean if you had magical deterministic information of the position and interaction of every neuron and neurochemical and every related cellular process etc and could map out and understand it you could look at it and shrug and go “it’s just math” too, j/s doggggggggg)
this is where I’d press a disable inbox reply button IF I HAD IT
Here’s one easy way to differentiate it: my brain is wet and runs on electrochemical processes powered by food. Is that a “significant” difference? That depends on what you think is worth tracking! Defining what counts as “functionally identical” requires you to decide which features of a system are “functional” and which are “mere” cosmetic differences. That differentiation isn’t given to us by nature, though, and already reflects a hefty series of evaluative judgements. By defining our functions carefully enough, we can call any two things “functionally identical.” There’s no right answer, which is both a strength and a limitation of this kind of functionalist framework. Both the AI boosters and the AI “impossibilists” miss this point: functional identity is perspectival, and encodes a bunch of evaluative assumptions about which differences do and don’t matter. That’s ok (all model building does that), but it’s important not to confuse the map and the territory, or to think we’re identifying some kind of value-independent feature of the world when we attribute functional identity.
you don’t think in language and words tho
am i missing this being sarcastic in a way because people do think in language and words
you have a running inner monologue, sure, but when you solve something like how many b’s in blackberry, do you honestly say you’re thinking in words about the problem?
you have concepts/ideas/pictures/words/signs/symbols whizzing by that aren’t embodied in words until you want them to be. And until you engage in rechecking/reflecting, i don’t think it’s very likely this thinking is in language; it’s more like you can interpret flashes of thought into words if you decide to dwell on them, but you’re not required to, and i don’t think ordinary engagement with imagination requires language. (could have sworn i linked some article about math/language/fMRI showing that thinking about ideas (math in that case) isn’t exactly located in the language areas of the brain)
look, i’m not a linguist so i’m not going to make the proper argument here, but the defining features of our type of human are the specific adaptations for language; how people behave is culturally defined, and culture is understood and communicated through language.
frankly, likening the experience of sensations to knowledge of them without language sounds very silly to me.
reducing ideas to sensations is some sensualist reductivism (sensations are what we get from the outside world through our sensory organs; thoughts are your brain stuff doing something). i can do math or imagine things without an inner voice vocalizing it, unless the language comprehension area of the brain is lowkey involved in this. Of course with higher-order thinking, reflections/comparisons start to slow down and you can start to employ language internally to hold an idea for a while longer. (i am a language-of-thought simp i guess)
Language is a medium for transmitting ideas (to another, implied, person), not a medium of ideas itself. You can have an idea without language, but you cannot have language without ideas; it would just be a bunch of non-sense (as in: not carrying any sense). (as an aside, social conformity can be transmitted by body language perfectly well).
i’m not trying to do reductivism (this is admittedly outside my expertise), i simply don’t understand how you square this concept of ideas existing outside language when that’s inexpressible without language.
i hope i didn’t make it sound like verbal speech was the key here; the muscle and bone adaptations that make complex speech possible were accompanied by brain stuff. people whose disabilities make some forms of language inaccessible still use language!
I think Helen Keller has some writings on her experience of learning language late in childhood that strengthen your point.
neither does the computer!!!
I think chatgpt is basically a computer equivalent of figuring out language processing to an alright degree, which is like p. cool, and I guess enough to trick people into thinking the mechanical turk has an agenda, but yeah, still not thinking
i guess my issue is that neural networks as they exist now can’t have properties like that emerge; they are fitting to data to predict the next word in the best way possible, or the most probable one in an unseen sentence. It’s not how anybody learns, not mice, not humans.
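for whatever it’s worth, the crudest possible picture of “fitting to data to predict the next word” is just counting which word follows which in some text and always guessing the most frequent follower. real models do it with gradient descent over billions of parameters instead of a tally, but it’s the same flavor of objective (toy sketch, obviously not how an actual network is trained):

```python
# Toy "next word predictor": tally which word follows which in a scrap of
# training text, then predict the most frequent follower. Not how a neural
# net does it, but the same next-word objective in miniature.
from collections import Counter, defaultdict

text = "the cat sat on the mat and the cat slept on the rug".split()

follows = defaultdict(Counter)
for current, nxt in zip(text, text[1:]):
    follows[current][nxt] += 1          # after `current`, we saw `nxt` one more time

def predict_next(word):
    # "best way possible" here just means the most frequent follower in the data
    return follows[word].most_common(1)[0][0]

print(predict_next("the"))              # -> "cat" (follows "the" twice; "mat" and "rug" only once)
```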
Something akin to the experiments with free-floating robot arms with bolted-on computer vision seems like a much more viable approach, but there the problem is they don’t have the right architecture to feed it into, at least i don’t think they do, and even then it will probably stall out for a while at animal level.
my problem is at some point they’re gonna smoosh chatgpt and that sort of stuff and other shit together and it might be approximating consciousness but nerds will be like "it’s just math!" and it’ll make commander Data sad n’ they won’t even care
well of course they could; a flawless imitation of consciousness, after all, is the same as consciousness (aside from morality, which will be unknowable), just not here at the moment
A mechanical turk is a fake AI with a human behind it