On the contrary, I’d argue that it’s entirely feasible to create an artificial intelligence. “All” you need do is replicate the concept of thought - which is a never-ending train of relational contexts that are entirely dependent on the individual’s life experiences. Putting that into practice is a huge job, but arguably not an impossible one. Such a creation, presuming it could create new concepts along the way, would certainly be deserving of the title “AI”.
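To make that slightly less abstract, here’s a toy Python sketch of what I mean by a “train of relational contexts” (the concepts and weights are entirely invented; the graph just stands in for accumulated life experience, and a real system would look nothing like this):

```python
import random

# Toy only: a "train of thought" as an endless walk over a weighted
# association graph. The concepts and weights are invented; in the
# real thing they would come from accumulated life experience.
associations = {
    "rain":     {"umbrella": 0.6, "cold": 0.4},
    "umbrella": {"shop": 0.5, "rain": 0.5},
    "cold":     {"winter": 0.7, "rain": 0.3},
    "winter":   {"snow": 0.8, "cold": 0.2},
    "snow":     {"winter": 0.5, "umbrella": 0.5},
    "shop":     {"umbrella": 1.0},
}

def next_context(current):
    # Pick the next concept, biased by experience-derived weights.
    options = associations[current]
    return random.choices(list(options), weights=list(options.values()))[0]

def train_of_thought(seed, steps=10):
    # A real mind would have no stopping point; we cut off for the demo.
    context = seed
    for _ in range(steps):
        yield context
        context = next_context(context)

print(" -> ".join(train_of_thought("rain")))
```

The “never-ending” part is simply that the loop has no natural stopping point; the demo cuts it off after a few steps.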
Have fun.
The issue is that your interpretation is, at best, a hypothesis. Not a fact. And the only way to prove your hypothesis is to simulate the thought you wish to create.
Others have not managed it yet. But you may be the first. Personally I am not sold on the idea, but I would love to see you prove me wrong.
That is, after all, the point of science.
But linguistics-wise: how is that intelligence artificial?
Artificial merely implies man-made, as opposed to naturally developed, IMO.
As for the hypothesis, a few years ago I took a crack at designing a system like that as an on-paper exercise. The vast majority of it was just… pushing data around and using existing data to suggest new data. Not all that dissimilar to how human beings think, to be honest. The big hurdles were optimisation and context, and allowing the platform to “grow” without letting it metastasise and without improperly restricting it. There are some hardware limitations to consider too - a storage backbone, for one, and interlinking every thread as opposed to having them wholly isolated from each other. There’s the potential for thread interruption too, which as far as I’m aware is not something that any microcode packages support.
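To give a feel for the “use existing data to suggest new data” part, here’s another toy Python sketch (the cap, the threshold, and the concept names are all invented for illustration; this is not the actual on-paper design):

```python
from collections import Counter
from itertools import combinations

# Toy only: "use existing data to suggest new data" by promoting
# concept pairs that keep firing together into new composite concepts.
# The cap is a crude stand-in for the "don't let it metastasise" guard.
MAX_CONCEPTS = 1000     # arbitrary growth limit
PROMOTE_AFTER = 3       # co-occurrences needed before promotion

concepts = {"rain", "umbrella", "cold"}
co_occurrences = Counter()

def observe(active):
    # Record which known concepts fired together in one "thought".
    for pair in combinations(sorted(active & concepts), 2):
        co_occurrences[pair] += 1

def suggest_new_concepts():
    # Promote strongly linked pairs into new composite concepts.
    new = []
    for (a, b), count in co_occurrences.items():
        composite = a + "+" + b
        if (count >= PROMOTE_AFTER and composite not in concepts
                and len(concepts) < MAX_CONCEPTS):
            concepts.add(composite)
            new.append(composite)
    return new

for _ in range(3):
    observe({"rain", "umbrella"})
print(suggest_new_concepts())  # ['rain+umbrella']
```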
But despite all that, I’m still fairly certain one could build an approximation thereof. The complexity of inter-stimuli input (read: input from audio, visual, and potentially tactile endpoints, replicating hearing, vision, and touch) isn’t to be underestimated, though.
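As for what that inter-stimuli input might look like structurally, here’s a minimal sketch - the modality names and readings are invented, and it shows nothing more than the shape of several sensory endpoints feeding one shared stream:

```python
import queue
import threading
import time

# Purely illustrative: three sensory "endpoints" feeding one shared
# stimulus queue, so no thread is wholly isolated from the others.
# Real sensor fusion is vastly harder; this only shows the shape.
stimuli = queue.Queue()

def endpoint(modality, readings):
    for r in readings:
        stimuli.put((time.monotonic(), modality, r))
        time.sleep(0.01)  # stand-in for a sensor's sampling interval

threads = [
    threading.Thread(target=endpoint, args=("audio", ["hum", "voice"])),
    threading.Thread(target=endpoint, args=("visual", ["red", "motion"])),
    threading.Thread(target=endpoint, args=("touch", ["warm"])),
]
for t in threads:
    t.start()
for t in threads:
    t.join()

# Drain the merged stream in arrival order: one inter-stimuli feed.
while not stimuli.empty():
    timestamp, modality, reading = stimuli.get()
    print(round(timestamp, 3), modality, reading)
```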
Perhaps one day I might take a crack at it - but it’s also a morally grey area that has quite a few caveats to it, so… uh… maybe.
Yeah, but we do not use it for that anywhere else. Everywhere else we use “artificial”, it refers to something that does not contain the original product, and it implies something lesser.
When talking about intelligence, we use “artificial” in a unique way, to describe something digital or created. And honestly, you had better hope emotion is never a part of that creation.
As for your definition that this is how humans think: sorry, but you do not know that. It is the very hypothesis I was saying you need to find a way to test.
As I say, we have lots of ideas and hypotheses on human and animal thought, but absolutely nothing that would move them into the realm of a theory, as of yet. We are not even sure how to test most of those hypotheses. All we do is measure neurons’ electrical and chemical transfer. We are a very, very long way from tying that to any process of original thought or generation of ideas.
As I say, I’d love it if you were proven correct. But at the moment we don’t even know how to prove you, or anyone else, wrong on this subject.