Literally just mainlining marketing material straight into whatever’s left of their rotting brains.

    • CannotSleep420@lemmygrad.ml · 25 points · 1 year ago

      One doesn’t need to assert the existence of an immaterial soul to point out that the mechanisms that lead to consciousness are different enough from those that make computers work that the former can’t just be reduced to an ultra-complex form of the latter.

        • CannotSleep420@lemmygrad.ml · 17 points · 1 year ago

          In what way does consciousness resemble an ultra-complex computer? Nobody has consciousness fully figured out, of course, but I would at least expect there to be some relevant parallel between computer hardware and brain hardware if this is the case.

          • drhead [he/him]@hexbear.net · 12 points · 1 year ago

            What stops me from doing the same thing that neurons do with a sufficiently sized hunk of silicon, assuming that some amount of abstraction is fine?

            If the answer is “nothing”, then that demonstrates the point. If you can build an artificial brain that does all of the things a brain does, then there is nothing special about our brains.

            • Egon [they/them]@hexbear.net · 10 points · 1 year ago

              But can you actually build an artificial brain with a hunk of silicon? We don’t know enough about brains or consciousness to do that, so the point is kinda moot.

            • CannotSleep420@lemmygrad.ml · 6 points · 1 year ago

              I probably should have worded my last reply differently, because modeling the human brain with Boolean circuits and Turing machines is mainly what I have an issue with. While I’m not particularly knowledgeable on the brain side of things, I can see the resemblance between neurons and logic gates. However, my contention is that the material constraints of how those processes are embodied are going to have a significant effect on how the system works (not to say that you were erasing this effect entirely).

              I want to say more on the topic, but now that my mind is on it I want to put some time and effort into explaining my thoughts in a post of their own. I’ll @ you in a reply if/when I make the post.
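
              As a rough illustration of the neuron/logic-gate resemblance mentioned here, a minimal sketch in Python of a McCulloch-Pitts style threshold unit, the textbook toy model. The weights and thresholds below are illustrative assumptions, not anything proposed in the thread.

                  # A threshold unit "fires" (returns 1) when the weighted sum of its
                  # inputs reaches a threshold; with suitable weights, the same unit
                  # reproduces ordinary logic gates.
                  def threshold_neuron(inputs, weights, threshold):
                      activation = sum(i * w for i, w in zip(inputs, weights))
                      return 1 if activation >= threshold else 0

                  AND = lambda a, b: threshold_neuron([a, b], [1, 1], threshold=2)
                  OR = lambda a, b: threshold_neuron([a, b], [1, 1], threshold=1)

                  assert AND(1, 1) == 1 and AND(1, 0) == 0
                  assert OR(0, 1) == 1 and OR(0, 0) == 0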

              • Saeculum [he/him, comrade/them]@hexbear.net · 3 points · 1 year ago

                However, my contention is that the material constraints of how those processes are embodied are going to have a significant effect on how the system works

                Sure, but that’s no basis to think that a group of logic gates could not eventually be made to emulate a neuron. The neuron has a finite number of things it can do because of the same material constraints, and while one would probably end up larger than the other, increasing the physical distances between the thinking parts, that would surely only limit the speed of an emulated thought rather than its substance?
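
                As a rough sketch of what emulating a neuron with ordinary computation could look like at a high level of abstraction, here is a discrete-time leaky integrate-and-fire model in Python, one common simplified approximation. All constants are illustrative assumptions, and real neurons are far more complicated.

                    # Approximate a neuron's input/output behaviour with finite,
                    # deterministic steps: leak, integrate input, fire, reset.
                    def simulate_lif(input_currents, leak=0.9, threshold=1.0):
                        potential = 0.0
                        spikes = []
                        for current in input_currents:
                            potential = potential * leak + current   # leak, then add input
                            if potential >= threshold:               # fire when threshold is hit
                                spikes.append(1)
                                potential = 0.0                      # reset after a spike
                            else:
                                spikes.append(0)
                        return spikes

                    print(simulate_lif([0.3, 0.3, 0.3, 0.0, 0.6, 0.6]))  # [0, 0, 0, 0, 1, 0]

                Running the same loop on slower hardware changes only how long the simulation takes, not which spike train comes out, which is the speed-versus-substance distinction drawn above.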

            • silent_water [she/her]@hexbear.net · 3 points · 1 year ago

              It still remains to be proved that consciousness can be emulated on a Turing machine. That’s a huge open problem. You can assume it’s true, but your results are contingent.

        • WideningGyro [any]@hexbear.net · 4 points · 1 year ago

          I zoned out on the consciousness debate around 2015, so forgive me if this stuff is now considered outdated, but as I recall those materialist theories of consciousness all run into the hard problem, right? I might be biased in one direction, but I feel like the fact that computational models can’t account for lived experience is a pretty good argument against them. Wouldn’t it just be more accurate to say that we’re missing a good theory of consciousness altogether?

    • VILenin [he/him]@hexbear.net (OP, mod) · 20 points · 1 year ago

      Nobody ever mentioned a “soul” in this conversation until you brought it up to use as an accusation.

      “Computers aren’t sentient” is not a religious belief no matter how hard you try to smear it as such.

        • VILenin [he/him]@hexbear.net (OP, mod) · 14 points · 1 year ago

          The claim is that “computers can be sentient”. That is a strong claim and requires equally strong evidence. I’ve found the arguments in support of it lackluster and reductionist for reasons I’ve outlined in other comments. In fact, I find that the idea that we get sentience if we just compute hard enough borders on a religious belief in extra-physical properties being bestowed upon physical objects once they pass a certain threshold.

          There are people who argue that everything is conscious, even rocks, because everything is ultimately a mechanical process. The base argument is the same, but I have a feeling that most people here would suddenly disagree with them for some reason. Is it “creationism” to find such a hypothesis absurd, or is it vulgar materialism to think it’s correct? You seem to take offense at being called “reductionist” despite engaging in a textbook case of reductionism.

          This doesn’t mean you’re wrong, or that the rock-consciousness people are wrong; it’s just an observation. Any meaningful debate about sentience right now is going to be philosophical. If you want to be scientific, the answer is “I don’t know”. I don’t pretend to equate philosophy with science.

            • VILenin [he/him]@hexbear.net (OP, mod) · 11 points · 1 year ago

              I think that AI will get better, but its “base” will remain the same. Going deeper to understand the mechanisms is different from just going “it’s a mechanism”, which I see a lot of people doing. I think computers can very easily replicate human behaviors and emulate emotions.

              Obviously creating something sentient is possible, since brains evolved. And if we don’t kill ourselves, I think it’s very possible that we’ll get there. But I think it will be very different from what we think of as a “computer”, and the only similarity they might share could be being electrically powered.

              At the end of the road we’ll just get to arguing about philosophical zombies and the discussion usually wraps up there.

              I’d be very happy if it turned out that I’m completely wrong.

      • Saeculum [he/him, comrade/them]@hexbear.net · 4 points · 1 year ago

        By that way of reasoning, the replicants aren’t people because they are characters written by the author, the same as any other.

        They are as much fiction as sentient machines are science fiction.

        • usernamesaredifficul [he/him]@hexbear.net · 14 points · 1 year ago

          Ok, sure. My point was that the authors aren’t making a point about the nature of machines informed by the limits of machines, and aren’t qualified to do so.

          Saying AI is people because of Data from Star Trek is like saying there are aliens because you saw a Vulcan on TV, in terms of relevance.

          • Saeculum [he/him, comrade/them]@hexbear.net · 3 points · 1 year ago

            That’s fair, though the idea that AI is people because of Data from Star Trek isn’t inherently absurd. If a machine existed that demonstrated all the same capabilities and external phenomena as Data in real life, I would want it treated as a person.

            The authors might be delusional about the capabilities of their machine in particular, but under different physical circumstances from what’s most likely happening here, they wouldn’t be wrong.

            • DamarcusArt@lemmygrad.ml · 3 points · 1 year ago

              Sorry to respond to this several-day-old comment, but I think there were quite a few episodes where Data’s personhood was directly called into question. It’s a tangential point, but I think it’s likely that even if we had a robotic Brent Spiner running around, people might still not be 100% convinced that it was truly sapient, and might consider it an incredibly complex Mechanical Turk-style trick. It really is hard to tell for sure, even if we did have a “living” AI to examine.