• Cyteseer@lemmy.world · ↑11 · 2 days ago

    Angela Collier just uploaded a video about how embarrassing the original article is. Frankly, the situation is ridiculous and speaks to the brain rot that sets in as some people become more reliant on these chatbots, regardless of how intelligent some of them may have seemed at the start.

  • Grimy@lemmy.world · ↑191 ↓1 · 3 days ago

    He lost his chat logs. I was going to mock him for not backing up his work, but I’m not sure I want to punch down on someone who considers his LLM chat logs research.

  • public_image_ltd@lemmy.world · ↑7 · 2 days ago

    This moron is even labelled as a “professor” at the University of Cologne. There goes my respect for the German educational system.

    • RamenDame@lemmy.world · ↑4 · 2 days ago

      I wonder if he could face consequences for being this sloppy. I mean, he is a Beamter (a German civil servant), a professor, and he has to lead by example. Here is the university’s science policy: he is responsible.

    • TheTechnician27@lemmy.world · ↑28 ↓1 · edited · 3 days ago

      As someone who knows a decent bit about machine learning, I can say Dr. Collier seems to know very little about how it works on a technical level, which makes for such a beautiful contrast when I’m watching her be fucking dead-on about its societal consequences. Seeing someone who isn’t familiar with the field but still knows what they’re talking about is so cathartic in a subject strewn with misconceptions from laypeople. Angela’s integrity is really admirable; you can tell she takes a lot of care not to overstep into areas she doesn’t understand.

      • AnarchistArtificer@slrpnk.net · ↑1 · 2 days ago

        Yeah, I agree. Sometimes I see people who are criticising AI in terms of its societal consequences, and whilst they’re decently on the mark with that, they say things that are just straight up wrong on the technical side. It makes me wince, because I worry that incorrect info may end up serving pro-AI discourse instead.

        Angela Collier is good at avoiding falling into this trap, and she does so by not pretending she knows more than what she does.

        • Nalivai@lemmy.world · ↑1 · edited · 2 days ago

          I see your point, and it’s a valid one. It’s just so exhausting that you need to be an expert in the thing you hate, the thing that’s obviously destroying society, in order to talk about its effects on society. It’s like that with gun nuts: if you don’t know the specific differences between an AR-15 and an M16A1, you’re not allowed to have an opinion on how easy it is for a child to get one and the societal problems stemming from that.

        • TheTechnician27@lemmy.world · ↑12 ↓2 · 3 days ago

          I always do. I write as a hobby, and I’m not stuck here, so I don’t see why I’d be here if I didn’t give enough of a shit to write what I want.

        • Kogasa@programming.dev · ↑2 ↓6 · 3 days ago

          At some point, we need to stop caring or stop interacting entirely. It’s so exhausting to have to question the authenticity of every interaction.

    • Flauschige_Lemmata@lemmy.world · ↑19 ↓1 · 3 days ago

      If he only used it as a tool, I’d still say it’s his work. It should of course be disclosed in the methodology section.

      But considering he lost two years’ worth of work just because his chats got wiped? That’s more than just a tool.

        • turmacar@lemmy.world · ↑8 · 2 days ago

          “Oops all the research was set on fire” is a tale as old as grad students.

          In the 2000s it was trusting everything in a multi-year study to a single thumb drive. In the 70s it was carrying the only copy of everything in a single briefcase. Presumably at some point someone was devastated that the rain came early and wiped out all the work on the sand table.

        • adb@lemmy.ml · ↑4 · 2 days ago

          Well, we are talking about the guy who pressed the delete-everything button and then was surprised that everything was deleted.

          • Nikelui@lemmy.world · ↑0 · 2 days ago

            Yes, but all that was deleted were the ChatGPT chats. Is that all the research produced in two years? Probably nothing of value was lost.

            • adb@lemmy.ml · ↑1 · edited · 2 days ago

              Probably not. It seems misleading to characterize what he lost as his research, and a blatant clickbaity lie to claim it was all of it.

              Bucher admitted he’d “lost” two years’ worth of “carefully structured academic work” — including grant applications, publication revisions, lectures, and exams

    • filcuk@lemmy.zip · ↑3 ↓17 · 3 days ago

      Is it the LLM’s research? Do guns shoot people?
      Researching other people’s work is still research. If they start claiming other people’s work as their own without having added any value, then it’s obviously stealing, but otherwise this argument makes no sense.

      • Nalivai@lemmy.world · ↑20 · 2 days ago

        When you type “dear robot girlfriend, do my work, make no mistakes” into a chatbot window, and then use the slop that the average-word-predicting machine shat out, you’re not doing research. In fact, you’re not doing anything besides frying your brain while frying the planet.
        Whatever that output is, the LLM generated it; you did nothing.

        • filcuk@lemmy.zip · ↑3 ↓11 · 2 days ago

          So if I just quickly search Wikipedia or leaf through a few books, is that research? How many prompts or hours until my unnamed activity becomes research? Is it a hard limit, or just based on how hateful you’re feeling that day?

          • Nalivai@lemmy.world · ↑9 ↓1 · 2 days ago

            If I go to a restaurant and order something, does that count as me cooking? How many times do I need to point the waiter at the menu and ask them to bring me something before I officially count as a professional cook? If I ask them to make it less salty and add cheese, does that count as the restaurant employing me as a chef, or only as a line cook?
            Just in case your chat “research” fried your brain completely and it needs to be spelled out: no, to be called a cook you need to cook the food. To do research you need to do research, not ask a word-prediction machine to do it.

          • Nalivai@lemmy.world · ↑3 · 2 days ago

            Despite what your clanker waifu told you, just saying the word “strawman” doesn’t actually constitute a proper argument.

            • MrWildBunnycat@lemmy.world · ↑1 · 2 days ago

              You just made an assumption about how the research was done; that’s your strawman. Then you attacked me and assumed something completely untrue; that’s ad hominem. You’ll be hard-pressed to find someone who hates Artificial Idiots more than me, but I also hate bad-faith arguments that reduce a person to a handful of stereotypes or a caricature. Instead of throwing insults, try engaging in constructive dialogue.

  • mech@feddit.org · ↑49 · 3 days ago

    ChatGPT’s chat history isn’t “carefully structured academic work”.
    He basically used the 0/0/0 backup strategy (as opposed to the usual 3-2-1 rule): not even one instance of his data saved anywhere.
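    For contrast, even the laziest possible version of keeping multiple copies is a few lines of shell. A minimal sketch (the export filename and backup directories here are made up for illustration, not his actual setup):

```shell
# Hypothetical example: keep an exported chat log in three places.
# In practice the second and third copies would live on an external
# drive and a remote host, not just sibling directories.
mkdir -p backup_local backup_offsite
printf 'two years of prompts\n' > chatgpt-export.json   # stand-in for the real export
cp chatgpt-export.json backup_local/chatgpt-export.json
cp chatgpt-export.json backup_offsite/chatgpt-export.json
# Verify all copies match before trusting them (cmp exits 0 on identical files).
cmp -s chatgpt-export.json backup_local/chatgpt-export.json &&
cmp -s chatgpt-export.json backup_offsite/chatgpt-export.json &&
echo "3 copies OK"
```

    Two `cp` commands and a sanity check; that is the entire bar he failed to clear.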

  • gustofwind@lemmy.world · ↑48 · edited · 3 days ago

    He disabled the feature because he “wanted to see whether I would still have access to all of the model’s functions if I did not provide OpenAI with my data.”

    Well, I guess you found out.

  • melsaskca@lemmy.ca · ↑8 · 2 days ago

    If it was really “Intelligence”, it would have saved a backup copy first. “AWTF?” is a more fitting name for it.