• 1 Post
  • 36 Comments
Joined 2 years ago
Cake day: January 14th, 2024

  • halfdane@lemmy.world (OP) in Fuck AI@lemmy.world, "phd-level reasoning" (edited, 11 days ago)

    No, I’m not complaining that ChatGPT is shit at reasoning - I’m demonstrating it.

    I’m complaining that literal trillions of dollars plus environmental resources are being poured into this fundamentally flawed technology, all while fucking up the job market for entry level applicants.





  • These models have to analyse and understand the meaning of a prompt rather than what is strictly said

    Well, it clearly fails at that, and that’s all I’m saying. I really don’t understand what you’re arguing here, so I’ll assume it must be my poor grasp of the language or the topic.

    That said, I salute you and wish you safe travels 👋



  • Ah thank you, now I see what you mean. And it seems like we’re mostly talking about the same thing here 😅

    To reiterate: unprecedented amounts of money and resources are being sunk into systems that are fundamentally flawed (among others by semantic drift), because their creators double down on their bad decisions (just scale up more) instead of admitting that LLMs can never achieve what they promise. So when you’re saying that LLMs are just fancy autocorrect, there’s absolutely no disagreement from me: it’s the point of this post.

    And yes, for an informed observer of the field this isn’t news - I just shared the result of an experiment because I was surprised how easy it was to replicate.


  • I mean, this is just one of half a dozen experiments I conducted (replicating just a few of the thousands that actual scientists do), but the point stands: what PhD (again, that was Sam Altman’s claim, not mine) would be thrown off by a web search?

    Unless the creators of LLMs admit that their systems won’t achieve AGI just by throwing more money at them, shitty claims like this will keep the field from making actual progress.



  • But these systems work on interrupting the user’s input

    I’m not entirely sure what you mean here, maybe because I’m not a native speaker. Would you mind phrasing that differently for me?

    That’s got nothing to do with “PhD” level thinking, whatever that’s supposed to mean.

    Oh, we’re absolutely in agreement here, and it’s not me that made the claim - it’s what Sam Altman said about the then-upcoming GPT-5 over the summer. He claimed that the model would be able to perform reasoning comparable to a PhD holder - something that clearly isn’t happening reliably, and that’s what this post bemoans.

    It’s just fancy autocorrect at this point.

    Yes, with an environmental and economic cost that’s unprecedented in the history of … well, ever. And that’s what this post bemoans.