• 0 Posts
  • 35 Comments
Joined 1 year ago
Cake day: March 9th, 2024


  • Sorry for the late reply - work is consuming everything :)

    I suspect that we are (like LLMs) mostly “sophisticated pattern recognition systems trained on vast amounts of data.”

    As for the claim that LLMs have “no true understanding”, I don’t think there is a definition of “true understanding” that would cleanly separate humans from LLMs. It seems clear that LLMs can extract the information contained in language and use it to answer questions and inform decisions (with adequately tooled agents). Acquiring and using information is what’s relevant, and that’s solved.

    Engaging with the real world is mostly a matter of tooling. Real-time learning and more comprehensive multi-modal architectures are just iterations on current systems.

    I think it’s quite relevant that the Turing Test has essentially been passed by machines. It’s our instinct to gatekeep intellect, moving the goalposts as they’re passed in order to affirm our relevance and worth, but LLMs have our intellectual essence, and will continue to improve rapidly while we stagnate.

    There is still progress to be made before we’re obsolete, but I think it will take only a few years, and after that it’s just a question of cost efficiency.

    Anyways, we’ll see! Thanks for the thoughtful reply


  • the principal hypothesis of the bitcoin experiment is that a central ledger and issuer are not actually necessary, and the experiment is still going strong

    central banks are a hell of a lot better than the hodgepodge that arose in the 1800s, but it’s not proven that they will outlast an adequately designed decentralized implementation (whether it’s bitcoin or something else)

    there are plenty of problems down the road for bitcoin, but there are arguably more for central banks. can a centralized currency survive the failure of its backing empire?


  • I use various models on a daily basis (as a software/infrastructure developer), and can say that the reason companies are able to sell AI is that it’s genuinely useful.

    Like any tool, you have to work with its strengths and weaknesses, and it’s very much a matter of “shit in, shit out.”

    For example, it can easily get confused by complicated requests, so each request must be narrowly focused. Breaking large problems down into smaller ones is a normal part of problem solving, so this doesn’t detract from its utility.

    Also, it sometimes just makes shit up, so it’s absolutely necessary to thoroughly test everything it outputs. Test-driven development has been around for a long time, so that’s not really a problem either (see the sketch below).

    It’s more of a book-smart intern assistant than a professional software engineer, but used in this way it’s a great productivity booster.
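
    To make the test-first point concrete, here is a minimal sketch of that workflow in Python: write the tests before asking the model for an implementation, and only accept generated code once it passes. The slugify function and its tests are hypothetical examples for illustration, not from any actual project.

    ```python
    # test_slugify.py -- tests written *before* requesting an implementation.
    import re


    def slugify(title: str) -> str:
        """Turn an arbitrary title into a lowercase, hyphen-separated slug."""
        # In the workflow described above, this body would come from the model
        # and would only be kept if the tests below pass.
        slug = re.sub(r"[^a-z0-9]+", "-", title.lower())
        return slug.strip("-")


    def test_basic_title():
        assert slugify("Hello, World!") == "hello-world"


    def test_collapses_whitespace_and_punctuation():
        assert slugify("  Central   Banks & Bitcoin  ") == "central-banks-bitcoin"


    def test_empty_input():
        assert slugify("") == ""


    if __name__ == "__main__":
        # Run with `pytest test_slugify.py`, or execute directly as a quick check.
        test_basic_title()
        test_collapses_whitespace_and_punctuation()
        test_empty_input()
        print("all checks passed")
    ```

    The point of the sketch is the ordering: the tests encode the narrow, focused request, and the generated implementation is disposable until it satisfies them.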