• jasory@programming.dev · 17 days ago

    “Making frequency analysis ineffective”

    Oh boy, let’s hope nobody uses it for large plaintexts. If x maps to k1, k2, … then one simply needs enough instances of x to reconstruct the key. At the very minimum, multiple symbols must map to the same strings to achieve ambiguity.
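
    For illustration, a minimal sketch of that attack against a made-up codebook (the word → code-set table here is purely hypothetical, not from the project):

        import random
        from collections import defaultdict

        # Hypothetical codebook: each word maps to a fixed set of code numbers.
        CODEBOOK = {"attack": [101, 202, 303], "at": [14, 58], "dawn": [7, 99]}

        def encode(words, book, rng):
            return [rng.choice(book[w]) for w in words]

        rng = random.Random(1)
        msg = "attack at dawn".split()

        # With enough encodings of the same plaintext, the union of codes
        # observed at each position reconstructs every word's homophone set.
        recovered = defaultdict(set)
        for _ in range(100):
            for word, code in zip(msg, encode(msg, CODEBOOK, rng)):
                recovered[word].add(code)

        print(dict(recovered))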

    The cryptographic claims seem laughable.

    • hereforawhile@programming.dev · 17 days ago

      At the very minimum, multiple symbols must map to the same strings to achieve ambiguity.

      It does this.

      The only conventional cryptography is the shuffle function, which takes entropy from the OS.
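
      For reference, a minimal sketch of what an OS-entropy shuffle looks like in Python (an assumption about the mechanism, not the project’s actual code):

          import random

          # Sketch of a shuffle driven by OS entropy: SystemRandom reads from
          # the OS CSPRNG via os.urandom, so the resulting permutation cannot
          # be reproduced from a user-supplied seed.
          def shuffle_codes(codes):
              rng = random.SystemRandom()
              shuffled = list(codes)
              rng.shuffle(shuffled)
              return shuffled

          print(shuffle_codes([101, 202, 303, 14, 58, 7, 99]))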

      • jasory@programming.dev · 16 days ago

        What motivated you to write this program?

        Your choice of a “codebook” is an immediate red flag and reeks of pop-crypto. There is a reason this approach was abandoned 100+ years ago: even properly implemented, codebooks have severe shortcomings.

        • hereforawhile@programming.dev · 16 days ago

          What motivated you to write this program?

          Just for fun, basically.

          I’ve had the idea for a while, but the problem was always the huge amount of grunt work to create the initial database. With an LLM, I basically mined all the unique entries and common phrases.

          I’m not claiming it’s the best or anything at all. But by codebook standards… I tried to implement all the things that would make a good codebook:

          • Ability to say the same thing over and over while making it look different, to mitigate frequency analysis
          • Easy, secure shuffling
          • Customizable
          • Assisted composing
          • Exportable
          • Long-term rotating key schema
          • Conclusive, established database
          • Portable
          • jasory@programming.dev · 13 days ago

            Why did you use an LLM for the frequency tables? “Most common words” is very useful data, and as such there are many existing compilations of it, used by things like spell checkers. The Linux system dictionaries are one example.
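
            For example, such a word list can be loaded in a couple of lines (the path varies by distro and is usually provided by a “words” package):

                # Seeding a codebook from an existing system word list instead
                # of an LLM. The path is distro-dependent.
                with open("/usr/share/dict/words") as f:
                    words = [w.strip() for w in f if w.strip().isalpha()]

                print(len(words), words[:5])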

            The fact that you completely ignore that simply using a larger RSA key would be both faster and more secure than your approach doesn’t inspire confidence either.

            (It’s also in Python, which is basically unusable.)

            • hereforawhile@programming.dev · 11 days ago

              I used an LLM to create my database because it is not only a collection of words but also of common phrases. Plus, not only can the LLM format the database the way I want, so it’s interpretable by the program, it can also build the database with the appropriate number of duplicates included.
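
              As a purely hypothetical sketch of that idea (the table and column names are illustrative, not from the project), a more common phrase simply gets more duplicate rows, i.e. more homophones:

                  import sqlite3
                  import secrets

                  # Hypothetical schema: one row per (phrase, code) pair, so a
                  # phrase with more rows has more homophones to rotate through.
                  con = sqlite3.connect(":memory:")
                  con.execute("CREATE TABLE codebook (phrase TEXT, code INTEGER)")

                  for phrase, homophones in [("attack at dawn", 4), ("hello", 2)]:
                      for _ in range(homophones):
                          con.execute("INSERT INTO codebook VALUES (?, ?)",
                                      (phrase, secrets.randbelow(10**6)))

                  print(con.execute("SELECT * FROM codebook").fetchall())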

              The fact that you completely ignore that simply using a larger RSA key would be both faster and more secure than your approach doesn’t inspire confidence either.

              The goal was to not use any modern crypto… Codebooks have been used for a very long time and are secure with proper key management.

              This is an attempt at a modern codebook. It tackles nearly all of the shortcomings of previous iterations.

              (It’s also in Python, which is basically unusable.)

              Haha.

              • jasory@programming.dev · 9 days ago

                “but common phrases”. These also exist; they are used in grammar checkers. They also appear in texts for English learners.

                Datasets like these are very easy to come by. In fact, you could write a program that sets up a Markov matrix of word pairs for any input text and use it to determine common phrases. That is the standard sloppy approach; a cleverer one would restrict the pairing to grammatically valid combinations.
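
                A minimal sketch of that pair-counting approach (normalizing the counts per first word would give the Markov transition matrix):

                    from collections import Counter

                    # Count adjacent word pairs in any input text and rank
                    # them; the most frequent pairs are candidate "common
                    # phrases". A cleverer version would keep only
                    # grammatically valid pairs.
                    def common_pairs(text, top=5):
                        words = text.lower().split()
                        return Counter(zip(words, words[1:])).most_common(top)

                    sample = "the quick brown fox jumps over the lazy dog and the quick brown fox sleeps"
                    print(common_pairs(sample))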

                • hereforawhile@programming.dev · 8 days ago

                  I mean, what’s the real point you are arguing? I’m happy to include other datasets in the master database. A bigger database is no problem for this schema or for SQLite’s limits.

                  The LLM produced all these things with one or two prompts, and they are all grammatically valid… It’s just what I happened to source the initial dataset from.