

Thanks! Your answer led me to this, which kind of explains it:
https://en.wikipedia.org/wiki/Word_(computer_architecture)
Character size was in the past (pre-variable-sized character encoding) one of the influences on unit of address resolution and the choice of word size. Before the mid-1960s, characters were most often stored in six bits; this allowed no more than 64 characters, so the alphabet was limited to upper case. Since it is efficient in time and space to have the word size be a multiple of the character size, word sizes in this period were usually multiples of 6 bits (in binary machines). A common choice then was the 36-bit word, which is also a good size for the numeric properties of a floating point format.
After the introduction of the IBM System/360 design, which uses eight-bit characters and supports lower-case letters, the standard size of a character (or more accurately, a byte) becomes eight bits. Word sizes thereafter are naturally multiples of eight bits, with 16, 32, and 64 bits being commonly used.
So it has to do with character size: six bits earlier, and one byte/eight bits today.
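The arithmetic in the excerpt can be made concrete with a tiny sketch. This is just an illustration of the "word size as a multiple of character size" point; the specific pairings (6-bit characters in a 36-bit word, 8-bit bytes in 32- and 64-bit words) are the historical examples from the Wikipedia passage:

```python
def chars_per_word(word_bits, char_bits):
    """How many fixed-size characters fit exactly in one word."""
    assert word_bits % char_bits == 0, "word is not a whole multiple of char size"
    return word_bits // char_bits

# Pre-mid-1960s style: 6-bit characters in a 36-bit word
print(chars_per_word(36, 6))   # 6 characters per word, no wasted bits

# Post-System/360 style: 8-bit bytes in 32- and 64-bit words
print(chars_per_word(32, 8))   # 4 bytes per word
print(chars_per_word(64, 8))   # 8 bytes per word
```

The point is simply that when the word is an exact multiple of the character, packed text wastes no bits and character boundaries never straddle a word boundary.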


Thanks! I have no idea what endianness is, except for hearing “big endian” in some CS-related presentation a while back… I’ll read up on it!
As for my questions and your answer, would it then be correct to say that it’s about scalability? That one byte being eight bits scales efficiently in binary?
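One common way to see the "scales efficiently" point, sketched below as an assumption rather than the whole story: because 8 is a power of two (2³), converting a bit index into a byte address is a cheap shift-and-mask instead of a division, and endianness (to preview the term) is just the order in which a word's bytes are laid out in memory:

```python
def bit_to_byte_addr(bit_index):
    """Split a bit index into (byte offset, bit within byte)."""
    byte = bit_index >> 3        # divide by 8 via shift, since 8 == 2**3
    bit_in_byte = bit_index & 7  # remainder via mask
    return byte, bit_in_byte

print(bit_to_byte_addr(20))  # (2, 4): bit 20 lives in byte 2, bit 4

# Endianness preview: the same 32-bit value, stored as bytes in two orders
value = 0x12345678
print(value.to_bytes(4, "big"))     # most significant byte (0x12) first
print(value.to_bytes(4, "little"))  # least significant byte (0x78) first
```

With a 6-bit character none of this divides evenly in binary, which is part of why power-of-two sizes won out once hardware settled on the 8-bit byte.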