Need I remind you that 6-bit bytes are artefacts of the past?
Use only natural powers of two:
1, 2, 4, 8, 16, 32, 64, 128...
There are countless good reasons to stick to that, mainly because if you don't, you end up with all kinds of weird stuff.
It's OK to reserve a 2^n space and not use all the bits, though.
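The two rules above can be sketched in a few lines of Python. This is a minimal illustration with helper names of my own choosing; the bit-twiddling tricks are standard:

```python
def is_power_of_two(n: int) -> bool:
    """A power of two has exactly one bit set, so n & (n-1) clears it to zero."""
    return n > 0 and (n & (n - 1)) == 0

def next_power_of_two(n: int) -> int:
    """Smallest 2**k >= n: the size to reserve even if some bits stay unused."""
    return 1 << (n - 1).bit_length() if n > 1 else 1

print([n for n in (6, 8, 12, 16, 18) if not is_power_of_two(n)])  # the sizes to avoid
print(next_power_of_two(18))  # an 18-bit quantity gets a 32-bit reservation
```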
Discussions
why not trinary? why binary?
It's a great question! And remember that this page/project only offers "recommendations" and "explanations of my own projects and choices".
@SHAOS is the ternary specialist IMHO :-)
From the little I have read about ternary, the argument is that the natural logarithm base e = 2.718... is closer to 3 than to 2. There is a claim that it would "naturally follow" that a "trit" is more efficient at representing information, and indeed a trit carries log₂3 ≈ 1.58 bits, a bonus of almost 60%. But then, why not another base > 1?
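To put numbers on that argument, here is a small Python sketch (the loop and names are my own) computing the information per symbol and the classic "radix economy" cost model, where representing N takes about log_r(N) digits and each digit costs about r units of hardware:

```python
import math

# Information carried by one symbol of base r, in bits:
for r in (2, 3, 4):
    print(r, math.log2(r))  # a trit carries log2(3) = 1.58... bits

# Radix-economy cost: cost is proportional to r / ln(r), minimised near e = 2.718...
for r in (2, 3, 4):
    print(r, r / math.log(r))  # base 3 scores slightly better than bases 2 and 4
```

Base 3 does win this abstract metric, but only by a few percent, which is the point of the physical counter-arguments below.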
The very first question is what you represent with your symbols.
The mathematical argument also collapses in light of the physical constraints and available components. Relays provide a lot of flexibility, and diodes expand it even more, for example when you work with +/off/− signalling (tied to +, tied to 0V, or floating), but we all know that relays are not the future.
The "future" is quantum and if you work with spins for example, it's clearly binary. There is no known way for now to perform computations with gluons or chromodynamics.
The present is the transistor, and it is a tripole: the input shares a reference point with the output. This restricts the range of topologies considerably compared to the old relay, but its size and speed are still unmatched, so we have to deal with it.
Doing ternary with transistors is obviously complicated, so there is no point in using ternary for high-speed or high-density logic. One can argue that Flash storage cells use 2, 3, 4 or more levels, but sensing those levels is a slow and delicate process affected by countless parameters: temperature, thermal noise, electrical interference from neighbouring switching signals... The more levels you have, the more "volatile" the data become, which makes the read and write circuits more expensive and slower.
Also consider that punched tape storage works as "hole/no hole". What would a third state be? An anti-hole?
For practical reasons, I use binary, like everybody else, but nothing keeps you from researching further in this niche domain, because the unknown is, by definition, where there is the most to discover and develop.
https://en.wikipedia.org/wiki/Ternary_computer has more references today than some years ago and I suppose you have explored it already at length :-)
If I extend the page's advice to ternary, it becomes "use powers of 3", so you would maybe end up with 9-trit instructions, while numbers & addresses would be 27-trit (ranging from 0 to about 7625 billion, which is more than 2³²).
Compared to binary, you have less choice for convenient sizes because the exponential rises more steeply.
2 => 4, 16, 256, 65K, 4G
3 => 9, 81, 6561, 43M, 1E15...
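The two sequences above can be generated directly: an n-digit word in base b represents b**n values, and each "convenient" step doubles the digit count, squaring the previous range. A minimal sketch (function name is my own):

```python
def value_counts(base: int, widths) -> dict:
    """How many values an n-digit word in the given base can represent."""
    return {w: base ** w for w in widths}

print(value_counts(2, (2, 4, 8, 16, 32)))  # 4, 16, 256, 65536, 4294967296
print(value_counts(3, (2, 4, 8, 16, 32)))  # 9, 81, 6561, 43046721, ~1.85e15
print(3 ** 27, 2 ** 32)  # a 27-trit word: 7625597484987 values, more than 2**32
```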
Less choice also means less flexibility to dimension data with the fewest symbols or parts possible. Which word sizes would be the most convenient? So my mathematical counter-argument is that binary fits data more tightly: despite ternary's claimed higher density, a given word also leaves more symbols unused (assuming that words are powers of the base and we represent natural numbers with a linear spectrum).
Anyway, this is still an open question, even though binary has the most practical advantages. Binary is also much better known, so more research in ternary could tip the scale one day.
Yann, could you please explain why?
What is wrong, in your opinion, with a 5- or 6-bit CPU / computer? Or even with a 12-bit computer?
A discrete (transistorized) computer with 6-bit words would be much easier to build than an 8-bit one.
Early computers used various sizes; for example, CDC and PDP machines used 6, 12, 15, 18, 30, 60 bits per word. They had "fun times" when trying to talk to other families.
The most recent architectures with non-power-of-two sizes I have encountered are the ADSP21xx, which mix 16-bit data (with some extensions for the MAC) and 24-bit instructions. The ADSP21k (SHARC) extended this to 32-bit data (40 bits internally for floating point) and 48 bits per instruction. Communication between these spaces was somewhat awkward.
12-bit and 18-bit CPUs have a certain charm, but they are inherently limited to word-wide processing. They can be good for certain applications but are difficult to use outside of them.
One example of things not falling into place on an 18-bit computer: the barrel shifter. You need a 5-bit operand to encode a shift amount between 0 and 17, which leaves 14 unused codes.
With a 12-bit computer, you need only 4 bits, but 4 codes are still unused.
Using powers of 2 makes it all work naturally :-)
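The shift-field arithmetic above is easy to check: for a w-bit word, shift amounts span 0..w-1, so the field needs ceil(log2(w)) bits and encodes 2**bits codes. A minimal sketch (function name is my own):

```python
def shift_field(w: int):
    """Return (field width in bits, unused codes) for a w-bit barrel shifter."""
    bits = (w - 1).bit_length()        # integer ceil(log2(w)) for w >= 2
    return bits, (1 << bits) - w       # power-of-two words waste zero codes

for w in (12, 16, 18, 32):
    print(w, shift_field(w))  # 18 -> (5, 14), 12 -> (4, 4), 16 and 32 -> 0 unused
```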