If you have ever used a segmented programming model, you'll understand that it sucks. You have to move things around all the time, and pointer calculations and comparisons stink.
Or if you need extension bits for addresses: they make life miserable. They look so simple and harmless at first... but they force you to keep track of auxiliary data all the time!
Banked memory? That's terrible because you have to partition your software, and if your platform evolves or the granularity changes, you're forced to restructure your code accordingly. Who still uses banked memory extensions today, apart from Microchip's PICs?
So if you have an N-bit processor, then N bits should be your pointer width.
Are you making a 16-bit processor? Then be happy with 64 KB.
If you need more, then please avoid all the kludges like segments, bank extensions, etc.
Today, it's far simpler to choose the right CPU with the right pointer size from the beginning.
Discussions
For a custom CPU made out of small-scale ICs or discrete components, I find this rule tricky. 16 bits with a 64 KB address space seems too small to me, but 32-bit registers are too large: it's hard to imagine using 4 GB of address space, and doubling the number of flip-flops and the ALU width is painful in terms of size.
20 or 24 bits are attractive, but they violate your earlier power-of-two rule (which I also agree with).
24 bits is three bytes and might be an interesting compromise?
The MC68000 used 24 address bits, but with room in reserve for more :-)
The 8086 is very well known, but I think the 65816 also has an address extension. As soon as you need more than one register to handle addresses, things get "complicated", and upward compatibility forces you into extension mechanisms when a simple width expansion would have been enough.
Addresses must be easy to store, compute, and compare; you must be able to perform basic arithmetic on them (array indexing, bounds checking...), and when you need more than one register, you increase both code size and execution time.
My approach is to consider the application and domain of use, and select the register width accordingly. Some architectures are quite scalable, like the YASEP, where 16-bit code can run on a 32-bit core with few adaptations. Usually, the most significant bits of very large addresses can be ignored in hardware, so a 32-bit architecture can implement only 24 address bits (like the MC68000, for example).
So now the question is: what do you want to do with your system? :-)
I have always been a strong believer in this. I learned assembly on the 6809, and when I later had to use Intel 8086 assembly I was immediately annoyed by the whole paged-memory deal. When, even later, I looked into 68000 assembly, it was so nice compared to the Intel. I've always thought that if mainstream computing had gone not to the PC but to a 680x0-based machine, computers would have advanced so much more over the years.
I have the same feeling :-D I have followed a similar CPU path.
RISC processors finally restored sanity, and I can't imagine programming a 6809 anymore... but the hard lessons can sometimes be easy to forget. Convenience can be a sweet trap, particularly when you start your design; a bit later and it's too late.