By whygee on Sunday 14 July 2013, 17:24 - Architecture
The YASEP is my playground so I can experiment freely with computer science and architecture. It lets me consider the whole software and hardware stack, not just some pieces here and there.
A recent conversation about the IPC system (Inter-Process Call) reminded me of a feature I once considered. It is quite extreme, but it opens interesting possibilities: limit code addresses in YASEP32 to 16 bits. That makes 32K half-words, or about 20K instructions, per thread (instructions occupy one or two 16-bit half-words). Of course, there would be no absolute limit on the number of threads.
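To put rough numbers on this (the ~1.6 half-words per instruction is an assumed average, since instructions take one or two 16-bit half-words):

    2^16 bytes = 65536 bytes = 32768 half-words of 16 bits
    32768 half-words / ~1.6 half-words per instruction ≈ 20000 instructions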
But why cripple the architecture?
Because the YASEP is meant to be an embedded CPU, running a microkernel OS, and here is why it is possible and desirable.
- The imm16 field is available to most opcodes and it provides the best code density, so a full code address fits directly in the immediate field of a single instruction.
- Embedded systems and microcontrollers usually run small applications. There is little chance that a single thread, serving a single purpose, exhausts 20K instructions. Once you reach 10K instructions, you have already included libraries, system-management code and, of course, tons of data, and all of these can (and should) be kept separate.
- Microkernels spread, divide and separate system functions into "servers". My idea would be to push this approach and mentality down to the user-software level. And there is no way to keep software developers from writing bloatware without "some sacrifices".
- GPUs already have such an approach/organisation (small chunks of code for discrete functions), for both hardware and software reasons: "computational kernels" are quite short.
- In the "microkernel" spirit, shared libraries become a special type of server. In POSIX systems, the flat address space of each thread maps to each shared library, and memory paging flips the cards in the background. A huge pointer (and big arithmetics) is necessary. In an early YASEP32 implementing multiple threads, shared libraries become possible without adding memory paging. A bit like a hardened uClinux. Code (and the return stack) would be a little bit more compact (no 32-bit constants for code addresses) and reaching external resources would just require knowing the "extension code", the ID of the thread that provides the feature in a "black box".
- Such an approach is possible because "IPC is cheap" now. In the "flat linear space" systems used by POSIX, the burden of protection falls on paged memory (the TLB) and the fault handlers, which can take hundreds of cycles to switch to a different task... But the YASEP's IPC is only marginally more complex than a CALL.
- Only code would have a limited address range; data pointers are not affected. I have programmed and used MS-DOS systems in 16-bit environments, and the constant problem was how to access data, not code. Of course some applications need more than 20K instructions, but they need to be split; the PCs of that era had "overlays" and other kludgy mechanisms that I don't want to revive.
- Modularity. Design software so its components are interchangeable, so a bug here has less chance of affecting code there. Or hot-swap a module/server/library while the system is running (see the second sketch after this list). Microkernels can do it, why not applications?
- Safety is increased by the extra degree of separation between any "sufficiently large chunks of bugs^Wcode", which prevents or reduces both accidental and malicious malfunction.
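Here is a first, minimal sketch of what the "shared library as server" idea could look like from the client side, written as C pseudo-ABI. Everything in it (ipc_call(), the thread IDs, the entry offsets) is made up for illustration and is not the YASEP ABI; the point is only that the caller needs a server ID and a 16-bit entry offset, never a 32-bit code pointer.

    /* Hypothetical client-side sketch, NOT the actual YASEP ABI: names,
     * types and the ipc_call() primitive are invented for illustration.
     * A shared library is just another thread ("server"), and calling
     * into it is an IPC that costs little more than a CALL.
     */
    #include <stdint.h>

    typedef uint32_t thread_id_t;   /* the "extension code": which server */
    typedef uint16_t entry16_t;     /* code addresses fit in 16 bits      */

    /* Stand-in for the IPC mechanism: transfer control to `entry` inside
     * the code space of thread `server`, pass one argument, get a result. */
    extern uint32_t ipc_call(thread_id_t server, entry16_t entry, uint32_t arg);

    /* Illustrative bindings; in reality they would come from the system. */
    #define MATH_SERVER  ((thread_id_t)7)
    #define MATH_ISQRT   ((entry16_t)0x0120)

    uint32_t isqrt(uint32_t x)
    {
        /* No page tables, no 32-bit code constant at the call site:
         * the 16-bit entry offset is small enough for an imm16 field. */
        return ipc_call(MATH_SERVER, MATH_ISQRT, x);
    }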
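And a second sketch, for the hot-swap point: if clients resolve a service to a thread ID at call time instead of holding code addresses, replacing a running server boils down to updating one table entry. Again, every name and type below is illustrative, not part of the YASEP design.

    /* Hypothetical binding table owned by the kernel/supervisor:
     * callers never keep a code address of the library, only a slot,
     * so a server can be swapped while the system keeps running.
     */
    #include <stdint.h>

    typedef uint32_t thread_id_t;

    enum service { SVC_LOG, SVC_MATH, SVC_STORAGE, SVC_COUNT };

    /* one binding per service */
    static volatile thread_id_t service_binding[SVC_COUNT];

    thread_id_t service_lookup(enum service s)
    {
        return service_binding[s];       /* resolved at call time, not link time */
    }

    void service_rebind(enum service s, thread_id_t new_server)
    {
        service_binding[s] = new_server; /* hot-swap: the next call goes to the new server */
    }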
But for this approach to be realistic, cheap IPC is not enough. Sharing data, fast and safely, between "servers" is another key!
TBD (to be designed)
20200411:
This is something that must be discussed today at #NPM - New Programming Model, and the number of instructions is increased a bit with instruction pointers that address 16 bits...