-
Enhanced Split Displacement
05/31/2024 at 07:34 • 0 comments (obsolete)
Thanks to the comments on the FB group, there are updates to the mechanism!
Duane said:
Yann's proposal is to replace a branch instruction's plain N-bit PC-relative address field with two fields: an M-bit signed offset added to the program counter's middle bits, and
an L-bit page offset that replaces the lowest L bits of the PC. Various values of M will work. His example was for M=2.
The page-offset address mode of the 6502 microprocessor is the M=0 case. A short branch could go anywhere within the current page being executed, but only in that page. Going anywhere else involved forming the target address in another way, such as indirect addressing through a pointer cell in page zero. PDP-8 had the same M=0 address mode.
If the branch op and the target op lie in different pages, the short branch cannot be used, even if their actual distance in words is small. How did users and tools cope with this? I think, by restricting such routines to lie entirely within a single page. If a routine crossed a page boundary, the program was rebuilt with a filler before that routine so that it began on the next boundary. The wasted space could maybe be partially filled by shuffling in smaller routines from elsewhere.
If L=0, you get conventional PC-relative addressing. This is easy for writers and tools to use. You can tell whether a given displacement will fit into the instruction at assembly time, just by counting how many instruction words lie between the branch and target.
Yann's example is with M=2 and L>0. The short branch will work if the interval between branch and target spans at most 2 page boundaries. If the programmer and tools know nothing about how code chunks get positioned at run time, they must use the short branch format conservatively: only for shorter distances that will work regardless of their actual offsets within a page.
However, the tools could arrange that longer routines are moved up to the next page boundary, like in the 6502 and PDP-8 cases. With that help, the tools could allow somewhat longer reaches between branch and target. This does not help for routines larger than 3 pages.
By picking M bigger than 2, the cases benefitting from page fillers get larger. But this means shortening L, and losing some of the hoped-for benefits for carry propagation delays or cache indexing.
So this is a great summary of the last log.
First new trick:
Now let me introduce K: the field holding the index inside the cache line. For 32-bit instructions it is usually 2 or 3 bits, for 128-bit and 256-bit wide lines respectively. This shortens the L field but does not change the range, because the two fields are simply concatenated.
But if M is +1 or -1, this can be qualified as an "inter-page" jump, with a higher risk of a cache miss. And compilers tend to align the targets of such branches (or calls) to the beginning of the cache line, to reduce the penalty of having to fetch a second line immediately. Usually, branch targets get aligned to the first half of the line, so a 0 can be added between L and K. This doubles the range of the M field, so inter-page jumps get boosted to +2 and -2 pages.
The cost is a whole line of MUX...
Second trick:
The other trick (which is complementary) is to use M=2 to indicate:
- 00: current page
- 01: +1 page
- 10: -1 page
- 11: custom/indirect page
This adds another register to the architecture, which might remain unused... Duane was referring to the 6502 and PDP-8 "zero page" scheme, but the beginning of the positive range of instruction addresses might already be in use.
This creates a symmetry between the forward and backward directions and allows easy full-range jumps.
So far I also keep L+K=12: with K=3, that leaves L=9, or 512 lines of L1 Icache to address directly. This should be large enough while still being fast.
Result:
Jump/branch/call offsets can then reach 4096 arbitrary instructions in the current page, or 8192 locations in either the next or the previous page, as well as in the extended (indirect) page.
MM=00: current page  => addr = ...uuuuuLLLLLLLLLKKK
MM=01: next page     => addr = PC+1|2 LLLLLLLLLK0KK
MM=10: prev page     => addr = PC-1|2 LLLLLLLLLK0KK
MM=11: indirect page => addr = indR LLLLLLLLLK0KK
It's now more complex, for sure. We're far from the canonical MIPS structure, but it's somewhat efficient without being too cumbersome.
The L field is shifted left when M!=0: that's an OR2 gate, and a fanout of only 8, which is still somewhat tolerable. It's still less effort than an adder!
Now, I guess the indirect page register is going to be messy to design properly...
Update:
The MUX to shift the address is a nuisance, but there is one simple trick to avoid it: shuffle the bits a little to reduce the decoding effort! Here is the new version:
MM=00: current page  => addr = ...uuuuuLLLLLLLLLKKK
MM=01: next page     => addr = PC+1|2 KLLLLLLLLL0KK
MM=10: prev page     => addr = PC-1|2 KLLLLLLLLL0KK
MM=11: indirect page => addr = indR KLLLLLLLLL0KK
So the MSB of K gets moved above the MSB of L and replaced by 0.
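Here is a minimal C model of how I read this encoding, with addresses counted in instructions, L=9 and K=3. The packing of the fields and the exact base pages for the "PC+1|2" / "PC-1|2" windows are my interpretation, and all names are illustrative:

#include <stdint.h>

/* Branch target formation for the updated layout: for MM!=00, the MSB
   of K moves above L and a constant 0 takes its old place, so the L
   field never shifts. Addresses are instruction indices; pages hold
   4096 instructions. */
uint32_t branch_target(uint32_t pc, unsigned mm, unsigned l, unsigned k,
                       uint32_t ind_r)          /* indirect page register */
{
    uint32_t page  = pc >> 12;                  /* current 4K-instruction page */
    uint32_t off13 = ((uint32_t)(k >> 2) << 12) /* K's MSB above L's MSB    */
                   | ((uint32_t)l << 3)         /* L stays in place         */
                   | (k & 3u);                  /* bit 2 is the constant 0  */

    switch (mm) {
    case 0:  return (page << 12) | ((uint32_t)l << 3) | k; /* plain concat  */
    case 1:  return ((page + 1) << 12) + off13; /* reaches pages +1 and +2  */
    case 2:  return ((page - 2) << 12) + off13; /* reaches pages -2 and -1  */
    default: return (ind_r << 13) | off13;      /* 8K-aligned indirect page */
    }
}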
The new issue is that there are now two granularities for the pages: 4K instructions for the current one, and doubled pages that are not aligned, so the PC's MSB incrementer is less straightforward.
Is it worth all the effort?
-
"Split displacement"
05/29/2024 at 00:46 • 0 comments (obsolete)
Let's break even more habits in processor/ISA design!
It started with a discussion on the Minimalist Computing Facebook group. Have a read. Duane Sand as always is very helpful, and others have contributed interesting data points. That group is gold!
In the end it appears that a sweet spot exists for displacement (instruction address relative to the current PC) around 14 bits:
- 12 bits of direct/immediate address bits for the LSB
- the remaining 2 MSB are sign-extended to provide actual relative pointers.
With 11 bits of direct address field, it is possible to address an 8KB instruction cache; 12 bits make it 16KB. More might not be much more useful.
The 2 extra bits are required to increment or decrement the PC when needed. This could even be 3 bits... But the aim is to have as many "direct" bits as possible, copied verbatim to the Icache address decoder, without bloating the instruction too much.
It becomes amazing when considering a branch that is not in a Branch Target Buffer.
- Cycle 1: the instruction register's direct field is brought to the Icache line decoder and a read cycle is initiated. In parallel, the MSB are sign-extended and brought to a simplified, shorter adder.
- Cycle 2: the (partially) computed PC MSB are brought to, and compared with, the tags that were read during cycle 1. A cache miss can already be detected at this point.
The nice thing is that there is no need to wait for a complete address to be computed before reading the cache, which only needs about 8 address bits (plus 3 MSB for word selection).
The other nice thing is that the adder would be only 20 bits wide in a 32-bit machine with 12 bits of direct address (32-12=20). So the adder should be faster and smaller than in a typical CPU. This counts because, as I said in the comments on Facebook, "At 5GHz, everything is slow".
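A small C sketch of this two-cycle flow, assuming a 32-bit PC, a 12-bit direct field, a 2-bit signed MSB field and an Icache of 512 lines of 8 instructions; the structure and names are illustrative, not the actual design:

#include <stdint.h>

typedef struct { uint32_t tag; uint32_t insn[8]; } iline_t;

/* Returns the full target address; the tag comparison models the
   early miss detection of cycle 2. */
uint32_t probe_target(uint32_t pc, uint32_t direct12, int32_t disp2,
                      const iline_t cache[512])
{
    /* Cycle 1: the direct field drives the line decoder immediately,
       while only the upper 20 bits go through the short adder. */
    const iline_t *line = &cache[direct12 >> 3];       /* 9-bit line index */
    uint32_t upper20 = ((pc >> 12) + (uint32_t)disp2) & 0xFFFFFu;

    /* Cycle 2: compare the computed upper bits with the tag read above. */
    int miss = (line->tag != upper20); /* miss is known one cycle early  */
    (void)miss;                        /* a refill would start here      */

    return (upper20 << 12) | direct12;
}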
The range is a bit weird but that's a good compromise anyway. It can be symmetrised by increasing the MSB portion to 3 bits.
The other thing, which the comments brought up, is that this almost defeats the "position independence" of code. Well, the size of the direct field imposes a granularity for relocation: at 12 bits, this makes a 4K-instruction "step". But PIC (position independent code) is not seen as a critical feature in F-CPU and Y32 because other features provide the equivalent functionality.
-
32 bits
05/08/2024 at 00:51 • 0 comments
Bye bye 32-bit FC1: the missing link between the 64-bit FC1 and the 16-32-bit YASEP is now a fully standalone project: #YGREC32!
It is mostly a playground for all the methods and structures developed for FC1, so it's not really a cleaned-up YASEP. The use of a dedicated control stack does not fit well in the YASEP, which will remain a "microcontroller". The YGREC32 is an application processor for multitasking environments, still suitable for real time but not for heavy lifting. YGREC32 binaries would be easily executed by FC1 with little extra circuitry, since it's mostly a subset.
-
SRB: Smooth Register Backup (it was nice)
04/05/2024 at 19:27 • 0 comments
FC0 had a great feature, one of its defining mechanisms, which was invented to make its 64 registers easier to manage during thread/context switching.
Each register has a hidden "dirty" flag. All the flags are set when the switch starts; then, each time a register is accessed by the new program, it gets swapped with the saved version (if its flag is set). This provides a progressive replacement which does not saturate the memory bus as much as a bulk save.
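A toy C model of that lazy swap, with illustrative names (the real mechanism is a hardware state machine, not software):

#include <stdint.h>
#include <stdbool.h>

typedef struct {
    uint64_t value[64];
    bool     dirty[64];
    uint64_t *out_ctx;  /* memory area receiving the old thread's values */
    uint64_t *in_ctx;   /* memory area providing the new thread's values */
} regfile_t;

void srb_switch(regfile_t *rf, uint64_t *out_ctx, uint64_t *in_ctx)
{
    for (int i = 0; i < 64; i++)      /* a single cycle in hardware */
        rf->dirty[i] = true;
    rf->out_ctx = out_ctx;
    rf->in_ctx  = in_ctx;
}

/* Called whenever the new program touches register r. */
void srb_touch(regfile_t *rf, int r)
{
    if (rf->dirty[r]) {                   /* first access: swap */
        rf->out_ctx[r] = rf->value[r];    /* save old value     */
        rf->value[r]   = rf->in_ctx[r];   /* load new value     */
        rf->dirty[r]   = false;
    }
}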
FC1 has 64 registers too, but with 4 D-cache blocks of 2 ports each, a broadcast instruction can send 8 registers to memory in one cycle. Saving the 56 scalar registers takes 7 instructions, 7 cycles. Massive parallelism makes SRB and its state machine moot, at least for the full-featured, full-width version.
The goal is to keep the frequency high, hence the register set must be kept small and simple. So far it's 16 registers with 3R1W ports, replicated 4 times. Using a dual-banked register set would double the size and reduce the clock frequency. This would be interesting for a multithreaded version, with 2 or 4 simultaneous threads flooding the EUs with useful operations to perform: the lower clock frequency would be compensated by better efficiency, more reuse of the units and lower latency. But an idle register set is out of the question: use the caches.
And the save/restore mechanism could freeze a number of cache lines on each D-cache block for a faster sequence.
-
Register spill
04/05/2024 at 14:18 • 0 comments
In earlier logs, I was confident and convinced that the control stack should not be used to store general registers, for many reasons, such as:
- The control stack has 2 fields and only one would be used, leading to waste,
- Only one access (read/write, push/pop) would be possible per cycle (at first),
- Scheduling the write would be complicated, since several cycles might be required to read the register, thus delaying the effective operation...
But then I looked deeper at the IPC/IPE/IPR instructions and I'm reconsidering and adapting my previous position: yes there is a good case to save/restore a limited amount of registers on the control stack, and the scheduling issue is not an actual problem in this case.
The key is that the register spill on the control stack is only permitted for limited types of operations. The user code must still provide its own register storage space to save/restore the registers, which would be faster anyway (since FC1 has 8 read/write ports, it would take 6 cycles).
IPC has a few convoluted things to do, though it can be reduced to a jump where the 16 LSB come from the instruction's immediate field and the other bits (the new MID, the target module) come from a register (<<16). The MSB are set such that the new address points to the "trampoline" address space. IPC writes a pair of values on the control stack: the frame pointer and the address of the IPC instruction itself. Then the processor goes into the "IPC_call" state, loads the new cache line and must check that the target instruction is IPE.
IPE is where more magic happens. If the target is not the expected opcode, it traps (nice, some registers are already saved), otherwise IPE is executed and pushes a new pair of values on the control stack: the Module ID and some metadata (such as general capabilities).
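The control-stack traffic of a successful call can be sketched like this; the pair layout and helper names are illustrative (and a note at the end of this log revises which instruction pushes what):

#include <stdint.h>

typedef struct { uint32_t a, b; } cs_pair_t;
extern void cs_push(cs_pair_t p);     /* hardware control-stack push */

/* Caller side: IPC saves the frame pointer and its own address. */
void ipc_side(uint32_t frame_ptr, uint32_t ipc_addr)
{
    cs_push((cs_pair_t){ .a = frame_ptr, .b = ipc_addr });
}

/* Callee side: the verified IPE entry point pushes the module ID
   and some metadata such as general capabilities. */
void ipe_side(uint32_t module_id, uint32_t capabilities)
{
    cs_push((cs_pair_t){ .a = module_id, .b = capabilities });
}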
Then the target code must be able to check metadata from the caller: somehow the processor must write data to a given register, which needs more scratch space to set up a new environment. This could easily be solved by an ABI where the target registers are known to be overwritable, but the control stack is progressively used for more and more functions (exception handling, traps and more): the ABI cannot always be enforced and the called code should not overwrite the state of the caller (or exceptions might fail to recover).
A scratch register area (like Itanium's) is possible but moves more complexity to the SW and opens the door to race conditions. So the stack itself is the best place, as it provides flexible, local, on-demand space. Traps can be nested without microcode or compromises.
So here comes a new class of instructions: SPILL. It writes a register to the stack (along with its number) and optionally overwrites it with metadata from the caller's state (such as TID, MID or other data; this might evolve). So we get the SPILL imm, reg instruction that pushes the register on the stack while also overwriting it with the requested data. SPILL 0, R1 will save R1 and overwrite it with 0, while other numbers will write a specific/useful/relevant value (probably up to 255 possible items).
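A sketch of SPILL's semantics; only the 0 selector is defined above, so the other selector values and all names are placeholders:

#include <stdint.h>

enum { SPILL_ZERO = 0, SPILL_TID = 1, SPILL_MID = 2 /* ... up to 255 */ };

typedef struct { uint32_t a, b; } cs_pair_t;
extern void cs_push(cs_pair_t p);     /* hardware control-stack push */
extern uint32_t regs[16];
extern uint32_t caller_tid, caller_mid;

void spill(uint8_t imm, unsigned r)
{
    cs_push((cs_pair_t){ .a = r, .b = regs[r] }); /* number + value */
    switch (imm) {                  /* overwrite with caller metadata */
    case SPILL_ZERO: regs[r] = 0;          break;
    case SPILL_TID:  regs[r] = caller_tid; break;
    case SPILL_MID:  regs[r] = caller_mid; break;
    default:         regs[r] = 0;          break; /* reserved selectors */
    }
}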
Why not combine this with the SR read instruction? Well, the access rights, timing and datapath might be different. SPILL will focus only on a few context-specific pieces of information (a subset of the SRs that will not trap) so a trampoline can filter requests hassle-free, almost immediately.
Once all the metadata are checked, the code can jump out of the trampoline area and into the module.
Then, before IPR, a corresponding number of UNSPILL instructions must pop the registers and restore them from the stack. If IPR is executed first, the spilled values are skipped from the stack and lost.
IPR will restore the module ID and capabilities, but another cycle is required to restore the instruction pointer and the frame pointer. And it gets a bit complex and quirky because several cycles are needed. The first step is to enable/select the proper addressing space, while the second step loads the caller's original IPC instruction. This could take "a while" because a page miss could occur... Then when the instruction is loaded, the processor knows which register was used as a frame pointer, which can finally be restored.
Important note: the IPC instruction is executed twice!
- First during call: push a pair of data and get the target address,
- Last at the end of return: the IPC opcode is re-checked, and the frame pointer is restored by reading the instruction again.
So depending on the state of execution, call or return, the instruction behaves differently. This also double-checks that the return has not been tampered with.
Furthermore, execution of SPILL and UNSPILL can be limited to the trampoline area.
...
And now that I think of it, the stack layout and the instructions could be modified such that the return is faster and more convenient.
- IPC handles the frame pointer and the capabilities
- IPE/IPR handles the Module ID and the instruction pointer
This way, the IP is available at the same time as the MID for a faster return, in case the target is still in the instruction cache.
-
Read-as-Zero
04/02/2024 at 05:00 • 0 comments
There are two cases where it is beneficial to flush or clear data without overhead: registers and data memory.
Register set protection
Registers would need to be flushed when calling an unsafe/unvetted module. For this case, there is a per-register attribute that can be set, for each globule, to mark the 4×16 registers as "trap on read", preventing leaks without writing each register individually. The mark is removed by writing a new value to the register. This is also a nice safety feature during debugging: the trap can be hooked to help trace a bug, for example.
Memory initialisation
A whole cache line (256 bits?) and even a whole page could be marked as "RAZ" to speed up initialisation. All the bytes get marked as "dirty" and "cleared", and the whole line gets written back to main memory during a flush/replacement. It is particularly handy for initialising a stack frame during a function call (or flushing it upon return).
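A toy model of such a line, with an illustrative layout; marking is O(1) and the zeroes only materialise on access or write-back:

#include <stdint.h>
#include <stdbool.h>
#include <string.h>

typedef struct {
    uint8_t data[32];        /* 256-bit line */
    bool    dirty, raz;
} dline_t;

void mark_raz(dline_t *l)    /* e.g. a fresh stack frame: no memory read */
{
    l->raz   = true;
    l->dirty = true;         /* will be written back as zeroes */
}

uint8_t read_byte(dline_t *l, unsigned i)
{
    if (l->raz) {            /* materialise the cleared bytes lazily */
        memset(l->data, 0, sizeof l->data);
        l->raz = false;
    }
    return l->data[i];
}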
-
Exceptions
04/02/2024 at 03:48 • 0 comments
The "everything is a module" mantra makes many things easier.
One example is exceptions: when a handler is enabled and configured, almost any exception can be transformed into an IPC opcode to a given module (could be #1) with the entry point hardwired as the exception number << 16. That's 2048 entries with fast execution. Since IPC already saves some essential info on the control stack and the trampoline area acts as a table, there is very little extra HW to add.
Furthermore, since the trampoline zone is configured per thread, the exception handler can be swapped for a given thread, during debugging for example. Or a custom page table miss handler can be provided. This adds even more flexibility and allows novel extensions.
-
Access rights and capabilities
04/02/2024 at 02:43 • 0 comments
As mentioned before, FC1 is built around a microkernel-like/actor organisation where there is no kernel, driver or user program: only modules.
So how does a module do its job? Easy answer: its code must have certain rights.
And should the rights be tied to a module or a thread ? Both, in a way.
A "naked" thread (for example your basic "hello world" program) can still perform many functions while having no rights of its own: it calls modules that are endowed with specific rights and vetted by the OS. In the example, the naked thread calls a print function located in a different module, which can perform the input/output operation itself, without leaking the "right" back to the caller. It's a sort of delegation.
A debugging program needs extra access rights, which should be preserved through calls to different modules.
But then the processor needs a way to vet each operation, either a specific opcode or access to a specific configuration register.
The processor knows the TID and MID (Thread IDentification number and Module IDentification number), which provide an index into an access rights table that is read again after each thread swap and/or IPC/IPR instruction. This is quite heavy and best done in software, for obvious speed and scalability reasons.
More pragmatically there are two simpler methods:
- The thread has a hidden "capabilities" word with a limited number of "rights" that are set by the OS during initialisation and ORed with the capabilities word of the module being executed. This limit in size permits only general/generic rights to be set, such as access to the stack, access to the code space...
- More specific rights are handled at the Special Register level and tied to either a given module or thread: a single bit (the MSB) selects whether the field is a TID or a MID. This is preferred for scalability (there can be any number of these access rights registers) and it tightens the security model, where usually only one thread or module has access to a given resource, limiting the chances of race conditions and abuse.
One exception : Thread #0 has all the rights since it initialises the system and manages all the other modules. It can then choose to endow a given thread with the required rights after some software filtering.
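A sketch of this dual check in C; the constants, field widths and names are illustrative, not the actual FC1 encoding:

#include <stdint.h>
#include <stdbool.h>

typedef struct { uint32_t tid, mid, caps; } context_t;

/* Generic rights: the thread's capabilities word ORed with the
   capabilities of the module being executed. */
bool has_generic_right(const context_t *c, uint32_t module_caps,
                       uint32_t right)
{
    if (c->tid == 0)                 /* Thread #0 has all the rights */
        return true;
    return ((c->caps | module_caps) & right) != 0;
}

/* Specific rights: one Special Register per resource; its MSB selects
   whether the stored ID is matched against the TID or the MID. */
bool has_specific_right(const context_t *c, uint32_t access_sr)
{
    uint32_t id = (access_sr >> 31) ? c->tid : c->mid;
    return (access_sr & 0x7FFFFFFFu) == id;
}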
This dual system is flexible, scalable, and granularity can be adjusted.
- Usually a resource (for example the exception table or the page tables) is created at the HW level and one module (a page mapper?) is assigned to manage it. Every module must be reentrant and enforce serialisation (through semaphores), so a "driver" is one module that safeguards one (or more) resource.
- If the resource is accessed by more than one thread at a time (a debugger?) then the associated capabilities (access to stack and/or code?) are moved to the per-thread attributes. But this is less related to low-level HW access, which is guarded by Thread #0.
Both of these methods are easily implemented in HW without microcode or sophisticated circuits. In other words it's "RISC-friendly".
-
Pointers and conditions
04/02/2024 at 01:41 • 0 comments
The project is about more than developing a processor: it reconsiders the whole programming model.
This had already started with the YASEP, following all the concerns we had found with FC0. FC1 is the testbed for the POSEVEN programming model and I try to anticipate the requirements and consequences.
POSEVEN follows several of the #PDP - Processor Design Principles, in particular the principle that every resource of a given thread should be accessed by a single pointer, but it does not follow the "flat model" that Linux or Windows use. Historically, it could look similar to some old IBM mainframes.
The reason is to avoid "segments" at all costs, since they are a false good idea that makes life miserable in the end, because it's not RISC at all. But "canonical RISC" itself has its downsides. I'm exploring this with the definition of POSEVEN and I have come up with several guiding principles.
The pointer's MSB is the private (1) / public (0) flag.
Threads can send pointers to other threads or blocks of code, but there is a need to protect "private data", just like in classic object oriented programming. The public addressing space is shared and accessed by every thread as a common pool, following rules enforced by paged memory protection. See later. The private space can only be accessed by the thread and contains all the necessary states.
The code and data spaces are strictly separated: this is a Harvard architecture.
Yet there must be a single pointer format to access all of it, so the 2nd MSB means data (0) or code (1). No "normal" thread can access the code section as data, to prevent self-modifying code, exploration or alteration. This prevents introspection along with gadget-oriented exploits. Only a certain capability/right allows writing or reading the code section during setup, to protect the integrity of the system, and this is not performed through typical data accesses, to prevent hardware race conditions.
...
Stop for a moment. There are 4 combinations now:
- Public data: a pool of space where threads can store/send/receive blocks of data that don't fit in the registers.
- Private data: the internal state of the thread.
- Public code: wait, what?
- Private code: what the thread executes.
The list is not yet finished but it's already a bit weird. What is "public code"? It's a space where the processor maps trampolines/entry points to other software modules. Oh, I should have introduced that earlier.
A thread executes code provided and vetted by the operating system. There is no notion of program, driver or kernel: only modules. All code is provided by a module and each module is mapped to a code space. Invocation of a module requires loading the module, mapping it to the appropriate space, fixing addresses etc., then the thread jumps (with a specific instruction: IPC) to specific addresses in the trampoline area, where the called module "filters" the request. This trampoline area is shared and accessible through the public space, though page mapping (specific to each thread) can make each module visible or invisible, or even redirect one module's calls to another module.
The IPC opcode provides a direct 16-bit offset into the trampoline to avoid indirect accesses and reduce the hardware complexity of invoking external code. This requires support from the OS but the implementation is fast and simple. This provides 64Ki instructions to jump to, and the jump must land on an IPE instruction, which provides extra information from the caller to the callee, such as the thread ID or eventual capabilities. To return from the callee, the code uses the IPR instruction.
As a consequence, a code page has a granularity of 64Ki × 4 = 262144 bytes.
There must also be a space that contains all the constants, in a read-only space that is shared by all the threads that execute the given module. This is not a candidate for the "public code" space because
- The code space is addressed with instruction granularity
- The data space is addressed with byte granularity.
Module constants may be mapped to the shared/public space or a s
The control stack is a third separate addressing space.
Yet another space is required for the control stack: it is neither code nor data because the granularity is 2 words (either 2×32 bits or 2×64 bits).
This space can not be directly accessed : the control stack is a separate hardware/system and special instructions are required to read or write to it, if permitted by the thread's capabilities.
The thread can create its own software data stack to provide space for local variables : this is normally located in the private data space.
Normally, the control stack pointer is not accessible by the thread, so there is never a dereference of a stack pointer. Thus there is no need to allocate an addressing space visible to the thread; however, it must be mapped by the paging system.
_________
To support these features and others, the CPU must provide means to test a pointer. A certain class of condition codes is introduced to test the 4 MSB and 4 LSB of a register, because it can contain a pointer with metadata. The following conditions
- LSB0 (odd/even address, 16-bit aligned)
- LSB1 (32-bit aligned)
- LSB2 (64-bit aligned)
- LSB3 (128-bit aligned)
- MSB0 (private/public)
- MSB1 (code/data)
- MSB2
- MSB3
are provided in the instruction set; they are easily implemented with a MUX and/or the "shadow" bits of the register set. A thread can easily verify that a pointer sent by another thread is valid and extract its properties (like the type of Aligned Strings).
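In C form, these tests boil down to single-bit probes; the bit assignments for MSB0/MSB1 follow the flags defined earlier in this log, the rest is illustrative:

#include <stdint.h>
#include <stdbool.h>

static inline bool lsb0(uint32_t p) { return (p >> 0) & 1; } /* 16-bit align  */
static inline bool lsb1(uint32_t p) { return (p >> 1) & 1; } /* 32-bit align  */
static inline bool lsb2(uint32_t p) { return (p >> 2) & 1; } /* 64-bit align  */
static inline bool lsb3(uint32_t p) { return (p >> 3) & 1; } /* 128-bit align */
static inline bool is_private(uint32_t p) { return (p >> 31) & 1; } /* MSB0 */
static inline bool is_code(uint32_t p)    { return (p >> 30) & 1; } /* MSB1 */

/* Example: accept only a public, 64-bit-aligned data pointer. */
bool valid_arg(uint32_t p)
{
    return !is_private(p) && !is_code(p) && (p & 7u) == 0;
}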
-
Tagged Control Stack
08/08/2023 at 03:25 • 0 comments
Tagged registers (see 14. Tagged registers) might not be the best idea ever. OK. But my latest musings about the stacks (#A stack.) and the support from various languages made me realise that a different view is necessary.
The usual C stack is a merged Data + Return + frame stack with many problems. FORTH (among a few others) uses a split system with Data separated from Return. And some languages implement their own data stacks for large variables that must be accessed by the caller.
There are also other things to consider: scopes.
Scopes can be a (set of local) variable name(s) or an error handler (try/catch): they have their place on the stack because they need to be rewound in reverse order. I doubt it's a good idea to create a separate stack for error handlers; they belong to the "return stack" IMHO, otherwise how do you know where to reset the stack pointer?
So I have just had this really weird idea : tag items on the return stack with a type.
For example we would have the usual call/return type for function calls, as well as the handler type for nested functions. A "canary" type would also help detect an error, for example. But imagine that the MSBs of the pointer pushed on the stack (it could also be the LSBs, so a normal return goes to aligned words...) flag an error handler.
- When a "normal" return instruction is decoded, the CPU keeps popping the return stack until a normal return pointer is found.
- When an "error" instruction is decoded, the CPU keeps popping the return stack until a handler pointer is found, thus skipping the normal returns and short-circuiting normal progress.
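A sketch of this tag-directed unwinding, with a toy two-tag stack; cs_pop() and the type names are illustrative:

#include <stdint.h>

typedef enum { TAG_RETURN, TAG_HANDLER } tag_t;
typedef struct { tag_t tag; uint32_t addr; } cs_entry_t;

extern cs_entry_t cs_pop(void);   /* hardware control-stack pop */

uint32_t do_return(void)          /* "return": skip any handlers */
{
    cs_entry_t e;
    do { e = cs_pop(); } while (e.tag != TAG_RETURN);
    return e.addr;                /* next PC */
}

uint32_t do_error(void)           /* "error": skip normal returns */
{
    cs_entry_t e;
    do { e = cs_pop(); } while (e.tag != TAG_HANDLER);
    return e.addr;                /* the handler's PC */
}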
There are languages that would be happy with such HW-based support, and I have no clue how to hack C to perform this trick. But this is a reasonable, useful, safe set of features that can be implemented in hardware, even at the risk of the return instruction potentially taking more than one cycle to complete.
...
So the CPU must support two sets of instructions to share the return stack (which is separate and HW-handled):
- The call/return pair, for usual function calls
- The handler/error pair for error handling.
- For/Loop could also be handled this way! (In fact, look at the FORTH idioms that use the return stack for more inspiration.)
"error" acts like a "return" as seen above, but the "handler" instruction is not a call: instead, it gives an address to be pushed on the return stack, so the instruction format remains the same and it does not affect the pipeline's flow.
Of course the compiler must also ensure that the context for the error can be reached; the data stack is not affected by these instructions, so extra care is required for the frame. But that's not something that hardware must try to optimise.
What do you think?
Another use of the scope is to automatically free resources that have been allocated. Ideally room is allocated on the stack, but open handles and other complex objects must also be popped off somehow. So maybe a copy of the data stack pointer could also be stored, just like the "push bp" idiom in x86 function prologues, but automatic here.
........................
A "canary" type might not be necessary if there is a stack limit register.
However another required type is the IPC: Inter Process Call, which must remember the caller's address, the caller's thread ID and potentially some other metadata/masks/flags...
........................
20230908: I'm trying to take a census of the required codes. 3 bits should be OK.
000 is invalid/reserved.
001 is for catch/error handling. It is missing a "frame pointer" to recover local data though.
010 is for normal call/return
011 is used by for/loop.
100, 101 and 110 are for IPC/IPR (110 is the return address, as saved by CALL, 101 would be the ID of the calling thread and 100 could be a bitmask of the "masked" registers to prevent data leaks, or could be credentials...)
111 could be used for "chaining" when/if blocks are moved from/to main memory: some DMA would be required to maintain a linked list or remember the position, with some granularity. That would be critical to speed up swapping a context during a switch.
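The same census as an enum, with illustrative names:

typedef enum {
    TAG_INVALID  = 0,  /* 000: invalid/reserved             */
    TAG_HANDLER  = 1,  /* 001: catch/error handling         */
    TAG_RETURN   = 2,  /* 010: normal call/return           */
    TAG_LOOP     = 3,  /* 011: for/loop                     */
    TAG_IPC_MASK = 4,  /* 100: masked registers/credentials */
    TAG_IPC_TID  = 5,  /* 101: calling thread's ID          */
    TAG_IPC_RET  = 6,  /* 110: return address               */
    TAG_CHAIN    = 7   /* 111: chaining for context swaps   */
} cs_tag_t;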
Let's see if this works...
........................
Thank you, FORTHers! https://forth-standard.org/standard/exception