-
An annual update
12/30/2019 at 01:38 • 2 commentsI know my updates on the project have been slow, but I really am working on it. It's been a bit of a challenge given a bunch of headwinds, but I'm excited to report that I've made some significant progress after a lot of refactoring work. I have unified the code for a bunch of the different dev boards I've been using, and so now in theory the same code will run on at least the DE10-standard and DE2i-150. The MAX10-lite hasn't been tested yet, but will probably work as well, even with the limited onboard memory resources.
In addition to the SoC code itself, I also spent a lot of time bringing the toolchain up to date. So now I have branches off of the master branch of gcc, binutils, and newlib which are current as of a week or so ago and appear to generate proper code.
There's also a lot of work that's been done with Verilog and the unit tests for both the "microcoded" and the pipelined version of the Bexkat CPU. They both run the same tests, and get the same results - with one exception... exceptions. :-) The issue is that my original ISA pushed both the CCR and the PC onto the stack before jumping to the ISR, and for a pipeline model that's not ideal. I'm thinking about a redesign that will require the ISR to push and pop the CCR itself, but I haven't implemented that yet. Until then, technically the microcoded CPU is the one that works correctly, since the pipelined one just ignores the CCR.
I'll be doing another push of the code to the public github repos in the next week or so, which should give a picture of what's been done.
-
More Pipelining
04/13/2018 at 14:13 • 2 comments
It's been a while since I made an update, but I am making progress in fits and starts. I ran into some roadblocks with the pipelining when introducing exceptions and some of the other vagaries of a real design, and so went back and thought through some of my assumptions. It turns out that I had a major error in how I understood the Wishbone bus specification.
In short, I had struggled with how to deal with latency on pipelined operations when coupled with multiple masters and arbitration. If you allow bus preemption, it seems like you can lose data or have to replay requests, which doesn't make sense.
I ended up changing the design to eliminate preemption in the bus arbiters, and adjusting the logic to make sure that even the instruction fetch operation releases the bus every few cycles. This has simplified the bus flow a great deal. So I may still have Wishbone wrong, but it works for me.
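To give a sense of what I mean, here's a minimal sketch of a non-preemptive two-master arbiter (module and signal names are illustrative, not the actual fpgalib code). The grant is only re-evaluated once the current owner drops its cyc line, so a transaction in flight can never be preempted; the fixed priority is harmless because the fetch side gives the bus up every few cycles anyway.

```verilog
// Two-master, non-preemptive arbiter sketch.  The grant only changes when
// the currently granted master has dropped its cyc line, so a transaction
// in flight can never be preempted.
module arb2
  (input        clk_i,
   input        rst_i,
   input  [1:0] cyc_i,    // bus request (cyc) from each master
   output [1:0] gnt_o);   // one-hot grant back to the masters

  reg [1:0] grant;
  assign gnt_o = grant;

  always @(posedge clk_i or posedge rst_i)
    if (rst_i)
      grant <= 2'b00;
    else if (grant == 2'b00 || (grant & cyc_i) == 2'b00)
      begin
        // Bus is idle (or the owner just released it): pick a new owner,
        // giving master 0 priority over master 1.
        if (cyc_i[0])
          grant <= 2'b01;
        else if (cyc_i[1])
          grant <= 2'b10;
        else
          grant <= 2'b00;
      end
    // otherwise: hold the grant until the owner drops cyc

endmodule
```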
I've also moved to a Harvard architecture, which means that I can avoid (explicit) arbitration by using dual port memory. Once I have the kinks worked out and I feel that all of my memory access tests are behaving as expected, I can add arbitration for things like external memory and run regressions to validate.
-
Pipelined
11/27/2017 at 04:40 • 1 comment
During the holiday break, I was able to make a significant amount of progress on the pipeline logic. At this point, I have everything working with the exception of subroutines and... exceptions. Subroutines (push old PC to memory stack, update the PC) shouldn't cause too much trouble, and while I'm not sure if there are going to be surprises in the exception handling, I'm expecting it will be similar to the existing branch code.
Before I can really implement this in the FPGA (and then test things like DOOM!), I'll also need to deal with a couple of other pesky issues. The first is to resolve the bus interactions. Right now I have a dual read memory and separate bus interfaces for instructions and data. I may adapt my memory cache interface to SDRAM to act as an L1 cache within the CPU. The other challenge is multi-cycle memory access. Right now I assume I can access memory (instruction and data) in a single cycle, and obviously that's not always going to be true when pulling from SDRAM. I have some ideas on how I can address this with the stall logic I've built, so we'll see.
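The core of the stall idea is simple: a pipeline register only advances once any outstanding bus cycle has been acked. As a rough sketch (widths and names are placeholders, not the real pipeline signals), one stage register gated that way looks like this:

```verilog
// A sketch of the stall idea, not the real pipeline: a stage register
// upstream of the memory access that simply holds its contents until the
// outstanding bus cycle has been acked.  (Downstream of the stalled stage
// you'd clear the valid bit instead, to insert a bubble.)
module stage_reg
  (input             clk_i,
   input             rst_i,
   input             mem_cyc,      // a load/store is in flight
   input             mem_ack,      // the slave has acknowledged it
   input      [63:0] stage_next,   // signal bundle from the stage above
   input             valid_next,
   output reg [63:0] stage_r,      // what the stage below consumes
   output reg        valid_r);

  wire stall = mem_cyc & ~mem_ack; // memory hasn't answered yet

  always @(posedge clk_i or posedge rst_i)
    if (rst_i)
      begin
        stage_r <= 64'h0;
        valid_r <= 1'b0;
      end
    else if (!stall)
      begin
        stage_r <= stage_next;     // advance normally
        valid_r <= valid_next;
      end
    // else: hold until the ack arrives
endmodule
```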
The code is all available on GitHub as mattstock/fpgalib.
-
Pipelining and Simulation
11/15/2017 at 18:05 • 0 comments
I'm returning to this project and have made a few interesting improvements recently. The first is that I cleaned up the Verilog for the CPU core so that it could be built in Verilator, a pretty slick tool that compiles a Verilog module into a C++ class. You can then attach it to a test harness of your choosing to validate your work, check for regressions, etc. Until now, I've been relying on tests on the FPGA systems themselves, leaning heavily on the logic analyzer functions that Quartus provides to debug. That works and is very powerful, but it's also quite slow and has limited flexibility. This change, coupled with a new initialized RAM module, allows me to compile and run arbitrary code pretty easily.
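The initialized RAM module is nothing fancy; it's just a synchronous RAM that preloads a hex image of the program at time zero using the standard $readmemh task, so the simulated CPU has something to execute. A minimal sketch (the width, depth, and file name are placeholders rather than the actual fpgalib parameters):

```verilog
// Initialized RAM sketch: a synchronous RAM preloaded from a hex file so
// the simulated CPU has a program to run.
module ram_init
  #(parameter AWIDTH   = 14,
    parameter INITFILE = "program.hex")
  (input                 clk_i,
   input                 we_i,
   input  [AWIDTH-1:0]   addr_i,
   input       [31:0]    dat_i,
   output reg  [31:0]    dat_o);

  reg [31:0] mem [0:(1<<AWIDTH)-1];

  initial
    $readmemh(INITFILE, mem);   // load the compiled program at time zero

  always @(posedge clk_i)
    begin
      if (we_i)
        mem[addr_i] <= dat_i;
      dat_o <= mem[addr_i];     // registered read, one cycle of latency
    end
endmodule
```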
The main reason I went down this road is because I was planning to do a redesign of the CPU to support pipelining. I've made some progress here as well, building a 5 stage pipeline that at least seems to move the proper data and signals around.
My challenge with pipelining in general is that most of the textbooks I've seen handwave over one of the most fundamental structural hazards - what to do when the instruction and data memory are on a common bus. I decided to "solve" this problem by building the CPU core with two logical busses (data and instruction), and to marry them to a dual port RAM module. Since the instruction bus will never do a write, this works well and will be sufficient to test out the pipelining.
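That dual-port arrangement is about as simple as it sounds: one read-only port for instruction fetch and one read/write port for loads and stores, both hitting the same array. Something like this (illustrative only, and I'm ignoring read-during-write corner cases here):

```verilog
// Dual-port RAM sketch: port A is read-only for instruction fetch, port B
// is read/write for data.  Since the instruction port never writes, there
// is no write conflict to arbitrate.
module dp_ram
  #(parameter AWIDTH = 14)
  (input               clk_i,
   // port A: instruction fetch (read only)
   input  [AWIDTH-1:0] i_addr,
   output reg   [31:0] i_data,
   // port B: data load/store
   input  [AWIDTH-1:0] d_addr,
   input        [31:0] d_wdata,
   input               d_we,
   output reg   [31:0] d_rdata);

  reg [31:0] mem [0:(1<<AWIDTH)-1];

  always @(posedge clk_i)
    begin
      i_data <= mem[i_addr];        // fetch port
      if (d_we)
        mem[d_addr] <= d_wdata;     // store
      d_rdata <= mem[d_addr];       // load
    end
endmodule
```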
I don't know how other designers solve this problem in the real world, but my plan is to link the CPU to an L1 cache, and have the cache layer deal with the vagaries of the "outside" bus. This should also reduce the number of clock cycles required in each pipeline phase. Right now my bus access logic requires two clock cycles minimum, but I think I could reduce this to one without too much effort. I'm kind of working on the pipeline stuff one issue at a time, since I don't really have a good reference to crib from. If anyone has any suggestions on something that's not crazy complex and would help give me some direction, leave them in the comments.
-
CPU Architecture Video
02/13/2017 at 02:20 • 0 comments
Here's the next video, which goes into more detail about the CPU design as well as walking through the state transitions for a simple add operation:
-
System Overview
02/13/2017 at 00:36 • 0 comments
I'm trying to get more documentation in place, in the form of some YouTube videos. This one will give you a sense of the overall system architecture, and how the CPU interacts with other devices. Let me know if you have any questions or comments.
-
Supervisor mode
01/02/2017 at 00:09 • 0 comments
I've been working on fleshing out a supervisor mode, with a goal of being able to do multiprocessing in the unix way. The basic work is complete (protected opcodes, hardware and software interrupts that execute in supervisor mode, etc.), but I'm working on the nuance now. In particular, I'm testing different ways to pass information from user space into kernel space. Since my current method of parameter passing is solely via the stack, and the stack pointer swaps out as part of the move to supervisor mode (to a separate supervisor stack pointer), this is mostly an exercise in C semantics now. The exception entry pushes the original user stack pointer onto the supervisor stack before jumping to the exception handler, so now I'm just working through the sanest way to reference that element (which isn't an argument to the interrupt handler!) and then use it as an index to pull out the other info on the user stack I care about.
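For the curious, the stack pointer swap itself is just a pair of banked registers selected by the mode bit. A hypothetical sketch of the idea (not the actual Bexkat register file):

```verilog
// Banked stack pointer sketch: a user SP and a supervisor SP, with the
// current mode bit selecting which one the core sees and which one a
// stack update writes.  Names and widths are hypothetical.
module sp_bank
  (input         clk_i,
   input         super_mode,    // 1 = supervisor mode
   input         sp_we,         // stack pointer update strobe
   input  [31:0] sp_next,
   output [31:0] sp);

  reg [31:0] usp, ssp;

  assign sp = super_mode ? ssp : usp;   // the SP the core currently uses

  always @(posedge clk_i)
    if (sp_we)
      begin
        if (super_mode)
          ssp <= sp_next;
        else
          usp <= sp_next;
      end
endmodule
```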
-
ISA Rework
01/03/2016 at 20:36 • 1 comment
My first cut at an ISA was focused on getting the functions right, and leaving room to add more options later. Now that I've got most of the functionality I want, I can go back and look at ways to reduce the complexity, with a goal of improving performance.
I made some fairly large changes to the ISA, which I've documented on the second page of the ISA worksheet. The idea was to reduce the number of data paths in the CPU, embed some elements (e.g., the FPU operations) into the opcodes instead of the microcode, and to add additional states in the control state machine to allow as much reuse as possible.
The steps that I've taken so far seem to have worked. The FPGA compiler is indicating a new max core speed of about 75MHz, when before it was closer to 50MHz. I'm also using about 5k fewer LEs in the FPGA for the same work. Since I've added clock cycles in some of the opcode paths and haven't actually raised the clock the core runs at yet, I don't know what the true speed improvement is, if any. But the design is a lot easier to understand (even for me), so I'll count it as a win even if it turns out to be a wash on the performance front.
Once I get the cache controller and some regression testing done, I'm going to look at pipelining and upping the clock speed, at least for the processor core.
-
Testing Part 2
01/03/2016 at 20:25 • 0 comments
As I mentioned earlier, I've been looking at pushing on to the next round of project improvements, and that meant a better testing process. I tried using a "control" CPU, which would be compared to the output of the CPU under test; however, that assumes that the number of clock cycles required for each operation wouldn't change. While useful in a few cases, a lot of the changes I'm interested in involve timing, so that approach wouldn't work.
I decided instead to make two ROM modules. The default one runs the monitor code, which allows for basic memory interaction as well as parsing of ELF binaries on the microSD card to bootstrap other programs. The new ROM module is a set of POST routines written to progressively test the CPU as well as the IO functions, to check for functional regressions. This method has already paid for itself, since I found a small bug in a couple of the floating point opcodes.
The method of testing is fairly simple. I have to assume that a few basic operations work (immediate load of a register, immediate add, integer compare, and branch if not equal), otherwise the POST won't even run. The first tests evaluate register operations, the ALU, and the FPU. Then we test stack operations, branches, and all of the load and store operations. For the math and branch operations, we can compute the expected results ahead of time, store them in the code, and generate an error when a result isn't as expected.
In addition to the basic CPU tests, I'm also implementing a set of memory tests. This will allow me to better test the cache module, which I'll describe in the Doom project update.
-
Regression Testing
12/12/2015 at 15:01 • 1 comment
So far in these projects, I've been able to build iteratively and not run into too many nasty bugs. There are many layers of abstraction though (libraries, compiler, assembler, machine, CPU), and so when a bug does crop up, it can be really challenging to find.
Most recently, I found that I had misunderstood some subtleties of transferring data between registers. The fix was simple: an opcode that zero-fills the upper bits when you copy an object smaller than the register size. But the way this manifested itself was that printf() sometimes printed the wrong character when printing a number. Eventually, I was able to isolate this to 33 % 10 returning 9 (not 3), which meant I didn't have to debug libc. After narrowing the issue down further to a very small test case, I was able to see why the CPU was generating the incorrect value. That probably took me four days to debug.
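The fix itself is a one-liner in the datapath: the new opcode just zero-fills the upper bits instead of carrying along whatever was there before. In Verilog terms it's basically this (illustrative only, assuming 32-bit registers):

```verilog
// Illustrative only: zero-extension of a sub-word register copy, assuming
// 32-bit registers.  The original behavior left the upper bits unchanged
// when copying a smaller object, which is what eventually broke 33 % 10.
module copy_zx
  (input  [31:0] src,       // source register
   output [31:0] byte_zx,   // byte copy, zero-extended (the new opcode)
   output [31:0] half_zx,   // halfword copy, zero-extended
   output [31:0] byte_sx);  // sign-extended version, for contrast

  assign byte_zx = {24'h0, src[7:0]};
  assign half_zx = {16'h0, src[15:0]};
  assign byte_sx = {{24{src[7]}}, src[7:0]};
endmodule
```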
As I plan on making some radical changes that could break things, I need to consider how best to avoid introducing more of these kinds of issues, and, if it happens anyway, how to quickly pin down the cause.
The best idea I've got right now is to leverage the space I have within the FPGA and build more stuff. Since I plan to start trying to reduce the length of the combinatorial paths within the CPU, which could affect timing, I'll create a second CPU. The new CPU will be the one I modify, and the first one will be my canary. I can feed them both the same data in parallel. The output from the canary will not be connected to the rest of the system, but will instead feed into a testing module. That module will also get taps from the second processor, and if the outputs diverge it can throw a signal that I can catch with the debug tools.
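The testing module itself should be trivial: watch the signals I care about from both CPUs and latch an error the first cycle they disagree, which the logic analyzer can then trigger on. A rough sketch with hypothetical signal names:

```verilog
// Lock-step comparator sketch: the canary CPU's outputs go nowhere except
// here, and the first cycle they diverge from the CPU under test we latch
// a sticky error flag.  Signal names are hypothetical.
module lockstep_check
  (input         clk_i,
   input         rst_i,
   input  [31:0] dut_adr, canary_adr,   // bus address from each CPU
   input  [31:0] dut_dat, canary_dat,   // write data from each CPU
   input         dut_cyc, canary_cyc,   // bus cycle strobes
   output reg    mismatch);             // sticky divergence flag

  always @(posedge clk_i or posedge rst_i)
    if (rst_i)
      mismatch <= 1'b0;
    else if (dut_cyc != canary_cyc ||
             (dut_cyc && (dut_adr != canary_adr || dut_dat != canary_dat)))
      mismatch <= 1'b1;   // latch the first divergence and hold it

endmodule
```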
The nice thing about this is that it's fairly lightweight, and it will allow me to immediately see if the timing has changed. It doesn't rely on any other device in the system, so I don't need to worry about special test programs or anything like that; that said, a program that ran some additional self-tests would still be beneficial.
I'm curious whether anyone has other ideas on how to build the equivalent of unit tests for systems of this complexity. I never got into the simulation aspects of Verilog - is that something that's worth the time to retrofit, or is the benefit of simulation mostly pre-synthesis?