Forget about microcode.
Microcode is CISC.
Microcode is like a computer inside a computer: it increases the complexity of the whole system, slows everything down, makes testing miserable, and commits many other "sins" that RISC addresses.
A direct mapping of the instruction word to the datapath is the best way to have a simple and efficient ISA.
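To make that concrete, here is a minimal sketch (in Python, with a 16-bit instruction format invented for illustration) of what "direct mapping" means: the fields of the instruction word are just wires into the datapath, so decode is a slice, not a program.

```python
# Invented 16-bit format: [15:12]=ALU op, [11:8]=dst, [7:4]=srcA, [3:0]=srcB.
# Every field drives the datapath directly -- no sequencer in between.
def decode(insn: int) -> dict:
    return {
        "alu_op": (insn >> 12) & 0xF,  # straight to the ALU function select
        "dst":    (insn >> 8)  & 0xF,  # register-file write address
        "src_a":  (insn >> 4)  & 0xF,  # register-file read port A
        "src_b":  insn         & 0xF,  # register-file read port B
    }

# One instruction word, one decode, one datapath cycle -- no sequencer state.
print(decode(0x2341))  # {'alu_op': 2, 'dst': 3, 'src_a': 4, 'src_b': 1}
```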
Discussions
Keep in mind that microcode can be used for any type of controller, not just CPUs. I wrote Hex2Mem (converts an Intel Hex file stream to RAM memory write cycles) and Mem2Hex (the opposite) using microcode in an FPGA, plus a simple TTY-to-VGA (like a poor man's VT52). The other choices for these would be an extremely complex FSM, or an embedded core in the FPGA with much overhead. Also, I disagree about debugging: trace microinstructions can be embedded in the microcode, making it very easy to observe the execution paths. https://hackaday.io/project/182959-custom-circuit-testing-using-intel-hex-files
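To sketch that last idea (a toy model, not my actual implementation): reserve a spare "trace tag" field in each microword, and the execution path becomes directly observable.

```python
# Toy model: each microword carries a trace tag alongside its next-address
# and control fields, so every microinstruction identifies itself as it runs.
MICROCODE = [
    # (next_addr, control_bits, trace_tag)
    (1, 0b0001, "FETCH"),
    (2, 0b0100, "DECODE"),
    (0, 0b1010, "WRITEBACK"),
]

upc = 0  # micro program counter
for _ in range(6):  # run six microcycles
    nxt, ctrl, tag = MICROCODE[upc]
    print(f"uPC={upc} ctrl={ctrl:04b} trace={tag}")  # visible execution path
    upc = nxt
```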
Hi @zpekic,
you use "microcode" as a means to actually program, and IMHO there is little wrong with that.
Also remember that my "principles" in this project exist (as stated on the main page) to explain how I design my own CPU architectures, and I'm definitely in the RISC camp for the reasons I elaborate here.
The issue with microcode as a CPU's cornerstone is that once you have the circuit to execute the microcode, you can execute the user code directly as well. RISC instructions are microinstructions fed directly to the core. This greatly helps pipelining (efficiency) and keeps things simple and orthogonal, easy to program and compile for, yada yada, I won't repeat Patterson & Hennessy's books :-)
There /was/ a time when microcode was necessary, for example in the 60s and 70s when RAM was not yet fast (or large) enough (ah, ferrite...) and microcode could provide a speed boost by keeping a smaller & faster program that "interpreted" complex operations. But this grew out of hand, and the big RISC wave, silicon DRAM, and large-enough caches tilted the balance back to direct code execution.
Seymour Cray's designs are considered faithful precursors to RISC, and even though the CRAY-1 has several FSMs and other state logic, the instructions are directly executed: speed comes from not having to interpret, the bits of the instruction word go directly to the relevant unit and you're done.
There was still a time when microcode was "necessary": when the first microprocessors appeared, the small silicon area and limited number of pins required each instruction to be as compact as possible. But even then, microcoding was *not* the only way: the 6502 was not "pure" microcoding (OK, there's a PLA, and the design is totally inspired by the Motorola 68xx series), and I doubt that the CDP1802 was microcoded; instead it devoted a large area to internal registers and used a simple FSM...
The only remaining microcoded microarchitecture is x86/x86-64, due to market pressure. Intel *did* make RISC processors: the i860 and i960, which led to the Pentium microarchitecture (let's forget the Itanium and the iAPX 432). Today microcode is used only to perform "complex" instructions, which most other architectures have made useless anyway...
Absolutely agreed that fast and efficient CPUs can (and should!) be made without microcode. For example, pipelining can be thought of as "unrolling" a sequence of microinstructions in hardware and putting them in temporal and physical order. However, microcode is still a powerful tool in the hobbyist CPU/controller designer's toolbox. For example, it is possible to have hybrid architectures where "native" code is executed directly in hardware (FSMs / pipelines) and "emulated" code goes through microcode. Or two or more similar processors could be implemented just by varying the microcode (e.g. 8080, 8085, Z80, 64180), or even a combined 6502/Z80 able to execute both instruction sets (the ultimate sacrilege to both Spectrum and Commodore fans LOL). I used this to flip between Sinclair Scientific and Texas Instruments calculator modes in the TMS0800 project; a toy sketch of the idea follows.
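Here the micro-operations (the hardware) are shared, and only the microcode ROM that maps opcodes onto them differs per mode. All opcodes and micro-ops are invented for illustration, nothing like the real 8080/6502 encodings:

```python
# Two "CPUs" on one core: swap the microcode ROM, keep the micro-operations.
MICRO_OPS = {
    "LOAD_A":  lambda st: st.update(A=st["mem"][st["addr"]]),
    "STORE_A": lambda st: st["mem"].__setitem__(st["addr"], st["A"]),
}

MICROCODE = {  # per-mode microcode ROM: opcode -> micro-op sequence
    "cpu_one": {0x10: ["LOAD_A"], 0x11: ["STORE_A"]},
    "cpu_two": {0x20: ["LOAD_A"], 0x21: ["STORE_A"]},
}

def step(mode, opcode, state):
    for uop in MICROCODE[mode][opcode]:
        MICRO_OPS[uop](state)

state = {"A": 0, "addr": 0, "mem": [42]}
step("cpu_one", 0x10, state)  # cpu_one-flavoured load...
step("cpu_two", 0x21, state)  # ...cpu_two-flavoured store, same micro-ops
print(state["A"])             # 42
```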
I have a few project ideas in mind for how to twist and play somewhere in between. For example: what if the 9-bit Am2901 microinstructions were actually part of 16-bit assembly instructions? What would the instruction set of such a processor look like, and its assembler? Sadly, there are always more project ideas than time :-(
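Something like this, just to visualize the bit layout (the top 7 bits are a made-up field; the low 9 bits follow the Am2901's I8..I0 grouping of destination / function / source, 3 bits each):

```python
# Thought experiment: a 16-bit instruction whose low 9 bits pass straight
# through as the Am2901 microinstruction word.
def split(insn: int) -> dict:
    return {
        "upper":       (insn >> 9) & 0x7F,  # hypothetical non-ALU field
        "am2901_dest": (insn >> 6) & 0x7,   # I8..I6: destination control
        "am2901_func": (insn >> 3) & 0x7,   # I5..I3: ALU function
        "am2901_src":  insn        & 0x7,   # I2..I0: operand source select
    }

print(split(0b0000001_010_011_100))
# {'upper': 1, 'am2901_dest': 2, 'am2901_func': 3, 'am2901_src': 4}
```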
@zpekic I agree that your case is quite specific and outside the scope of the principles of this project :-)
Then I need to learn more about microcode. Any tips on where to start?
https://en.wikipedia.org/wiki/Microcode is a good start, though Patterson & Hennessy's books go into great detail to justify the RISC approach. Any good CISC vs RISC thread/debate will cover the shortcomings of microcode.
It's a convoluted and sophisticated subject that keeps fascinating people :-)
Microcode is just a sequence of control signal states to perform an opcode; any µC or CPU does things this way. Calling it microcode just points to the next level down.
I tend to disagree on this.
Microcode is way more than "just a sequence of control signal states to perform an opcode".
Because it is a sequence, you need a sequencer. So you have a state within a state. RISC might use "microcode" under your wide definition, but it is a flat, decoded/expanded instruction word with almost no state to care about. This saves a lot of complexity!
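To illustrate the contrast (all encodings invented): the microcoded path needs an inner micro-PC that advances within a single opcode, while the flat decode is a stateless slicing of the instruction word.

```python
# Microcoded control: one opcode expands to a sequence, so an inner
# micro-PC must be kept and sequenced (state within a state).
MICROCODE_ROM = {0x01: ["drive_addr", "read_mem", "write_reg"]}

def run_microcoded(opcode):
    for upc, ustep in enumerate(MICROCODE_ROM[opcode]):  # inner state: upc
        print(f"  uPC={upc}: {ustep}")

# Flat RISC-style decode: the instruction word *is* the control word;
# decoding is stateless, one instruction per datapath cycle.
def run_flat(insn):
    controls = {"alu_op": (insn >> 12) & 0xF, "dst": (insn >> 8) & 0xF}
    print(f"  controls={controls}")

run_microcoded(0x01)
run_flat(0x1200)
```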
But there is a sequencer in all of them: when sharing a bus, when waiting for a result, etc. Changing the signals has to be sequenced.
Maybe you have a reference for this?
@barcellos.alvaro sequencing can be done in several ways... but an FSM is not microcode and vice versa :-)
Unless you program the CPU in microcode ;-)
so essentially Intel is bad and should feel bad lol
Note that my "commandments" apply to modern developments, and my own set of heuristics to design new cores. If you want to play with CISC, go ahead and have fun :-P
Anyway, CISC was a natural choice in the 70s, and Intel just stuck to its successful lines. In the 80s/90s, Intel tried lean and clean architectures; you may "remember" the i860 and i960.
Don't forget the EPIC/Itanium, as well as their collaboration with Analog Devices on the Blackfin.
Clearly, Intel is doing "business", not "evangelising" :-)