The Hammer18 circuit fits well inside the NRZI unit and instantly delivers fantastic results. Just as expected. That will be my Christmas then!
Here are the results after 10 million injected errors:
1 : 2241925 - ****************************************************
2 : 5543183 - ********************************************************************************************************************************
3 : 1691752 - ****************************************
4 : 369784 - *********
5 : 112181 - ***
6 : 32917 - *
7 : 6360 - *
8 : 1401 - *
9 : 377 - *
10 : 84 - *
11 : 21 - *
12 : 12 - *
13 : 2 - *
14 : 0 -
15 : 0 -
16 : 0 -
17 : 1 - *
The little 1 at the end is an initialisation bug in the program.
Otherwise, the 4× slope is very apparent: the system has achieved true 2-bits-per-word performance!
There is a little "bump" at the start: 1/4 of the errors are caught immediately, but the next cycle catches 1/2. After that, every count is divided by 4, as expected.
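The divide-by-four claim can be sanity-checked against the 10M histogram with a quick sketch (counts copied from the run above; the geometric mean of the successive step ratios should hover around 4, once the initial "bump" in bins 1 and 2 is skipped):

```python
# Per-word miss counts from the 10M-error histogram (bins 1..13).
counts = [2241925, 5543183, 1691752, 369784, 112181,
          32917, 6360, 1401, 377, 84, 21, 12, 2]

# The geometric decay only starts after the bump, so measure the
# average ratio from bin 3 onwards, where the statistics are still solid.
tail = counts[2:11]  # bins 3..11
ratio = (tail[0] / tail[-1]) ** (1 / (len(tail) - 1))
print(f"average step ratio: {ratio:.2f}")  # close to the expected 4x
```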
- CD0:115 : 115 errors were not caught and passed as the first 0-filled word of a control sequence.
- CD1:6443188 : 2/3 of the detected errors triggered the C/D bit while the rest of the word was not 0. That's 56027× the number of errors that passed as a 0-word.
- Err:3556696 : the remaining 1/3 were caught as number errors: either the number was out of range, or the MSB was 1.
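The three tallies can be cross-checked in a few lines (numbers copied from the run above; the lone init-bug hit at bin 17 accounts for the off-by-one):

```python
CD0, CD1, Err = 115, 6443188, 3556696  # tallies from the 10M-error run
init_bug = 1                           # the stray "17 : 1" at the end

# Every injected error must land in exactly one bucket:
assert CD0 + CD1 + Err + init_bug == 10_000_000

print(CD1 // CD0)               # ~56027 C/D-bit detections per 0-word escape
print(CD1 / (CD0 + CD1 + Err))  # ~0.644, i.e. roughly 2/3 of detections
```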
I still can't explain why the C/D bit catches 2× more errors than the other method, though I'm not sure it matters. In any case, we now have a way to extrapolate the error-handling capability.
10 million errors (almost 24 bits' worth) give 2 misses at 13 words; 3 more words (4^3 = 2^6 = 64) should give about one miss in 640 million (close to 1 billion).
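The extrapolation itself is just a power of 4 (a hypothetical helper, plugging in the numbers quoted above):

```python
def extrapolated_span(injected, extra_words):
    """Each extra word divides the miss rate by 4, so the number of injected
    errors needed to reach the same tail grows by 4**extra_words."""
    return injected * 4 ** extra_words

# 10 million errors reached 13 words; 3 more words need ~64x more errors:
print(extrapolated_span(10_000_000, 3))  # 640000000, i.e. ~one miss in 640M
```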
Notes:
- The error model tested here is a single flipped bit. Results will vary somewhat with the error model: flipping more bits, at different positions, will affect the curve a little, but not radically.
- Adding another 0-word during C/D transitions should get us into the 5-billion ballpark for rejection. This is actually a requirement, since the gPEAC has a one-word latency (hence the bump at the 2nd word): an error could hit the last data word and go unnoticed, so a second 0-word acts as a checksum check.
- Since the NRZI+Hamming circuit already does a LOT of crazy avalanche, now comes the time to check whether a more basic binary 18-bit PEAC could work too. I'm looking back at old logs for some already-calculated data, and here is what I found:
- 19. Even more orbits ! : primary orbit of 18 : 172.662.654 (instead of 34.359.738.368 to pass, or 0.5%)
- 44. Test jobs : 18: Total of all reachable arcs: 68719736689
- 90. Post-processing : Width=18 Total= 34359868344 vs 34359869438 (missing 1094)
In fact I now realise that I have very little clue about the topology of w18. I'm taking care of this in 181. PEAC w18.
And I still need to fix that tiny little bug in the program that leaves one uncaught error. I hadn't noticed it before because I always got many leftovers, but the bug still appears even with no NRZI or Hamming avalanche, even after thousands of cycles: my test code must have a problem somewhere.
....
And it's a weird issue: something does not clear a register somewhere. It is worked around by double-resetting the circuit (2 clock cycles seem to solve it), but what, and where...?
But at least I can get clean outputs:
100 errors:
1 : 23 - ***********************
2 : 61 - *************************************************************
3 : 13 - *************
4 : 0 -
5 : 3 - ***
1000 errors:
1 : 229 - ********************************
2 : 582 - ********************************************************************************
3 : 154 - **********************
4 : 23 - ****
5 : 8 - **
6 : 3 - *
7 : 1 - *
10K errors:
1 : 2236 - ********************************
2 : 5625 - ********************************************************************************
3 : 1635 - ************************
4 : 351 - *****
5 : 108 - **
6 : 31 - *
7 : 11 - *
8 : 3 - *
100K errors:
1 : 22343 - *********************************
2 : 55555 - ********************************************************************************
3 : 16765 - *************************
4 : 3771 - ******
5 : 1121 - **
6 : 348 - *
7 : 79 - *
8 : 14 - *
9 : 2 - *
10 : 1 - *
11 : 1 - *
1M errors:
1 : 223252 - *********************************
2 : 554817 - ********************************************************************************
3 : 169397 - *************************
4 : 37354 - ******
5 : 11003 - **
6 : 3274 - *
7 : 701 - *
8 : 151 - *
9 : 40 - *
10 : 8 - *
11 : 3 - *
10 million:
1 : 2231957 - *********************************
2 : 5546325 - ********************************************************************************
3 : 1695563 - *************************
4 : 371915 - ******
5 : 112766 - **
6 : 33113 - *
7 : 6437 - *
8 : 1426 - *
9 : 369 - *
10 : 98 - *
11 : 24 - *
12 : 3 - *
13 : 2 - *
14 : 2 - *
len:1 CD0:138 CD1:6449726 Err:3550136 Missed:0 Ham:1 NoNRZI:0
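For reference, the star-bar plots in these logs can be reproduced with a tiny routine like this (a sketch, not the actual test program: it scales the tallest bin to 80 stars and always prints at least one star for a non-zero bin):

```python
def print_histogram(counts, width=80):
    """Render bins as 'index : count - ****' lines, tallest bin = width stars."""
    peak = max(counts)
    for i, c in enumerate(counts, start=1):
        # non-zero bins get at least one star, empty bins get none
        stars = max(1, round(c * width / peak)) if c else 0
        print(f"{i} : {c} - " + "*" * stars)

print_histogram([23, 61, 13, 0, 3])  # counts from the 100-error run above
```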
The progression of the deepest bins (5, 7, 8, 11, 11, 14) has some hiccups... Maybe the PRNG is not random enough?
Anyway, it is great to finally get rid of the "long tail"! Look at this amazingly compliant logplot!

The 10M slope converges to 15, so 16 words would be good for 100M. High-safety protocols would still work with 16-word buffers, but should keep the last word in quarantine too.
And here is another logplot that compares the slope versus the number of (consecutive) flipped bits.

Get it here: miniMAC_2026_20251230.tbz
Yann Guidon / YGDES