I'm really quite rusty on percentages, so this is /likely/ a math-error on my part...
But, here's the gist of it:
I'm measuring the number of events in a time-period... the error could be +-1 time-unit and/or +-1 event.
With my current numbers, this gives a worst-case error of +2.5% to -2.4%.
Now: if I do dimensional-analysis via integer-math to convert from events/its-time-units to events/human-readable units, I'm calculating worst-cases of +2.5% to -1.9% error.
The measurement-error /decreased/?!
Can that possibly be?
.....
Here's the setup:
49 clock pulses are measured in 263 samples. Say either or both could have a measurement error of +-1.
Samples occur once per 21 CPU cycles.
The clock pulses occur at 41666.66666 clocks/sec
Now I want to calculate the cpu frequency.
263 (samples) / 49 (clk) * 21 (CPUclk) / 1 (sample) * 41667 (clk) / 1 (sec) = 4.6965M (CPUclk/sec)
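(If anyone wants to poke at this on a PC, here's that same floating-point calculation as a quick C sketch... not the Z80 code, just a reference number to compare the integer version against. It should land right around 4.696 MHz.)

```c
#include <stdio.h>

int main(void) {
    /* Measured values (each could be off by +-1) */
    double samples = 263.0;     /* samples counted       */
    double clk     = 49.0;      /* clock pulses counted  */

    /* Known constants */
    double cycles_per_sample = 21.0;        /* CPU cycles per sample */
    double clk_hz = 41666.66666;            /* reference clock, Hz   */

    double cpu_hz = samples / clk * cycles_per_sample * clk_hz;
    printf("CPU clock ~= %.1f Hz (~%.4f MHz)\n", cpu_hz, cpu_hz / 1e6);
    return 0;
}
```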
....
Because I'm doing this in 8-bit Z80 assembly, I'm limiting this to 16-bit x 8-bit multiplication and division.
The tricks, there, involve recognizing some facts about the exact constants involved...
21*41666.66666... is exactly 875,000 (41666.666... is 125,000/3, and 21 is 3*7, so the thirds cancel: 7*125,000 = 875,000).
No measurement error, here... (Though, of course, it presumes the clock is exactly 41666.666666Hz, but, I guess we'll set /that/ as our anchor for these error calculations).
OK, I can deal with kHz for the CPU clk... so I can divide 875,000 down to 875.
But, 263*875 can't be done with my 16x8 multiplier (875 doesn't fit in 8 bits). So, note that 875 is exactly 175*5.
Then 263*175 fits in 16x8 mult...
Giving 46025
Now, that's too large for any additional multiplication, so let's do the division...
46025/49 = 939.29... but, of course, this is integer math, no rounding, giving 939.
Now, I'm a bit worried that .29 adds to the error, but we'll come back to that.
We still need to multiply by 5...
939*5 = 4,695 kHz
Earlier, in floating-point, I calculated 4,696.5 kHz.
Not bad for integer-math, no rounding... but was I just lucky with these numbers?
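Here's that whole integer chain as a quick C sketch, PC-side, mirroring the 16-bit/8-bit limits... not the actual Z80 routines, just the same arithmetic, so the steps can be sanity-checked:

```c
#include <stdint.h>
#include <stdio.h>

int main(void) {
    uint16_t samples = 263;  /* measured, could be +-1 */
    uint8_t  clk     = 49;   /* measured, could be +-1 */

    /* 21 * 41666.666... = 875,000; working in kHz that's 875, which is 175*5 */
    uint16_t step1 = samples * 175;   /* 16x8 multiply: 263*175 = 46025, still fits in 16 bits */
    uint16_t step2 = step1 / clk;     /* 16/8 divide, truncates: 46025/49 = 939 (not 939.29)   */
    uint16_t khz   = step2 * 5;       /* one last small multiply: 939*5 = 4695 kHz             */

    printf("%u * 175 = %u\n", (unsigned)samples, (unsigned)step1);
    printf("%u / %u = %u (truncated)\n", (unsigned)step1, (unsigned)clk, (unsigned)step2);
    printf("%u * 5 = %u kHz\n", (unsigned)step2, (unsigned)khz);
    return 0;
}
```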
....
So, obviously, the two numbers differ, and obviously /that/ error came (mostly?) from my integer-math. Of course dropping decimal places is going to introduce error. My concern is /how much/?
Let's start with the potential measurement-error.
263/49 = 5.367
Worst-cases occur when each measurement is off by 1 in opposite directions:
264/48=5.5, 102.47%, +2.5%
262/50=5.24, 97.63%, -2.4%
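(Those percentages as a quick C check, again PC-side, just to verify my paper numbers:)

```c
#include <stdio.h>

int main(void) {
    double nominal = 263.0 / 49.0;   /* 5.367... samples per clock pulse */

    /* worst cases: the two measurements off by 1 in opposite directions */
    double high = 264.0 / 48.0;      /* 5.5  */
    double low  = 262.0 / 50.0;      /* 5.24 */

    printf("nominal: %.4f\n", nominal);
    printf("high:    %.4f (%+.2f%%)\n", high, (high / nominal - 1.0) * 100.0);
    printf("low:     %.4f (%+.2f%%)\n", low,  (low  / nominal - 1.0) * 100.0);
    return 0;
}
```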
...
Now here's me going through it step-by-step... again, it's entirely likely I'm wrong...
263*175: whatever error was in 263 has now been multiplied by 175... 263 was +-1, so 263*175 is +-175. Sounds awful... but it's 175 in 46025, so less than 1%. And since 175 is exact, it should be exactly the same percentage-error as for 263.
Next step is the divide by 49: 46025/49. We've got +-175 error up top, +-1 error on the bottom... I dunno how to do percentages like that... instead we'll look again at worst-case: dividing by a smaller number gives a bigger number... So, 48... put that under the 175... the result of 46025/49=939.29 is off by 175/48, worst-case... in either direction. +-3.646. But, this is integer math, not rounded... 939. And chopping off the decimal place could add an error of up to -0.999999=-1. So our +-3.646 error becomes -4.646 to +3.646 in 939.29... looks tiny, but the math is not complete. 939 gets multiplied by 5, and so does that error...
.....
Fail.
Yeah, I wrote that up after calculating it on paper... and after going over the paper calcs several times...
This fail has been days in the making.
No... if it's 48, then it's not off by 175/48, but off by 46025/49 - (46025 + 175)/48, which is dang-near spot-on the original +2.5%, right?
Gah! Don't think that's right either...
Forget this... I dunno what I'm doing, here.
I am, however, pretty sure my *875 trickery shouldn't introduce nearly as much error as the measurements themselves, and should get the job done.
Tangent abandoned... if brain allows.
...
It doesn't.
Ok, here's the deal, yes: I think the error /range/ decreases, but the probability distribution (I made that up, yeah?) changes, too... it's more likely to be in more error than it was, but it can't be in as much error. Or something. And somehow that's caused by the lopping-off of the decimal... the greatest error comes when one measurement is off in one direction, and the other is off in the other direction... but, realistically, that can't happen (right?), because both measurements (time and sample) happen simultaneously, inherently.
So somewhere therein, the reality of the situation is that /both/ measurements /inherently/ are integers, and fractional differences between them /can't/ exist, so therefore, inherently, if done properly, the math /using/ integers is more accurate than that using floating point... OR SOMETHING. Heh.
And the /potential/ error-range has to take that into account, too...
It can't be off by 1, but it can be off by 0.9999... and that difference actually matters. If you're going to try to figure out this crazy endeavor.
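(If I ever want to actually settle this, the honest way is probably to just brute-force every +-1 combination through both versions and compare the ranges... something like this PC-side sketch; note it uses the unperturbed floating-point result as the "true" reference, which is itself a choice:)

```c
#include <stdint.h>
#include <stdio.h>

int main(void) {
    /* "no error" reference: the plain floating-point result, in kHz */
    const double true_khz = 263.0 / 49.0 * 875.0;

    double int_min = 1e9, int_max = -1e9;
    double flt_min = 1e9, flt_max = -1e9;

    /* every combination of +-1 on each measurement */
    for (int ds = -1; ds <= 1; ds++) {
        for (int dc = -1; dc <= 1; dc++) {
            uint16_t samples = (uint16_t)(263 + ds);
            uint8_t  clk     = (uint8_t)(49 + dc);

            /* integer pipeline: *175, truncating divide, *5 */
            uint32_t khz_int = ((uint32_t)samples * 175 / clk) * 5;
            double   err_int = ((double)khz_int / true_khz - 1.0) * 100.0;

            /* straight floating-point version of the same measurement */
            double khz_flt = (double)samples / clk * 875.0;
            double err_flt = (khz_flt / true_khz - 1.0) * 100.0;

            if (err_int < int_min) int_min = err_int;
            if (err_int > int_max) int_max = err_int;
            if (err_flt < flt_min) flt_min = err_flt;
            if (err_flt > flt_max) flt_max = err_flt;
        }
    }

    printf("float   error range: %+.2f%% .. %+.2f%%\n", flt_min, flt_max);
    printf("integer error range: %+.2f%% .. %+.2f%%\n", int_min, int_max);
    return 0;
}
```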
I'm lost. Somewhere in there it all makes sense.
I need to get this stupid thing functioning, for my sanity. Teeny tiny error be damned. Heh.