-
Battery Pack with a dead cell?
08/21/2024 at 03:01 • 0 comments
Get this... Say your cheap 18V cordless drill has one bad NiCd cell out of 15 in the pack...
Say, for whatever reason, you just can't revive it, but all the other cells are fine enough...
What say you just cut that cell out of the pack and tie the surrounding cells together, for 16.8V instead of 18? Now add two diodes in series at the charger's output: 0.6-0.7V each makes for a 1.2-1.4V drop, darn near one cell's worth. Perfect.
Now you shouldn't have to worry about that cheap charger pushing in too much voltage...
Big friggin' whoop, your 18V drill is running off 16.8V now. Should Be Fine. At Least It Works Again!
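If you want to sanity-check the arithmetic, here's a throwaway calc (assuming the usual 1.2V-nominal NiCd cell and a typical ~0.65V silicon-diode drop; your actual cells, diodes, and charger will differ a bit):

```c
#include <stdio.h>

int main(void)
{
    const double cell_v  = 1.2;   /* nominal NiCd cell voltage            */
    const double diode_v = 0.65;  /* typical silicon diode drop (0.6-0.7) */

    double pack_15      = 15 * cell_v;   /* original pack:     18.0V */
    double pack_14      = 14 * cell_v;   /* one cell removed:  16.8V */
    double charger_drop = 2 * diode_v;   /* two series diodes: ~1.3V */

    printf("15-cell pack: %.1fV, 14-cell pack: %.1fV\n", pack_15, pack_14);
    printf("Two diodes drop ~%.1fV -- roughly one cell's worth (%.1fV)\n",
           charger_drop, cell_v);
    return 0;
}
```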
-
The Cash involved in Cache
03/02/2023 at 08:12 • 1 comment
I just saw a few minutes of Adrian's vid wherein he dug out a 386 motherboard with CACHE!
This is one of those long-running unknowns for me... How does cache work? I was pretty certain I recalled that 486's had it, but 386's didn't... which means: for a 386 /to/ have it requires an additional chip... And /chips/ I can usually eventually understand.
So I did some searching, discovered Intel's 82385 cache controller, and took on some reading, excited to see that, really, there's no reason it couldn't be hacked into other systems I'm more familiar with, like, say, a Z80 system. Heck, with a couple/few 74LS574's, I think it could easily be put into service as a memory-paging system: up to 4GB of 8K pages!
But the more I read of the datasheet, the more I started realizing how, frankly, ridiculous this thing is, for its intended purpose.
I mean, sure, when the technology first came out, if it made a Cray supercomputer a tiny bit more super, or a CAD system more responsive to zooming/scrolling, I could see it. But... how it works in a home system seems, frankly, a bit crazy. A cash-grab from computer-holics, who /still/ post questions about adding cache to their 35y/o 386's for gaming. LOL.
Seriously.
Read the description of how it actually functions, and be amazed at how clever they were in implementing it in such a way as to essentially be completely transparent to the system except on the rare occasion it can actually be useful.
Then continue to read, from the datasheet itself, how carefully-worded it is: there's a more sophisticated mode that actually only provides a "slight" improvement over the simpler mode, which itself provides basically no improvement over no cache at all, unless the software is explicitly designed with cache in mind.
Which, frankly, it seems to me would have to be written in inefficient ways that would inherently be slower on cacheless systems, rendering benchmarking completely meaningless.
So, unless I'm mistaken about the progression of this trend--as I've extrapolated from the advancement of the "state of the art" from the era of computers without cache to the first home computers with cache--here's how it seems to me:
If programmers had kept the skills they learned from coding in the before-times, cache would've been nothing more than a very expensive addition (all those transistors! Static RAMs!) to squeeze a few percent more usable clock-cycles out of top-end systems, for folk using rarely-used software which could've been carefully programmed to squeeze even more computations out of a system with cache.
But, because programmers started adopting cache as de-facto, they forced the market who didn't need it /to/ need it. The cacheless systems would've run code not requiring cache roughly as efficiently as poorly-written cache-aware code runs on systems with it. Which is to say that already-existing systems lacking cache were hobbled not by what they were capable of, but by programmers' tactics to take advantage of tools that really weren't needed at the time, and, worse, often for little gain other than to use up the newly-available computing resources.
The 486, then, came with a cache-controller built-in, and basically every consumer-grade CPU has required it, since.
Now, I admit this is based purely on my own extrapolation based on what I read of the 82385's datasheet. Maybe things like burst-readahead and write-back, that didn't exist in the 82385, caused *huge* improvements. But, I'm not convinced, from what I've seen.
Think about it like this: In the time of the 386, 1 measly MB of DRAM was rather expensive... so how much, then, would 32KB of /SRAM/ have cost? The 82385 also had nearly 2KB of internal SRAM/registers just to keep track of what was and wasn't cached. And all this in addition to a huge number of gates for address-decoding, etc.
With all that taken into account, I'm guessing the 82385 and cache chips probably contained *far* more transistors than the 386 itself. Heck, the 82385 alone might've been roughly on-par.
All that, and, frankly, the 82385 really doesn't /do/ much, aside from intercepting and regurgitating reads of data that was already recently read. Folk talk about 20% improvements in benchmarks, but were those benchmarks written with cache in mind? If so, then it's not an improvement in computing-power, but an improvement in handling new and generally useless requirements. And, again, frankly, looking at the datasheet, my guess would've been something like a best-case 8% average improvement for code not written for cache.
[Though, again, I admit the cleverness in coming up with this peripheral cache controller in such a way that it's guaranteed not to *slow* the system under any circumstances, due to its transparent design!]
.
Frankly, it seems to me a bit like a 'hack' trying to squeeze a few more CPU cycles out of existing CPUs. Clever, but certainly not the sort of relic that should've been built upon. E.g. 64-bit 486's might've been a comparatively *tremendous* improvement, requiring maybe even fewer transistors. Dual-core, similar... But cache? And the friggin' slew of rabbit-hole processing/transistors/power required? Nevermind the mental hurdles of coding with it in mind?
For /that/ to have become so de-facto, so early, and yet at the same time so late, boggles my mind.
Again, think about it: What were folk running on 386s? Windows 3.1. Multitasking. BIOS-calls and TSRs via ISRs. Video games that pushed every limit. The idea *any* program would be under 32KB, nevermind *both* the program *and* its data, nevermind its being switched in and out of context... Absurd.
Pretty much the only place I imagine it being useful, without coding specifically for cache, is in e.g. small loops used for performing iterative calculations (like Pi, which is why I mentioned mainframes), and only then if there's no risk of the data sitting at the same offset as the loop itself, but in another page, where it would keep evicting the loop from the cache.
...
I'm sure I've lost my train of thought, but that brings us to another topic....
Where is data vs code located? A HUGE problem, for quite some time, has been buffer-overflows causing executable code to get overwritten. Why The Heck are we still using the same memory-space for code and data?
I propose that part of the reason may have to do with cache, as I understand it from the 82385 datasheet. Now, don't get me wrong, I'm all for the backwards-compatibility that the x86 architecture has provided. But it wouldn't have been at all difficult to separate code and data into separate memory-spaces in newer programs *if it weren't for cache*.
Why? Imagine your code is 32KB. And you've got 8KB of strings to manipulate in data. Now, your cache is 32KB... But if your sprintf function's character loop was located at byte 256 in the code-space, and the string you're sprintfing to started at byte 257 in your data-space, then the cache would "thrash" between the data-space and the code-space around bytes 257 and up. OTOH, if your sprintf were printing to a string defined in your code-space, just after the loop, there'd be no thrashing: the loop would be around byte 256 and the string would be around byte 576, in the same cache "page," so both would be cached together. [Not that their both being cached would be a huge benefit, since sprintf would be *writing*, which isn't at all sped up by the 82385, but at least the loop could be run from cache.]
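To make that thrashing picture concrete, here's a toy direct-mapped model. The line size and line count below are made-up illustration numbers (not the 82385's actual directory organization); the point is just that two addresses at nearly the same offset in different regions land on the same cache line and keep evicting each other:

```c
#include <stdio.h>
#include <stdint.h>

/* Toy direct-mapped cache: an address's line is just its line-number
 * modulo the number of lines in the cache.                            */
#define LINE_SIZE  16u     /* bytes per line (assumed, for illustration) */
#define NUM_LINES  2048u   /* 2048 * 16 = 32KB of cache                  */

static unsigned cache_line(uint32_t addr) { return (addr / LINE_SIZE) % NUM_LINES; }

int main(void)
{
    uint32_t loop_addr  = 0x00000100;          /* byte 256 of the "codespace"        */
    uint32_t data_addr  = 0x00008000 + 0x101;  /* byte 257 of the *next* 32KB region */
    uint32_t data_addr2 = 0x00000240;          /* byte 576 of the same region        */

    /* Same line: every access evicts the other -- the cache thrashes. */
    printf("loop -> line %u, conflicting data -> line %u\n",
           cache_line(loop_addr), cache_line(data_addr));

    /* Different lines: the loop and the string can both stay cached. */
    printf("loop -> line %u, nearby data      -> line %u\n",
           cache_line(loop_addr), cache_line(data_addr2));
    return 0;
}
```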
But, there's no reason it has to have remained that way (code and data being interwoven)... except, it seems to me, because by the time we discovered how risky buffer-overflows were, we'd already come to rely on our data being interwoven with our code in order to reduce cache-misses.
The friggin' Pentium could've, quite simply, added one additional output pin, a la an address bit (which could feed into a cache controller) alongside MEM/IO, to indicate Code/Data. For backwards-compatibility it wouldn't really even have been used, but for new programs it would've allowed for separate address-spaces, should the programmer decide to make use of the new feature (no different than MMX extensions). One additional "address bit" indicating code/data in the cache controller would prevent cache-misses, despite the fact that the address offsets of the sprintf loop and the string buffer might be near each other. And now executable overflows are a thing of the past, only a concern for old code. Programmers jumped on efficiently coding for cache; surely they'd've jumped on this!
.
But, back to cache...
I fought cache issues back in the ARM7 days, which, if I understand correctly, were *long* before ARMv7, and whatever passes for the lowest-end ARMs used in smartphones today.
I could be entirely mistaken, but I gather that on-chip cache controllers, and on-chip cache SRAM, are de-facto today... on basically every processor short of microcontrollers (and even some of them).
Is it possible these huge arrays of transistors de-facto-included in any "current" processor--expensive, space-consuming, heat-producing as they are--are actually *hindering* the abilities of our multicore multithreaded systems, today?
I'm beat. Who knows?
...
That said, I am a bit intrigued by the hackery possible with the 82385... Seriously, e.g. connecting something like it [preferably in a DIP] to a Z80 could be quite interesting. It seems to be, roughly, a slew of 74646(?) address decoders and 1024x(!) 74574 8-bit registers, which could be quite useful in completely unrelated tasks, as well.
-
3/4 Quadrature Decoding - NOPE
02/21/2023 at 11:19 • 0 comments
This is my own invention... Maybe someone else has done it, maybe it has someone's name attached to it. I dunno. (Maybe not, and I'll finally hear from someone who can get me a patent?)
The idea is simple enough... I'll go into detail after the first image loads:
A typical method for decoding a digital quadrature rotary encoder involves looking at each edge of each of the two signals. If One signal "leads" the other, then the shaft is spinning in one direction. If that same signal "lags" the other, then the shaft is spinning in the other direction.
This is very well-established, and really *should* be the way one decodes these signals (looking for, then processing, all the transitions).
This is important because of many factors which we'll see if I have the energy to go into.
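For reference, here's a minimal sketch of that watch-every-transition approach, using the classic 16-entry transition table. The two GPIO-read helpers are hypothetical placeholders for whatever your hardware provides, and the sign convention just depends on which channel leads:

```c
#include <stdint.h>

/* Hypothetical GPIO reads -- substitute your own pin access. */
extern uint8_t read_channel_A(void);   /* returns 0 or 1 */
extern uint8_t read_channel_B(void);   /* returns 0 or 1 */

/* Index = (old_state << 2) | new_state, where state = (A << 1) | B.
 * +1 / -1 = one valid tick in either direction; 0 = no change, or an
 * invalid both-lines-changed transition (noise, or a missed sample).  */
static const int8_t qdec_table[16] = {
     0, +1, -1,  0,
    -1,  0,  0, +1,
    +1,  0,  0, -1,
     0, -1, +1,  0
};

static uint8_t prev_state;
static volatile int32_t position;

void quadrature_poll(void)   /* call faster than ticks can possibly arrive */
{
    uint8_t state = (uint8_t)((read_channel_A() << 1) | read_channel_B());
    position += qdec_table[(prev_state << 2) | state];
    prev_state = state;
}
```

Note that noise sitting right on an edge can only make the count wiggle back and forth by one tick; it can never run away. That's exactly the property the shortcut described below throws out.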
...
An UNFORTUNATE "hack" that has entered the mainstream does it differently. It's much easier (but, not really).
The idea, there, is to look for the edge(s) of one of the signals, then when it arrives, look at the *value* of the other signal.
This is kinda like treating one signal as a "step" output, and the other as a "Direction" output. But That Is Not What Quadrature Is.
If the signal was *perfect*, this would work great.
But the real world is not perfect. Switch contacts bounce. An optical encoder disc may stop rotating right at the edge of a "slot," where a tiny bit of mechanical vibration or a tiny bit of 120Hz fluorescent-light flicker may leak into the sensor. And obviously electrically-noisy motors can induce spikes in the decoding circuits as well as the encoders'.
That "Step" pulse may oscillate quite a bit, despite no change in the "direction".
...
My friggin' car's stereo has this glitch. And now that the detents are a little worn, it's very easy to accidentally leave the volume knob atop a detent, where suddenly and without warning a bump in the road causes the volume to jump up, sometimes an alarming amount, even though [actually /because/] the knob itself still sits atop that detent.
This Would Not Happen if they decoded the quadrature knob correctly. That was the whole point of the design behind quadrature encoders, as far as I'm aware.
But then somehow it became commonplace to decode them *wrong*, and I'm willing to bet it's in so many different devices these days that at least a few folk have died as a result. (It's sure given me a few scares at 70MPH!)
"Hah! That's a one in a million chance!" Yeah? But if you plan to sell 10 million... Sounds like ten counts of first-degree manslaughter to me. What's that... Ten 20 year prison sentences? But, yahknow, since "Corporations are people" it means the end of said corporation, and prison sentences, not slaps on wrists, for those who knowingly aided and abetted.
.
.
.
So, let's get this straight:
Quadrature can be decoded correctly.
The "step/direction" method is NOT correct.
.
.
.
If the elite engineers who hold onto trade secrets like this technique are unwilling (or just too tired) to teach you, and there's even the tiniest chance a glitch could hurt someone, then please remember that the internet is full of seemingly-authoritative information from folk who really have no clue what they're talking about... and then look into products designed for the task, like HP/Agilent's HCTL-2000, or similar from U.S. Digital.
.
.
.
Now, As A /hacker/ or /maker/ or DIY sort of person, I often try to solve puzzles like the one I faced tonight: How do I decode my two quadrature signals with only one Interrupt pin and one GPIO pin and no additional circuitry?
So, ONLY BECAUSE it doesn't really matter in this project, I considered the "Step/Direction" decoding approach, which I'd dismissed so long ago that I'd forgotten exactly /why/. And I had to derive, again, the *why*, which I described earlier ("Noise" on "Step" would cause the counter to run away).
.
I spent a few hours trying to think of ways to resolve that, and ultimately went back to my original technique, just using two GPIOs, polling, and history to monitor both edges for the expected changes.
The problem is, it's a bit overboard:
This encoder outputs something like 8000 ticks per revolution(!?) and 100 would be plenty for my needs.
And, it really doesn't even matter if it misses some steps.
So, I can afford to sacrifice a whole slew of "ticks", and (need to) save a bunch of CPU power, too.
...
I put on my thinking-cap and realized I could reduce the quadrature-decoding processing time (and resolution) by 3/4ths!
*And* do-so by modifying the "Step/Direction" technique.
And do-so without introducing the step-runaway problem.
And do-so without introducing, I think, any additional chance for "missed steps" (except, of course, the 3 of 4 I'm intentionally skipping).
...
Here we have two quadrature signals, 90deg out of phase, on X and Y. Y' is when it spins in reverse.
On the left, with all the tightly-spaced arrows, we have the typical decoding-scheme, wherein it's important to register every state/transition (I call these 'ticks'). Thus, if this were 10,000 ticks/revolution and I spin it at most 1 rev/second, we need to sample those GPIOs at least 10,000 times/sec.
(Note that since every state must be traversed, in order, a noisy/edge-case input (as described earlier for "Step") results in a value that may oscillate, but will not, can not, increase repeatedly.)
.
On the right is where it gets interesting.
Say we power-up the device just left of the position marked A, where both X and Y are low. We configure the decoder to look for a rising-edge transition (e.g. interrupt) on X. Turn the encoder to the right, detect the rising-X ("[X] Step"), measure Y=H ("Right"), position=1.
Now reconfigure to look for rising-Y. Why?
If we'd've chosen Falling-X, we might get triggers due to the runaway-stepping problem: if it stopped near A, or if electrical noise was high, or if the output bounced.
If we'd've chosen Falling-Y, it would be the same as the typical decoder scheme; too high a resolution for my needs, too much processing time... Reducing the processing time this much, I can get away with much faster (3x!) knob-turning, without missed steps or actually using any interrupts. GPIO-polling should do fine.
So, if it went right, to B, X will be low, position++. Left, at B', X=H, position--.
Now, at B, we look for X-Falling, three steps away in either direction. Or, from B' we look for X-Rising.
And So On.
...
The key-factor involves looking for an edge on one input, then immediately after receiving it, switch to looking for a different edge on the other input. Why? Well, think of it like the ol' pushbutton-debouncing technique of using an SPDT button and a set-reset latch. When the first edge comes through, it flat-out stops paying attention to that signal (set is set already. Any number of bounces on the set input won't change that). That signal is now debounced. So, now wait for the "pushbutton's" "release" to reset the flip-flop. Again, if it bounces it doesn't change anything.
This is especially relevant in quadrature because when there's one edge, the other input should be steady (high or low) by design. So...
BWAHAHAHA!!!! NOPE!!!!
A couple days later I see the problem.
NOPE!
Because bounce on the next falling edge of the line we're watching would produce exactly the rising edge we're looking for... two ticks too early, at a point where the "direction" signal reads inverted.
NOPE.
.
So, still, we have to watch *every* quadrature edge.
-
Interrupts Are Stupid
02/16/2023 at 09:13 • 0 comments
Imagine you've designed a clock...
Each time the seconds go from 59 to 0, it should update the minutes.
Imagine it takes three seconds to calculate the next minute from the previous.
So, if you have an interrupt at 59 seconds to update the minute-hand, the seconds-hand will fall-behind 3 seconds, by the time the minute-hand is updated.
...
Now, if you knew it takes three seconds to update the minute-hand, you could set up an interrupt at 57 seconds, instead of 59.
Then the minute-hand would move one minute exactly after the end of the previous minute.
BUT the seconds-hand would stop at 57 seconds, because updating the minute uses the entirety of the CPU. If you're clever, the seconds-hand would jump to 0 along with the minute's increment.
....
Now... doesn't this seem crazy?
...
OK, let's say we know it takes 3 seconds of CPU-time to update the minute-hand. But, in the meantime we also want to keep updating the seconds-hand. So we start updating the minutes-hand *six* seconds early... at 54 seconds. That leaves six half-seconds for updating the seconds-hand, in realtime, once per second, and three split-up seconds for updating the minutes-hand once.
But, of course, there's overhead in switching tasks, so say this all starts at 50 seconds.
...
At what point do we say, "hey, interrupts are stupid" and instead ask "what if we divided-up the minutes-update task into 60 steps, each occurring alongside the seconds-update?"
What would be the overhead in doing-so?
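Here's roughly the shape I have in mind, as a sketch; all the hand/slice functions are hypothetical placeholders:

```c
/* The minute-hand's 3-second calculation, recast as 60 tiny slices.
 * One slice runs right after the (cheap) seconds-hand update each
 * second, so the seconds-hand never stalls and the minute result
 * lands exactly on the minute.                                      */
extern void update_seconds_hand(void);
extern void calc_minute_slice(int n);   /* does 1/60th of the minute math */
extern void move_minute_hand(void);

static int slice;

void once_per_second(void)
{
    update_seconds_hand();        /* realtime, never starved              */

    calc_minute_slice(slice);     /* a small chunk, not 3 seconds at once */
    if (++slice == 60) {
        slice = 0;
        move_minute_hand();       /* the answer is ready right on time    */
    }
}
```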
...
So, sure, it may be that doing it once per minute requires 3 seconds, but it may turn out that the interrupt-overhead is 0.5 times that, due to pushes and pops, and loading variables from SRAM into registers, etc.
And it may well turn out that dividing that 3 seconds across six will require twice as much processing time due to, essentially, loading a byte from SRAM into a register, doing some small calculation, then storing it back to SRAM, only to perform the same load-process-store procedure again a second later...
But, if divided up right, one can *both* update the seconds-hand and calculate/update the minutes-hand every second; no lag on the seconds-hand caused by the minutes' calculation.
No lag caused by a slew of push/pops.
...
And if done with just a tiny bit more foresight, no lag caused by the hours-hand, either.
...
Now, somewhere in here is the concept I've been dealing with off-n-on for roughly a decade. REMOVE the interrupts. Use SMALL-stepping State-Machines, with polling. Bump that main-loop up to as-fast-as-possible.
With an 8-bit AVR I was once able to sample audio at roughly its max of 10KS/s, store it to an SD card, sample a *bitbanged* 9600-baud keyboard, write to an SPI-attached LCD, write to EEPROM, and more... with a guaranteed 10,000 loops per second, averaging 14,000. ALL of those operations handled *without* interrupts.
Why? Again, because if, say, I'd used a UART-RX interrupt for the keyboard, it'd've taken far more than 1/10,000th of a second to process it, between all the necessary push/pops, and the processing routine itself (looking up scancodes, etc), which would've interfered with the ADC's 10KS/s sampling, which would've interfered with that sample's being written to the SDCard.
Instead, e.g., I knew the keyboard *couldn't* transmit more than 960 bytes/sec, so I could divide up its processing over 1/960th of a second. Similar with the ADC's 10KS/s, and similar with the SDCard, etc. And, again, in doing so I managed to divide it all up into small pieces that could be handled, altogether, in about 1/10,000th of a second. Even though, again, handling any one of those in its entirety in, say, an interrupt would've taken far more than 1/10,000th of a second, throwing everything off, just like the seconds-hand not updating between 57 and 0 seconds in the analogy.
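For what it's worth, the general shape of that main loop was something like the sketch below. The names are hypothetical stand-ins (the real hardware access obviously isn't shown); the point is that every task only ever does a tiny slice of its work per pass:

```c
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical stand-ins for the real hardware access, here only so the
 * sketch stands alone. Each *_step() does a tiny slice of work, then returns. */
static bool    adc_sample_ready(void) { return true; }  /* 10KS/s "sample ready" flag   */
static uint8_t adc_read(void)         { return 0; }
static void    sdcard_step(uint8_t s) { (void)s; }      /* advance the SD write a bit   */
static void    keyboard_step(void)    { }               /* sample one bit-banged RX bit */
static void    lcd_step(void)         { }               /* push a few SPI bytes         */

int main(void)
{
    for (;;)            /* as-fast-as-possible main loop, zero interrupts */
    {
        /* Tightest deadline first: never miss an ADC sample. */
        if (adc_sample_ready())
            sdcard_step(adc_read());

        /* Everything else is sliced small enough that a whole pass
         * always fits inside the ~1/10,000-second sample period.   */
        keyboard_step();
        lcd_step();
    }
}
```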
-
If you ever wondered about weather predictions
12/22/2022 at 07:24 • 0 comments
-
Hard Disk vs VHS reality-check
12/12/2022 at 06:05 • 0 comments
There used to be special ISA cards that allowed for connecting a VCR to a PC in order to use the VCR as a tape-backup.
I always thought that was "cool" from a technological standpoint, but a bit gimmicky. I mean, sure, you could do the same with a bunch of audio cassettes if you're patient.
...
But, actually, I've done some quick research/math and think it may not have been so ridiculous, after-all.
In fact, it was probably *much* faster than most tape-backup drives at the time, due to its helical heads. And certainly a single VHS tape could store far more data. (nevermind their being cheap, back then).
In fact, the numbers suggest videocassettes were pretty-much on-par with spinning platters from half a decade later in many ways.
https://en.m.wikipedia.org/wiki/VHS
https://en.m.wikipedia.org/wiki/ST506/ST412
I somehow was under the impression the head on a VHS scans one *line* of the picture each time it passes the tape. But, apparently, each pass actually scans an entire field (half an interlaced frame). At 60Hz!
In Spinning-platter-terms, it'd be the equivalent of a hard disk spinning at 3600RPM. Sure, not blisteringly fast, but not the order-of-magnitude difference I was expecting.
The video bandwidth is 3MHz, which may again seem slow, but again, consider that hard drives from half a decade later were limited to 5MHz, and probably didn't reach that before IDE replaced them...
"The limited bandwidth of the data cable was not an issue at the time and is not the factor that limited the performance of the system."
...
I'm losing steam.
...
But this came up as a result of thinking about how to archive old hard drives' data without having a functioning computer/OS/interface-card to do-so.
"The ST506 interface between the controller and drive was derived from the Shugart Associates SA1000 interface,[5] which was in turn based upon the floppy disk drive interface,[6] thereby making disk controller design relatively easy."
First off, it wouldn't be too difficult to nearly directly interface two drives of these sorts to each other, with a simple microcontroller (or, frankly, a handful of TTL logic) in between to watch index pulses and control stepping, head-selects, and write-gates. After that, the original drive's "read data" output could be wired directly to the destination's "write data" input. Thus copying from an older/smaller drive to a newer/larger one with no host in between.
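Something like the sketch below is what I have in mind. The gpio/wait helpers, timings, polarities, and cylinder/head counts are all hypothetical placeholders (the real numbers come from the drives' manuals), drive-selects are assumed permanently asserted, and the question of index/spindle alignment between the two drives is hand-waved:

```c
#include <stdint.h>

/* Hypothetical pin-access helpers -- not a real library. */
extern void gpio_write(int drive, int signal, int level);
extern void wait_index_pulse(int drive);        /* blocks until /INDEX pulses  */
extern void wait_seek_complete(int drive);      /* blocks until /SEEK COMPLETE */
extern void delay_us(uint32_t us);

enum { SRC, DST };
enum { SIG_STEP, SIG_DIR, SIG_HEAD_SEL, SIG_WRITE_GATE };

#define CYLINDERS 306    /* e.g. an ST-412-class source drive; adjust to suit */
#define HEADS     4

static void step_in_one(int drive)
{
    gpio_write(drive, SIG_DIR, 1);     /* direction = inward (polarity assumed) */
    gpio_write(drive, SIG_STEP, 0);    /* pulse /STEP */
    delay_us(10);
    gpio_write(drive, SIG_STEP, 1);
    wait_seek_complete(drive);
}

void copy_drive(void)
{
    /* READ DATA (source) is assumed hard-wired to WRITE DATA (destination);
     * the micro only herds the control lines.                               */
    for (int cyl = 0; cyl < CYLINDERS; cyl++) {
        for (int head = 0; head < HEADS; head++) {
            gpio_write(SRC, SIG_HEAD_SEL, head);   /* same head on both drives   */
            gpio_write(DST, SIG_HEAD_SEL, head);

            wait_index_pulse(SRC);                 /* start of the source track  */
            gpio_write(DST, SIG_WRITE_GATE, 0);    /* destination starts writing */
            wait_index_pulse(SRC);                 /* one full revolution later  */
            gpio_write(DST, SIG_WRITE_GATE, 1);    /* stop writing               */
        }
        step_in_one(SRC);                          /* next cylinder, both drives */
        step_in_one(DST);
    }
}
```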
It's got its caveats... E.G. The new drive would only be usable in place of the original, on the original's controller, *as though* it was the original drive.
The new drive has to have equal or more cylinders and heads. It has to spin at exactly the same rate, or slower. Its magnetic coating has to handle the recording density (especially, again, if it spins slower).
BUT: it could be done. Which might be good reason to keep a later-model drive like these around if you're into retro stuff. I'm betting some of the drives from that era can even adjust their spin-rate with a potentiometer.
...
Anyhow, again, I've a lot to cover, but I'm really losing steam.
...
Enter the thoughts on using VHS as an only slightly more difficult-to-interface direct-track-copying method... as long as the drive (maybe Shugart, or early MFM?) is within the VHS's abilities; it seems there may very well be such drives in retro-gurus' hands.
So, herein, say you've got a 5MB mini-fridge-sized drive with an AC spindle-motor (which just happens to spin at 3600RPM, hmm). Heh...
Anyhow, I guess it's ridiculous these days; each track could be recorded directly to a PC via a USB logic analyzer, right? (Certainly with #sdramThingZero - 133MS/s 32-bit Logic Analyzer.)
Or isn't there something like that already for floppies ("something-flux")?
Anyhow, I guess the main point is that storing/transferring the flux-transitions in "analog" [wherein I mean *temporally*] has the benefit of not needing to know details like the exact bit/sample-rate, or the encoding format (RLL/MFM/FM), nor anything about higher-level details like the OS's choice of sector-size or hex values for padding bytes.
I guess this has all been done.
I just thought it interesting that VCRs are in many ways on-par with hard disks from the era. I knew the helical-scan thing was a clever solution, but had no idea it was *that* clever. Heck, cassettes aren't even built in cleanrooms.
...
I guess maybe part of the idea was something along the lines of backing up the original minifridge drive onto tape, then dumping that back to a 5.25in drive of higher capacity (once a suitable one is acquired); their interfaces are basically compatible, and unlike IDE/SCSI, no knowledge of the actual *data*/format would be necessary in the transfer, nor would the new drive's actual parameters need to be identical to the old one's. Yet the old controller card would think it the same. I think that's intriguing. (oh, bad sectors in different locations might prevent this idea, heh).
It's not dissimilar to my finding out that throwing a 1.44M 3.5in floppy drive in place of a 400K 5.25in drive was perfectly doable in #OMNI 4 - a Kaypro 2x Logic Analyzer .
Not unlike using a duplicate of your favorite video-game floppy-disk to play from, rather than wearing-out the original... Except, instead of disks, we're talking drives, heh!
...
12/16/22:
The HaD blog just had a relevant article...
https://hackaday.com/2022/12/13/vhs-decode-project-could-help-archival-efforts/#more-567080
In the comments was this, which I'll peruse later: https://www.spiraltechinc.com/otis/IRIG_Files/IRIG_Chap6F.htm
-
"The Next Bill Gates"
12/06/2022 at 20:44 • 0 comments
When I was a kid, I heard this often...
Lacking mutually-understood history and details of technicalities, it took me years to try to explain "Well, more like Steve Wozniak." And literally decades to realize how *that* was probably misinterpreted.
So, bear with me as I try to re-explain to several different audiences simultaneously.
First, OK, everyone knows Billy-G. I shouldn't have to explain that one, but I will.
As I Understand (I'm no history-buff):
Billy-G didn't design computers nor electronic circuits; he did software *for* computers. And his real claim to fame was actually software he *bought* (not wrote) from someone else.
Most folk who made the statement about me being "The Next" weren't aware of Stevie-W...
Stevie-W, unlike Billy-G, did electronics design, the actual computers themselves. That's a fundamental difference I was trying to get across, but couldn't convey in terms that really struck a chord.
In their minds, I gathered over many following years, the two were basically one-and-the-same, just from different companies. And the latter, then, was the "runner-up" that few outside the nerddom even know by name.
Not quite.
Billy-G: Software
Stevie-W: Mostly Hardware
Fundamentally different sorts of people. Fundamentally different skills. Fundamentally different aspects of computing. Maybe like comparing a finish-carpenter to a brick-layer.
Both, mind you, can be quite skilled, and the good ones highly revered. But therein lies the next problem in trying to explain to folk not already in-the-know: It seems many, again, associate a statement like that the wrong-way compared to my intent; thinking something like "oh, finish carpenters are concerned with minute *details*, whereas brick-layers are concerned with 'getting er done'" ish... I dunno what-all other people think, but I know I was yet-again misunderstood when I made such comparisons, so let me try to re-explain:
Well, no. My point wasn't some judgement of the skill-level or quality of craftsmanship, or even about the utilitarian importance/necessity of what they do. My point was that both trades are related to construction, and that we generally hire both when we build a house, because one is good at one thing and the other is good at the other.
Maybe I should've chosen electricians and plumbers as the example, instead. But I'll never finish this if I open that can of worms.
Billy-G: Software that the end-user sees
Stevie-W: Hardware, and software in the background that most folk these days don't even know exists.
Which, probably, goes a ways in explaining why so many folk know the former, since his stuff is in your face, while the latter's stuff is encased in beige boxes.
.
Now, during *years* of trying to figure out how to explain this fundamental difference, without *ever* getting far enough in the conversation to make my *main* point, the end-takeaway often seemed to be "The Next Steve Wozniak." At which point I was so friggin' exhausted... ugh.
.
So, for years I tried [and obviously failed] to rewire my brain to at least get that fundamental concept across to such folk concisely, *so that* I could maybe finally get across the next point:
....
There were MANY folk, probably *thousands,* doing what Stevie-W was doing before he and his work got "picked."
.
Now, when I say *that*, folk tend to, it seems, think I'm looking to "get picked." And, I suppose I can understand why they might come to such a conclusion (despite the fact we're nowhere near far-enough along in this discussion for conclusions to be jumped to) because I had to try to work on their level, and explain fundamental concepts from a perspective I thought they understood... which... apparently to me, requires names of celebrities to even be bothered to try to understand. Hey, I'm not claiming that *is* the way they are, I'm saying that as someone who has dedicated a huge portion of my life's brainpower to something few folk seem to understand, is it not reasonable to think that maybe I misunderstand such folk? That's how they seemed to me, so I tried to work on the level they threw at me. "The Next Bill Gates."
.
No, my point was not about "getting picked." My point was: You didn't even know who The Woz is before I told you... And He's absolutely a celebrity name you should know if you know Angelina Jolie and Natalie Portman and Microsoft and Apple and Bill Gates. But, you didn't. And just like you didn't know #2, you clearly don't know that #2's work wasn't really even revolutionary at the time. Things like the Apple I, for which he got picked, were made in garages and basements and bedrooms and dormrooms by the thousands at the time. Only *one* of those thousands "got picked." You see it every day, and yet you don't even know #2 exists.
I'm not trying to be #2,
I'm not even trying to be one of those thousands. I *am* the sort of person who *does* things similar to what those thousands did. The same sort of person that was so common in that era that there was a RadioShack for them in darn-near every small-to-midsize town across America.
Jesus, I'm so tired of this discussion.
You like working on cars? You trying to be the next John DeLorean? Unlike your dabbling under the hood when you turned 15.5:
This *is* my life's work. I started at 6y/o. I helped family and friends and even headed the school's computer lab at 10. I got two jobs, simultaneously, in the field at 15, *two* careers I kept for a decade. How dare you compare this to your hobby? But, likewise, how dare you compare this to some wannabe celebrity?
.
NOW:
There's a VERY interesting thing going on, these days, in the realm of plausibly bringing some understanding to the folk who need a celebrity to understand.
Apparently, back before Billy-G was a name folk knew, before Stevie-W was a name folk in the know would've known, a computer was made, en masse, that was essentially forgotten in a couple years' time.
Just last month someone famous in these circles shared that he discovered an ebay seller who had been storing nearly a thousand of these computers for nearly 40 years, and was trying to sell them off. He spread the word. Now it's a sensation.
Man, that friggin "The Next..." intro was so friggin exhausting, I've completely lost sight of what I came here to write.
...
Here's the brief summary; maybe later I can give it the more words it deserves than "the intro" leeched from me.
...
Announcement of "Vintage Computer You Never Heard Of, 1000 available. But no software exists for it, and it couldn't access it even if it did..." --A call to folk like those thousands of Stevie-W-alikes to try to turn it into something: reverse-engineering, hardware add-ons like disk drives, software programmers... Gold mined; now it just needs to be refined!
Hours later: Son of one of its developers becomes an instant celebrity because he's got inside knowledge about the machine's inner-workings from his father's estate.
This thing might actually be capable of *doing* something.
Days later: Instant celebrity of the machine gains attention of many: including other original developers who have been letting their unloved masterpieces collect rust and dust for 40 years.
Original software acquired.
New affordable Method for loading/distributing software, devised.
Original add-on hardware, barely past prototypes, dug out of dust-heaps.
...
Frankly, I lost interest pretty early-on, as the part that interested me--the reverse-engineering (and later forward-engineering) effort I could've contributed that might've helped make this useless thing useful--was quickly rendered moot by masses of folk far better at it than I, then even their efforts were rendered moot by the discovery of original software, etc.
...
Now, I'm watching as someone--who saw his years of hard work result in nothing but 40 years of rust and dust in his basement, a guy who, at the time of thousands of Stevie-W-alikes, was not unlike the thousands of Stevie-W-alikes, except maybe in being in the top-100's, having been "picked" by a company that didn't get "picked" by the public--becomes [yet another] instant-celebrity for something he did 40 years ago.
Heh. This whole scenario is both ridiculous and heartwarming at the same time.
I'm just glad to see that there are so many folk interested in the technology of the era...
It's been said before, many times, many ways:
That era of computing was basically the last where one person could understand their entire computer, inside-and-out. Where its functionality could be deterministic to such extents as controlling exactly which clock-cycle would toggle a pin.
This, I think, is the level we should be introducing tomorrow's computer-engineers to, *starting* them at. The keystone of computing that still exists, but is buried under so many layers that it's now become commonplace to not even be aware of the layers already there, and so to reinvent them atop the others; ever slower, ever more resource-consuming, reintroducing bugs that were squashed ages ago...
They hype it up with terms like "retro" or "vintage" or "8-bit", but I guess that's what it takes to get many folk to even bother considering understanding the machines they use, or design [for].
Some of these folk may later design self-driving cars and medical devices... with a fundamental level of understanding I think everyone who could be impacted by those systems [i.e. everyone on the road] should be grateful for.
Here's hoping.
Meanwhile, a tear-jerker as a man's forgotten efforts get some recognition four decades later.
...
Oh, btw, it's called "The Nabu Personal Computer". Heh.