-
SDRAM/DIMM Specs and sTZ
07/23/2016 at 22:08 • 0 comments
If you haven't looked already, #sdramThingZero - 133MS/s 32-bit Logic Analyzer is pretty much the new project... turning sdramThing into a bit more of a logic-analyzer peripheral to be connected to any "host", rather than an entire system in and of itself... I intend to keep it as "transparent" as possible, e.g. using only 74xxx00-series chips, such that it could be easily-understood/reproduced as well as implemented on most any host (e.g. a Raspberry Pi through Python, or even a Commodore 64 through BASIC).
So, that'll probably be a good place to get any new info I make available on how to work with SDRAM. (Oh, and Thanks @Benchoff, for the blog-writeup on that!)
I've also added two of my go-to resources in the "Files" section of both projects... DIMM pinouts/layouts as well as SDRAM pinouts/commands, etc.
-
Single Instruction Computer - "ByteByteJump"
06/13/2016 at 06:22 • 0 comments
I think something like this might be almost completely doable with the state-machine already built into SDRAM...
http://hackaday.com/2016/06/11/designing-a-single-instruction-computer/
I haven't looked into the details too thoroughly, but I think the idea is basically to implement whatever instructions you wish to use as look-up-tables in memory. Well, hey, here's a shitton of memory ;) Then there's something about jumping, which sdramThing is *really* good at...
Of course, loading those look-up-tables is another thing entirely... FLASH makes more sense, but it's generally slower and smaller (in the parallel-interface format).
It's a curiosity I might revisit... (in fact, by writing this, I am already revisiting the idea of turning sdramThing into a CPU... something I vaguely thought of a while-back... here's something suggesting it's not only possible, but even implemented in different forms)
-
Did planning toward ddrThing from the start bite me in the a** AGAIN?!
03/19/2016 at 11:55 • 0 comments
It almost seems as though the SDRAM clock could be strobed whenever I so-desire... which would *REALLY* simplify the danged thing. Could probably get rid of one-shots altogether, dagnabbit!
DDR SDRAM has a "DLL" which, as I understand, is similar to a PLL... Further, the DDR chip-specs I looked up specify that operation without the DLL is not an option (even though action must be taken to enable it). The main point being: The DLL requires the clock-rate to be *stable*. In fact, it takes something like 200 stable-clock-pulses to synchronize before you can even use the DDR SDRAM... and changing the clock-frequency requires first putting the device in standby, then a resynchronization... It doesn't seem particularly easy to change the clock-frequency of DDR, and arbitrary pulse-durations seem to be out of the question.
Again, I designed sdramThing, from the start, with DDR in mind for the future... I was reading the manuals side-by-side throughout most of the design of sdramThing, and intentionally put some limits on the design based on what was possible with DDR. SDR SDRAM was the proof-of-concept, at that point... since it's easier to work with, especially regarding voltage-levels and the ability to run at lower clock-frequencies.
Though, I have taken advantage of some SDR SDRAM features that aren't available in DDR... Mainly, the long bursts: SDR can do bursts of an entire page (usually 512 or 1024 words), whereas DDR's longest burst is EIGHT words. That... would definitely require some careful-timing when connected to something like an AVR... especially considering it's DDR (so 8 data-words is only 4 clock-cycles), and considering the minimum clock-frequency of 85MHz.
That part, the short burst-length, is a big kicker which I have yet to wrap my head around... I'm pretty sure it'll work in Free-Running, but actually loading the DDR with those initial free-running commands is... questionable. The obvious option is a bunch of latches; I figured I'd come up with something else somewhere down the sdramThing line, but haven't yet.
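(For reference, here's roughly how that burst-length difference shows up in the mode-register value loaded during initialization. This is an illustrative sketch based on the generic JEDEC field layout, not a copy-paste from my code, so double-check the bit positions against the actual chip datasheets.)

```c
/* Illustrative only: generic JEDEC mode-register fields, as driven onto the
 * address pins during a LOAD MODE REGISTER command.  Verify against the
 * actual chip datasheet before trusting the bit positions. */
#define MR_BURST_LEN_1        0x0u          /* A2:A0 */
#define MR_BURST_LEN_2        0x1u
#define MR_BURST_LEN_4        0x2u
#define MR_BURST_LEN_8        0x3u
#define MR_BURST_FULL_PAGE    0x7u          /* SDR only -- no DDR equivalent */
#define MR_BURST_SEQUENTIAL   (0x0u << 3)   /* A3 */
#define MR_CAS_LATENCY(n)     ((unsigned)(n) << 4)  /* A6:A4 */

/* sdramThing-style SDR setup: burst an entire page per READ command */
unsigned sdr_mode_reg = MR_BURST_FULL_PAGE | MR_BURST_SEQUENTIAL | MR_CAS_LATENCY(3);

/* The longest burst DDR offers: 8 words, i.e. only 4 clock cycles' worth */
unsigned ddr_mode_reg = MR_BURST_LEN_8 | MR_BURST_SEQUENTIAL | MR_CAS_LATENCY(2);
```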
And, since I haven't achieved anything above 30MHz, yet, I figure ddrThing is still quite a ways off (and, frankly, unnecessary for most of my needs).
So, per recent log-entries, I've been contemplating another SDR feature that's not available in DDR: the ability to use CKE dang-near arbitrarily, especially during a read/write burst...
And... I've been contemplating clock-switching... E.G. load the initial commands while running off the AVR's clock, then switch over to the SDR's clock once it's loaded. This'd definitely reduce/remove the need for CKE... But, then, it occurred to me, again, that e.g. a Raspberry Pi wouldn't provide a determinable clock/instruction rate, so we're back to needing CKE.
Right... But then, it would seem from what I've read, the SDRAM's clock doesn't necessarily need to have a steady-rate in most cases. It almost seems as though, in many cases, the clock could be strobed whenever one pleases... Which could *completely* remove the need for CKE and one-shots altogether.
.... and then there's this, which I hadn't come across in all my reading today; it's a note-number that isn't even listed in the "NOTES" column of the table it's under... and it somehow just happened to be centered in the PDF-window when I went to copy something else to put right here... (weird).
"The clock frequency must remain constant (stable clock is defined as a signal cycling within timing constraints specified for the clock pin) during access or precharge states (READ, WRITE, including tWR, and PRECHARGE commands). CKE may be used to reduce the data rate."
The part I was going to *look for* is that bit in the parentheses, which is *all over* the document:
(stable clock is defined as a signal cycling within timing constraints specified for the clock pin)
And, what I was about to say, was that the only timing-constraints I can find for the clock-pin are minimum-values for clock-high and clock-low, etc. Nothing regarding a specific frequency, nor about the high/low widths needing to be any particular duty-cycle (until that friggin' note, above, popped-up).
Alright, weird. So, the gist is that CKE is necessary, again.
-------------
In a way, it's a bit of a relief I found that, since what I've been working on all day is seeing whether there's a way I can emulate the one-shot circuitry *with* an SDRAM chip...
The Free-Runner uses half a DIMM, but actually only uses three of the four chips. It uses half because that's how the Chip-Selects are separated on a DIMM. So, I could plausibly pop one chip-select and reroute it for this new purpose. (Alternatively, maybe, there'd be benefit from having two DIMMs serving different purposes).
One discovery that *could* be useful: If you start it *just right* you can alternate DQM pulses with every-other read-output (e.g. in a burst)... Not exactly sure *how* it'll be useful, but it might be. E.G., maybe, to have two separate free-runners running side-by-side... plausibly running two entirely different state-machines that needn't be lock-stepped. If the first is set to only output instructions on even cycles, and the second only on odd-cycles... Could be useful.
Actually, my initial thought was that maybe I could have the "new" chip do other things, as well, such as looking for "complex-triggers", or even generating new jump-addresses based on external input for the (single) "free-runner".
It's all up-in-the-air at this point, but definitely some new ideas generated by thoughts from the Blog-Comments. Awesome!
-
Blog Mention! + sdramThingZero
03/15/2016 at 23:46 • 2 comments
Alright, first-off... a Major Thank You to @Al Williams
(I think I got it right, this time ;))
For The blog write-up to rule them all...
(Or, maybe not "them all" as in all blog write-ups in general, but as in a write-up [about sdramThing] to rule all the write-ups about sdramThing that've been attempted... as far as I'm aware I've been the only other write-up-attempter, and mine have been exceedingly wordy and likely quite confusing in all cases. That is to say: thanks for coming up with an explanation that actually makes sense, without my incessant rambling! And for attracting attention from so many, including those who understand the odd-hackery I'm quite proud of that went into this goofball-concept! Nevermind the slew of ideas generated and resources mentioned in the comments-section... Wait, what? I'd gotten used to knowing the HaD Blog for a different type of commentator! But this is pretty cool, too.)
So, now... Per this project's previous log-entry, despite my prior cynicism, and with a lot of great insight from others (see "previous log-entry")... I think it's plausible, nay probable, to create #sdramThingZero - 133MS/s 32-bit Logic Analyzer... a new, GPIO-based peripheral-ish to connect to nearly anything with GPIOs (especially, maybe, a PiZero, with its display/HID abilities and huge processing-power!) based on little more than an SDRAM DIMM and some glue-logic. Possibly, probably-maybe--based on some inspiration from the blog's unusually-insightful commentators--some of this project's necessary glue-logic (as well, maybe, as that necessary for some features newly inspired by others amongst the afore-mentioned commentators) might actually be implementable within the very SDRAM that needs the glue-logic in the first place.
PARSE THAT!
Suffice to say... We're talking... plausibly, relieving much of the glue-logic and latches necessary for e.g. "one-shots"... We're talking, plausibly, adding new features such as complex-triggering (e.g. via serial/parallel data-pattern-matching)... and, quite-plausibly, doing-so with little more circuitry-wise than what's already available on the SDRAM DIMM, [redundantly] already...
This is getting dense and intense... but it's a challenge, mental at least, that seems (to me, at least) worthy of as much time as many of the mental "challenges" many of us convince ourselves are necessary on a daily basis.
So... If you dig what you've seen, here, and are interested in seeing it branch-off in a newly largely-group-inspired direction that could quite-plausibly be not only usable/useful, but also repeatable with little more than a Single-Board-Computer (or microcontroller, plausibly even a PC's parallel-port) of your choosing, a PCB-order, and an old DIMM likely already in your aging collection, consider visiting/following (and even joining?) #sdramThingZero - 133MS/s 32-bit Logic Analyzer, in addition to this one!
This'll be an entirely new, and completely un-pre-thought-out path for me, as far as projects go, so... who knows what'll happen... It's plausible it'll be a *very* slow process. (It took four or five years to get to this point). It's also entirely plausible that everything I've been envisioning fitting together like puzzle-pieces these past few hours is nothing more than too much coffee and delusions of grandeur. But do follow!
Or, if you find yourself creating something from any of these ideas, I'd love to hear about/link it.
Thanks again, Al Williams, for the write-up/attention!
And thanks to everyone who's given great ideas already... Let's keep 'em coming till my brain explodes(?)!
(If you dig it... I hate to ask... but I could also really use some moolah, in general these days, but this month especially.)
-
CKE? - "Well... shit."
03/09/2016 at 13:07 • 1 comment
Thoughts have been bouncing around regarding sdramThing over the past few months... If you hadn't noticed, it's been on hold for quite some time... The original motive for the revisit that advanced it from 2.0 to 4.5 was actually my plan to build a CNC machine (maybe a PCB-router)... I didn't have any motor-driver chips at the time, so I was planning to use sdramThing as a logic-analyzer to figure out what signals to send to a motor-driver I found in an old printer. Whew. I wasn't able to get it fast enough to read those signals (whoa, 18+bit SPI at > 30MHz?!), probably due to the ratsnest of wires... But I did eventually get some motor-driver chips, and went on with my original goal of working toward the CNC machine... So, sdramThing reached a pretty decent stopping-point. Definitely functional, definitely usable, proof-of-concept regarding one-shots and separating the uC clock from the SDRAM's clock definitely proven...
But, it's been bouncing 'round the ol' noggin again recently... thanks in part to interest from folks like @James Newton and @frankstripod. I had it in my mind that maybe, even, it could be run from a PiZero (inspired by the Adafruit PiZero contest at: https://hackaday.io/contest/9326-adafruit-pi-zero-contest, and some insight in the comments there from @nistvan.86). Running on a more-powerful processor like that would open up new doors as far as actually *processing* the data, nevermind displaying it... Got some excellent information on just how different the GPIOs on a PiZero are from those on an AVR, from @usedbytes and @Vikas V, and some others.
As it stands, sdramThing4.5 requires *precision* timing on the part of the microcontroller. For the AVR this is (comparatively) easy. Using assembly, it's possible to determine *exactly* how many CPU-cycles occur between, say, the rising-edge and falling-edge on a pin. Apparently that's not exactly possible (or at least not as easy) with more "sophisticated" processors that use things like cache, branch-prediction, MMUs, Flash-based execution vs. executing from RAM, etc. (check out some very informative commentary/explanations by @usedbytes at #Operation: Try the Pi).
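(To make "precision timing" concrete: on an AVR with interrupts off and code running from flash, every instruction has a fixed, documented cycle count. The snippet below is a made-up illustration, not sdramThing source; it just shows the kind of cycle-counting I mean.)

```c
#include <avr/io.h>

/* Hypothetical example: a pulse on PB0 whose width is knowable to the exact
 * CPU cycle.  SBI/CBI are 2 cycles each on classic AVRs, each NOP is 1, so
 * the pin is high for the 3 NOPs plus the 2-cycle CBI: exactly 5 cycles. */
static inline void exact_pulse(void)
{
    __asm__ __volatile__(
        "sbi %[port], 0 \n\t"   /* PB0 high                   */
        "nop            \n\t"   /* stretch the pulse, 1 cycle */
        "nop            \n\t"
        "nop            \n\t"
        "cbi %[port], 0 \n\t"   /* PB0 low                    */
        :
        : [port] "I" (_SFR_IO_ADDR(PORTB))
    );
}
```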
SO.
There seem to be a multitude of strikes against porting sdramThing to other architectures... Oh, the other thought was... what about using this as an oscilloscope-shield for Arduinos...? I'm not too familiar with Arduino and its libraries, so I can't vouch for what I'm about to say, but it seems like precision-timing to the extent of using Assembly, shutting off the system's timer-interrupts during the five minutes it takes to "boot" the SDRAM, and whatnot, is maybe not so Arduino-friendly.
So, now, I guess there's a theme... it took me a while to piece-together. But basically: can this be accomplished on other systems, especially those where timing is imprecise...?
One thought was to go back to (or start a new branch from) sdramThing1.0, wherein this level of precision-timing is unnecessary. sdramThing1.0, though, uses *quite a few* pins... Surely more than the GPIOs available on an RPi. So, then, latches and buffers... and now we're at the point where it'll take multiple cycles just to load a single data-byte to the SDRAM.
So, I was about to count out all the necessary pins, see if there's a way to reduce the pin-count, or at least see whether *either* system could possibly fit in so few GPIOs (FYI, sdramThing4.5 requires at least 23 GPIOs; does the Pi even provide that many?!)...
But then it occurred to me... CKE, the Clock-Enable input. I never used this pin for anything before... For the most-part it seems like it's intended for putting the SDRAM in "sleep" mode, but it appears that it *can* actually be used to essentially halt the SDRAM for individual clock-cycles. Huh!
So, this whole time, instead of using *multiple* one-shot circuits, I might've been able to use *just one* attached to CKE! Further, doing-so might mean that precision timing is completely unnecessary. Just set-up the command with CKE disabled, then set up your data, then enable CKE (via the one-shot) for one cycle whenever that data is ready.
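(Something like this, host-side... a rough sketch against made-up GPIO helper names, not real sdramThing code, and the exact CKE setup/resume timing still needs checking against the datasheet:)

```c
#include <stdint.h>

/* Hypothetical GPIO helpers -- stand-ins for "wiggle these pins", however
 * slowly the host happens to do it (C, Python, whatever). */
extern void cke_low(void);
extern void set_address_pins(uint16_t addr);
extern void set_bank_pins(uint8_t bank);
extern void set_command_pins(uint8_t ras_cas_we_cs);
extern void set_data_pins(uint32_t data);
extern void pulse_cke_one_clock(void);   /* the single one-shot */

void sdram_issue_command_slowly(uint16_t addr, uint8_t bank,
                                uint8_t cmd, uint32_t data)
{
    cke_low();                  /* clock-suspend: new commands are ignored   */

    set_address_pins(addr);     /* take as long as we like setting these up; */
    set_bank_pins(bank);        /* no precision timing required of the host  */
    set_command_pins(cmd);
    set_data_pins(data);

    pulse_cke_one_clock();      /* one-shot drives CKE high for one SDRAM    */
                                /* clock, so the command/data get sampled on */
                                /* exactly one rising edge                   */
}
```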
Well, shit...
Still not sure about that GPIO count, but reducing the number of one-shots is quite helpful, and relieving the timing-requirements means it should be able to run on any system, even programmed in C rather'n assembly, maybe even Python.
-
It may not be as niche as I thought...
03/09/2016 at 07:48 • 0 comments
10/27/23: This has been sitting as an unfinished draft since 3/9/16... 7 years. heh.
Herein, it seems, are the beginnings of #sdramThingZero - 133MS/s 32-bit Logic Analyzer
...Thanks to @Vikas V, @usedbytes, @James Newton, @frankstripod, and a few others I'm sorry I'm forgetting by-name, for bringing to light that this project may not be as... obscure...? niche...? or something... as I had thought!
In this era of *really fast* single-board computers, and 100+MHz microcontrollers like the ARM and PIC32 (which I'm becoming familiar with)... nevermind BeagleBones, Raspberry Pis, etc. with built-in memories larger than most SDRAM DIMMs... It was easy to think "sdramThing is really just a hack to make a lowly 8-bit uC do way more than might be expected"...
That's true, actually... I kinda went into it with that goal in mind... Really, the main goal was to see whether the SDRAM could control itself, and what could be done by doing-so... in which case the lowly-AVR is merely an initializer, or a boot-strapping device...?
I kinda figured maybe others might see value I hadn't predicted in this "Free-Running Mode" and take it on in their own projects, now that things like Arduino are accessible to the creative-masses beyond just engineers. But, I did kinda figure it would be a bit niche... It has its limitations, for sure... It's not quite the same as merely plugging a huge amount of RAM onto a 'lowly-AVR,' usable for any purpose one might need a huge amount of RAM for...
Instead, "Free-Running" SDRAM is really best-suited for huge(/fast) *bursts* of raw data I/O... and my creativity-level hasn't really thought of much beyond either a video-framebuffer (and a stationary-image, at that), a logic-analyzer, or an oscilloscope (being like a logic-analyzer with an ADC attached to its inputs).
I had some other non-raw-burst ideas that are a bit "out there," including things like creating complicated jump-tables that could plausibly turn Free-Running SDRAM into something a bit like a processor. E.G. certain "pages" in the memory could be certain functions, a "NOP" being the easiest, where the only thing stored in that page is a "jump" back to its beginning. A separate source (e.g. a microcontroller, or a bit of logic combined with a multiplexer) could reroute that "jump" command to another location entirely... Anyways, it's totally vague and a bit "out there".
Or, similarly, several separate data-bursts could be stored in different locations for different purposes. A simple example, from my experience with sdramThing, is the use of the framebuffer connected to an LVDS-LCD display. One possibility is to store the entire image-frame ALONG WITH the necessary horizontal/vertical porches and sync signals. In other words, every row of image-data *also* stores the horizontal porches. Then the entirety of the RAM is dumped sequentially. Another way of doing this would be to have one "page" (or group of pages) in memory devoted to the horizontal porches. Each row of image-data, then, would eventually jump back to this single "horizontal-porch" memory location, and an external multiplexer would reroute the "jump" at the end of the horizontal-porch to the next row of image-data. Thus, then, the source of that jump-reroute (e.g. a microcontroller) only needs to increment its address-routing bits once per row, and the entirety of the row-drawing itself is handled by the Free-Running SDRAM, alone.
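(A purely illustrative sketch of that layout idea... none of these names or numbers are from sdramThing; it's just to show how little the uC would have to do per row:)

```c
/* Hypothetical page layout for the shared-horizontal-porch idea.  Each
 * Free-Running "page" ends with a jump (really: the command-bus bits that
 * make the SDRAM ACTIVATE/READ its own next row).
 *
 *   PORCH_PAGE      : hsync + horizontal porches, ending with a jump whose
 *                     low address bits come from the external multiplexer
 *   row pages 1..N  : one row of image data each, each ending with a fixed
 *                     jump back to PORCH_PAGE
 *
 * So the uC only bumps the mux-select bits once per row; the SDRAM
 * free-runs everything else. */

#define PORCH_PAGE      0u
#define FIRST_ROW_PAGE  1u
#define N_ROWS          768u

/* Where the porch page's jump lands, given the bits the uC is currently
 * presenting to the multiplexer (hypothetical helper, host-side view): */
static unsigned porch_jump_target(unsigned mux_bits)
{
    return FIRST_ROW_PAGE + (mux_bits % N_ROWS);
}
```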
So, that thought-process could surely be extended to other things, but I've yet to think of *what* exactly... And... I digress.
SO... new fast single-board-computers with huge amounts of RAM kinda makes sdramThing seem a bit ridiculous, even as a logic-analyzer or 'scope... BUT it appears there may still be a use...
Apparently, from what I've read/understand, the Raspberry Pi is a great example... This guy has a ton of RAM, but that RAM is not really accessible to the outside world at a low-level, nor even through its few GPIOs and DMA. So, we have a *really fast* computer, capable of doing things like HDMI video output (and not just stationary-images!), tons of RAM available for computing, nevermind tons of instruction-cycles available as well... but, the only access to high-speed data I/O (e.g. for logic-analyzing) is pretty high-level (e.g. USB, Ethernet, etc.). So, we have the GPIOs, which, it seems to me, are about the equivalent of a PC's Parallel Port... They can be accessed at the bit-level, but it's not particularly fast.
So, then, maybe something like sdramThing is still a decent idea, despite this device already having more memory than an SDRAM DIMM... We've basically got a low-speed communication to the processor, just like the case with an AVR, but can (ideally) use Free-Running SDRAM at its full-speed to (e.g.) sample a huge amount of parallel data at 133MHz... Then, maybe, load that data into the RPi at whatever rate is achievable... and have full-access to the processing-power (and video capabilities) of the RPi to process/display that data.
At some point, I suppose, sdramThing becomes something like a logic-analyzer peripheral which could be connected to whatever computing-device via whatever interface...
At some point we've got to step back and think about the goals... What are the goals...? ... As a logic-analyzer peripheral, there're better ways to achieve this... USB2.0's pretty durn fast (though I know zilch about how to program it), an FPGA connected to DDR3, and a bit of knowledge/time... and you're done. Connect it to any computer, RPi or desktop. Bam! I'd go so far as to say it's been done much better than I'd be able to, by people much better at it than I.
...
The goals mentioned earlier... pushing the limits of what an AVR's capable of...
-
Dual-Pixel Thoughts
10/23/2015 at 07:36 • 0 comments
Now that I've managed to repair a pretty-decent LCD Monitor (#Ridiculous [LCD] Display Hacks) that's been dead in my collection for nearly a decade... I'm feeling a little less stingy with the ol' 15inch monitors in my collection...
So I've some ideas...
One of those 15inchers is 1024x768, same as the one used in sdramThing, except that it's dual-pixel.
What's this mean as far as sdramThing...?
Well, it might be *difficult* to see, but it would allow for visualization of 5 of the logic-analyzer's channels rather'n 2, on the LCD.
Currently, only two channels can be "seen" on the (single-pixel) LCD, "red" and "green." (Though, 32 channels are sampled and repeated, regardless of what can be *seen* on the LCD.).
Why two? The LCD's "blue" "channel" is tied-together with the LCD-timing-signals which shouldn't change, regardless of sampling, so "blue" can't exactly (easily) be tied-together with the displaying of sampled data. This isn't a bad thing, though... It leaves that color/channel available for other purposes, such as the cursors... which are quite handy.
That said... A dual-pixel display would allow for *5* channels to be viewed simultaneously on the LCD... The first two, mentioned before, "red" and "green," would be displayed in the first, third, fifth, and other "odd" columns... The remaining three would be visible in the second, fourth, and remaining "even" columns, in red, green, and blue. The cursors could remain in the odd columns...
It's a weird concept, but not exactly useless. Weird: well, even and odd columns of pixels would be displaying different sets of samples that were all taken at the same time. The first two channels' samples in one column, the next three in the next... You can imagine how hard it would be to *see* a particular waveform/pattern in that manner. OTOH, *seeing* a waveform on this "rasta"-display is difficult anyhow. That's the whole point behind sample-and-repeat (to be fed into an oscilloscope). The LCD, basically, allows for visualizing that there is data, and allows for the cursors to be used to zoom in on that data (on the oscilloscope).
So, it seems useful, despite being hard to view. Consider how hard it is to view data anyhow... First of all, there's the fact that rather than displaying waveforms in the typical ____----____-___----___ fashion, they're displayed as basically: *** * **** . That's hard enough to view. Then there's the fact that the Red and Green channels are overlapped, and acting, roughly, like they're "transparent"... When the first channel is 1, and the second is 0, you get red. When the first channel is 0 and the second is 1 you get green... When both are 1 you get yellow. Then, take it a step further: Because the display is LVDS, it's actually visualizing *seven* samples at each pixel... (is that right? must be...). And, it's not exactly... well, it's *far from* the typical expectations of how one might merge seven samples of data into a single pixel... (typical ideas might be: average the seven samples and display that as the brightness, or maybe: turn the pixel full-on if any of the samples are 1). Instead (as I recall, and slightly simplified): It's displaying the first sample as 1/128th of the brightness of that color of that pixel, the second sample as 1/64th... that last sample displayed in that pixel is 1/2 brightness, not because it's of any more importance than the others, but just that that's how it works out.
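(If I'm recalling the weighting right, it works out to the seven samples just being the seven bits of that pixel's color value, LSB-first. Illustrative sketch, not sdramThing code:)

```c
#include <stdint.h>

/* Illustration of the weighting described above: sample 0 lands in the
 * least-significant bit (1/128 of full brightness), sample 6 in the
 * most-significant of the seven (1/2 of full brightness). */
static uint8_t seven_samples_to_color(const uint8_t samples[7])
{
    uint8_t color = 0;
    for (uint8_t i = 0; i < 7; i++)
        color |= (uint8_t)((samples[i] & 1u) << i);
    return color;   /* 0..127 */
}
```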
So, considering all that, the LCD is basically just a means to select a portion of data to be zoomed in on... if/where there's data to be zoomed in on. So, then, maybe dual-pixel (pairs of columns displaying a total of 5*7 samples) makes sense...
(An easy alternative, actually, might be simply to take the 32 channels and OR them all together into the two on the original single-pixel display... hmm...)
Anyways... it's a contemplation. Another consideration, though... At 1024x768(*7, being LVDS, and *4 since the SDRAM's 32-bits wide, nevermind the display's "porches"), we're nearing the limit of a single bank of my SDRAM (the "side-kick", used for sample/repeat). BUT. If using dual-pixel, we'd actually cut in half the number of samples we can display at a time... 512x768(*7). Graphically, again, it's not a huge deal... we don't see *every* sample anyhow...
The bigger problem is that *as-implemented* it's a temporal problem... If the original single-pixel setup displayed, say, 2 seconds worth of data on the LCD, the new dual-pixel setup would only display 1 second worth.
So... I'm contemplating things... I have some ideas.
Another issue, of course, is the use of "blue" as a data-channel *and* as a cursor-channel. In the grand scheme of things, assuming you're not so close to the display as to actually see each individual pixel, the cursors would either be 50% bright-blue, or *not* blue... And sampled-data on the alternate pixels would never exceed 50% blue-brightness, either... So, I think, there should be some amount of visible distinction between cursor-blue and data-blue.
So, quick brainstorm...
I've been contemplating ways to select portions for viewing on the LCD... LCD-zooming, in a sense. This one would require significant hacking to sdramThing... we'd probably be at 5.0 or 5.5 by that time... Probably some additional logic and/or multiplexers... But, it seems plausible. I've already got *two* (three? It's been a while) entirely different sets of instructions stored simultaneously in the "FreeRunner"; why not add more for LCD-zooming? Possible.
Another possibility: Most LCDs I've worked with don't mind if you keep sending data on the current row, even after you've passed the end of its displayable row... In that way, it might be possible to send *two* full "LCD-Rows" worth of data to the display (and in "repeat" mode, to the oscilloscope) but only display one... So, some lost-data visually, but still transmitted for "ultimate-zooming" on the 'scope.
There's probably a ton of options, really... I have to call it a night, though, at least on this front.
-
on hold... + weird-reminder of "WHY"
07/03/2015 at 08:36 • 0 comments
On-Hold... It happens... but if it weren't for the thing sitting on my workbench inches from my computer, the precariously-hand-wired PCB the cat keeps walking on, and a couple mentions/reminders from a couple people here (@frankstripod and @PointyOintment I'm looking at you!), I'd probably have forgotten about it completely.
But, I *did* completely forget *why* I revisited this project in the first place... it was sitting at v3.0 for over a year, I think... untouched... before I had a reason to use it (before posting about it here and working my way up to v4.5)... And the weird part... the "why" is sitting on the workbench right next to it. The cat walks around it on a regular basis. I have to keep reminding her not to rub her face against it, lest she get ink on her chin.
So, now I'm working on #operation: Learn The MIPS (PIC32), trying to port #commonCode (not exclusively for AVRs) to the PIC32 series (and having quite a bit of difficulty; xc32-gcc doesn't like _commonCode at all, even though _commonCode has worked *perfectly* with numerous gcc ports for several years, including Apple's hacked version(s) in 10.5.8!)... anyways...
As usually happens (and why I never mark a project "complete"), working on a *completely unrelated project* (porting _commonCode to PIC32), I was led to search for (and revisit) all the projects which *use* the newest version of _commonCode. Lo and behold: "oneAxisHolder" makes use of the newest version of _commonCode... "WTF? I haven't worked on that *for years*?!" (look in my "old projects" on my profile, and you'll see "motion-control and legos" which was the original test-bed for my oneAxisHolder... 3-4ish YEARS ago).
Wait... why the heck is oneAxisHolder using the latest version of _commonCode?! That was only developed a few months ago... Oh yeah! sdramThing was being revisited because I was trying to set-up a 2-axis motion-control system with an old inkjet-printer... I used oneAxisHolder (and added RS-232 support to move its 'hold'-position via bash-script)... to test the DC-motor/strip-encoder system for the ink carriage... (and was pleasantly surprised by how precise it seems to be!). But the paper-feed axis is controlled by a stepper, and (at the time) I didn't have any stepper-drivers except the one on the printer's PCB... which isn't (publicly) documented. So... I planned to use sdramThing as a logic-analyzer to monitor the SPI data sent to the stepper-motor-controller...
WHEW.
And I was reminded of this months later not by the torn-apart printer sitting on my workbench (less than two feet from my mouse) that my cat keeps rubbing her face on... but by a completely unrelated project.
Well, that's on-hold... sdramThing's on-hold... and I completely forgot that *before* sdramThing's revisit, I revisted (and updated) "oneAxisHolder" and now it happens to be one of the most-current projects in my collection... (nice surprise)... and quite-likely one of the best examples for how to *implement* the newest version of _commonCode.
This brain...
Didn't Einstein say something like "If a cluttered desk means a cluttered mind, then what does an empty desk mean?"
Of course, that genius also said something about: "insanity is defined by doing the same thing repeatedly and expecting different results" wherein, I give *the absolute simplest* example of a contradictory-cliché: "practice makes perfect."
BTW: nearly every cliché has a completely contradictory one... isn't that Newton's Third Law?
Also, Einstein's Biography (which I couldn't get through, because frankly he's a dick) basically stated that he was a horrendous family-man, completely disregarding one(?) of his own children... yet he also has been quoted (and repeatedly-so) as saying something along the lines of basically the most important thing in life is family... (wonder how his "illegitimatized" child felt about that? Maybe there's hope in the later pages of the biography?)
But, back on topic... cluttered-desks are definitely in my existence... cluttered everything, really. Cluttered-mind, ABSOLUTELY. Seriously, it was only a few months ago I was totally ecstatic someone suggested that a linear-encoder-strip could be used for somewhat-precise motor-positioning, even with a pretty-simple P[I]D-algorithm, and found it to be precise-enough for many purposes. #PCB mill for under $10 was the initial-inspiration... and soon-thereafter there were some posts and comments on the 'blog regarding such things... and I'd every-intention of getting that blasted-printer-carcass running as an X-Y plotter to test the precision.
Lest I feel like a complete idiot, and probably along the lines of Einstein's dickishness... I dare point out that I'm subscribed to the comment-sections on all those blog-entries, and they reached nada almost immediately; nothing came through after the first week or two... There was some heated (and intelligent) debate in there, some of which, I admit, I was involved in, and some of which I in fact found myself to be *wrong* about... (what? I'm Human?!) But, maybe it doesn't really matter, because it seems the vast majority never think about a blog-post or even a project (certainly not the comments) more than a few days after coming across it... (thoughts on being able to group project-follows in different ways...?). And most apparently don't even bother to look at the comments they're subscribed to when a new message comes through regarding something they were *heated* about just a few months ago... some, even, so much so that they went out of their way to test such things and base new projects on them... like... myself. Or, maybe, like myself, they're just losing track of blog-entries-subscribed-to, unable to remember where to comment with updates, and busy/distracted working on whatever (project?) most recently caught their attention...
On the plus-side, most of my projects, as seen here, feed into each other... updating _commonCode is a *tiny* example... maybe the zoomed-out view of the fractal... yet, each piece is a fractal-branch, and often they're revisited and improved months/years later. It's nice when zooming-in and zooming-out both end-up benefitting both perspectives (and those inbetween).
I feel like there's some "meta-perspective" that would suggest some connection with things *beyond* projects, but I can't quite wrap my head around it.
-
Auto CPU-Frequency Detection via WDT - Now Video!
05/06/2015 at 10:01 • 2 comments
So, continuation of this project is going to necessitate quite a bit of flipping between various Crystal Oscillators...
(As I recall, I was last able to get it running with a 30MHz oscillator, but *not* with a 40MHz).
Also, there's a (selectable-output) frequency-divider between the oscillator and the AVR, so there will likely be *quite a bit* of frequency-switching as I try to stretch this thing's limits.
The first step was a bit o' code on both the AVR and my PC's serial terminal emulator to automatically switch between 9600bps and 4800bps, for, e.g., when I switch between direct-oscillator connection and divide-by-two... (or divide-by-2 and divide-by-4). But that doesn't help when switching crystals...
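(The reason 9600-vs-4800 covers the divide-by-two case: the UART's baud divisor stays put while the input clock halves, so the actual baud halves with it. Hypothetical numbers below, not _commonCode's actual setup:)

```c
/* AVR UART in normal asynchronous mode: baud = F_CPU / (16 * (UBRR + 1)).
 * Leave UBRR alone and halve the clock, and the baud halves too. */
#define BAUD_FROM(f_cpu, ubrr)  ((f_cpu) / (16UL * ((ubrr) + 1UL)))

/* e.g. UBRR = 103:
 *   BAUD_FROM(16000000UL, 103) == 9615  (close enough to 9600)
 *   BAUD_FROM( 8000000UL, 103) == 4807  (close enough to 4800)  */
```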
Then I saw this: #April Fools' Pic Project. No kidding... @Brek Martin used his PIC's Watch-Dog Timer to detect the frequency of the external oscillator driving his PIC. Nice.
I had to see how accurate it could be... Could I use that method to get at least enough precision for 4800bps (transmission to the computer) with any reasonable clock-frequency I might encounter (in my test-environment)?
I started coding it up, got the basic proof-of-concept, then lost steam...
Then I saw this: #AVR temperature measurement without a sensor mentioned on the HaD blog. Wherein @Thomas Baum uses, again, the Watch-Dog Timer to detect apparently minor frequency-changes due to thermal drift of the RC oscillator used by the Watch-Dog Timer.
Well, thankfully, I don't need to worry *too much* about thermal-drift (or voltage-shift), as this is a prototype in a pretty stable test-environment. And, ultimately, I won't be needing this auto-frequency-detection scheme except during this phase of development... So, basically, using the WDT seems like a pretty decent "real-time" oscillator/timebase for a situation like this. Thanks to both these folks for the inspiration!
The basic idea is to run the WDT and count how many CPU-Clock cycles occur before the WDT times-out. Pretty simple, really. And looks to be quite the handy tool.
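(Roughly like this... a minimal sketch assuming a 328P-class AVR whose watchdog has an interrupt mode; older parts with only WDTCR won't do this, the real code differs, and the cycles-per-loop-pass number absolutely needs calibrating against the compiled loop:)

```c
#include <avr/io.h>
#include <avr/interrupt.h>
#include <avr/wdt.h>
#include <stdint.h>

#define CYCLES_PER_PASS 6UL   /* depends on the compiled loop -- calibrate! */

static volatile uint8_t wdt_fired;

ISR(WDT_vect)
{
    WDTCSR |= (1 << WDIE);    /* re-arm interrupt mode for the next timeout */
    wdt_fired = 1;
}

static uint32_t estimate_f_cpu(void)
{
    uint32_t count = 0;

    cli();
    WDTCSR = (1 << WDCE) | (1 << WDE);   /* timed sequence...               */
    WDTCSR = (1 << WDIE);                /* ...interrupt-only mode, ~16 ms  */
    sei();

    wdt_fired = 0;
    while (!wdt_fired) ;                 /* sync to a watchdog tick         */
    wdt_fired = 0;
    while (!wdt_fired)                   /* count loop passes for one tick  */
        count++;

    wdt_disable();

    /* passes * cycles-per-pass over a nominal 16 ms window */
    return (count * CYCLES_PER_PASS * 1000UL) / 16UL;
}
```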
I finally got the calculations running, did a tiny bit of calibration, and got it to automatically configure the baud-rate, and the results are surprisingly accurate.
So far I've tested four crystals: 30MHz, 40MHz, 27MHz, and 16MHz. With the clock-divider, this gives me 5MHz, 6.75MHz, 7.5MHz, 8MHz, 10MHz, 13.5MHz, 15MHz, 16MHz, and 20MHz. (This AVR is rated for 8MHz, but runs up to 20MHz, and no higher).
All except 8MHz resulted in something close-enough to 9600bps that the computer reads it glitch-free, all automatically-detected/configured. No kidding!
Currently, it's being tested in-system, so video wouldn't be too intelligible/convincing. I'll try throwing it on a breadboard, soon.
-
Randall Engineering "The Pocket Logic Analyzer"
04/26/2015 at 22:17 • 0 comments
So, digging 'round my parts...
I came across this ol' thing... (in the title)
(Maybe I'll upload some pictures, or maybe I'll do a tear-down...?)
This was inherited from an old buddy's buddy who passed away...
Quite literally the *only* information on the unit itself is:
- "The Pocket Logic Analyzer"
- "32 Channel, 100MHz" (?! 100MHz ?!)
- "Randall Engineering"
- a pinout for the bit-numbers (1-32)
- a pinout for "Clk A" and "Clk B"
Inside there's no silkscreening... just "Randall Engineering" and "1992" so I don't have much to go on... Oh, also, the word "slow" written in *three* locations, including *twice* on the FPGA marked -125 (MHz?! 1992?!)
But I do have a slightly-corrupted 3.5in Floppy Disk!
And I've managed to get the application "ana.exe" running, somewhat, with mouse support!
"Pre-Release Version: Feb 4, 1995, 21:31:31"
I don't know for sure, but I have a feeling the guy I inherited this from mighta been working in cahootz with this "Randall" guy for devel...
There're some saved-waveforms, but otherwise I can't get the device communicating.
Anyways, was working on my new PIC32s, but came across this and got sidetracked (and a little surprised at the 100MHz achieved in 1992 with a handful of SRAMs rated at 25ns, when I can't seem to achieve better'n 30MHz with an SDRAM)