-
UEFI RANT
05/26/2019 at 04:42 • 17 comments
I give up, seriously!
This be another rant...
Seriously, when I was 15, I was so good at Intel/Windows/Linux systems that I was granted the highest-level admin privileges for a business with so many users MS-Outlook's contact-list system couldn't even handle them all. It, literally, stated that there were too many users to load... As I recall, that was something like 30,000.
At the time, I was mostly handling lower-level stuff; loading OS's on new machines, etc.
(SUBRANT: I FINALLY located a bluetooth keyboard with an ESC key, so I can use VIM on my goddamned phone, because computers are SHIT these days, and phones are SHITTIER... but that's another rant, which is coming soon to a scroll-down near you. Anyhow, this damned keyboard has [forward] DELETE as the BIG button next to "+", as opposed to BACKSPACE, which is a button in the upper-right corner next to F12... WHAT THE HELL?! I mean, seriously, I must've found myself in hell, no? ESCAPE on my other keyboard is "HOME"... BACKSPACE on this keyboard is FORWARD-DELETE. WHAT THE HELL?!)
Alright, where was I...? OH...
So, for 30 years, computers could boot the exact same way... No joke. You could literally throw a DOS 3.2 boot-floppy from the PC/XT era in my Pentium 4's floppy drive, and... it'd boot... Throw a DOS 6.22 boot/install disk in that same machine, and you could literally install an operating-system from twenty years prior on a hard-disk made twenty years later.
Sure, there may've been a few technicalities, a few limitations... Maybe greater-than 2GB was too much to ask, but I never ran into a case where FDISK wouldn't even detect the dang drive, at all!
And, again. I worked in friggin' IT, in the level of installing friggin' operating-systems on machines of countless-vintages.
------
Now, somewhere in my college-years, when I thought I'd be getting into research instead of IT, I kinda lost-touch with the ol' computer-state-of-the-art... But, again, the state-of-the-art for the THIRTY years prior was that every new generation of [x86] computers was so backwards-compatible with the previous generations that all you had to do was plop an older boot-disk into a newer machine and power it up...
Now, I dunno what happened in the less-than-1/4 of that intercompatibility-period that followed my leaving the field, but now I've got TWO damned x86-64 [compatible] machines which won't load WinXP. We're talking less than a decade, here... Nevermind the THREE decades DOS6.22 would load without a hitch. The THREE decades FDISK.EXE would detect your drive, whether it was MFM, IDE, UDMA, or even SATA... Heck, given a decent BIOS or BIOS-extension, DOS would even detect SCSI drives, even in the PC/XT era.
Yet, now, even with a BIOS/UEFI-configuration-setting for SATA's Legacy-mode, nary an OS-Installer-boot-disk I've access to (From WinXP to Win7) will detect the friggin' most important piece of hardware in the machine... the friggin' hard drive.
I'm sick of it.
Frankly...
That's all it boils down to. COMPUTERS ARE STUPID. And the designers, apparently, are getting exponentially stupider every year.
-
To the dude who thinks learning to program a parallel port is so passé
05/18/2019 at 05:58 • 1 comment
Was to be submitted to a post on some forum... Dude was *blessed* to (still) be taking a class where low-level programming of parallel ports was taught... And was downright obnoxious about it... "Because I live in 2011."
Yeahp... this is nothing but a rant...
LOL, attitudes like these are why we have super-computers thousands of times more powerful than took us to the moon--hundreds of times more powerful than those that rendered Toy Story--in our pockets to be dropped in toilets when we're dissatisfied with what we've got, rather'n someone's finally, for godsakes, recognizing that we already have far more than enough, if we'd quit pretending we know more than the previous generation, and winding up subjecting everyone, including the previous experts, to the newly-designed square wheel, to slowly chisel those corners once again. Seriously, when computers were *dozens* of times less powerful, we had multi-level undo, undo-histories where a single step in the history could be removed or modified, and all those following still applied... when computers were *hundreds* of times less powerful, we had undo and redo (to, yahknow, undo an undo). And here in an era of super-super-computers we'd be friggin' lucky if by installing an app from a questionable source that requires opening up security holes on one of our single most important devices in our lives, we *might* get access to a CTRL key on our screen, that *might* give us the ability to *try* CTRL-Z... which we have to do since attitudes like these decided generations' worth of experience somehow was completely worthless in this new era... wherein menus were deemed old-tech in favor of context-sensitive-menus (which were a great invention, akin to the screwdriver after the hammer; a DIFFERENT tool, not a replacement), which were then deemed old-tech in favor of (what?!). Meanwhile, hundreds of CPU-Powers less than we have now also had *multitasking*... and now that we have hyperthreading and multiple-cores, we can't even have two damned windows running side-by-side, because that's too "old-school"... AT EASE mofos! (look it up).
Seriously; yeah, maybe networking isn't the best class to be teaching fundamentals like these (inport, outport)... But, you should, instead of being so friggin' cocky, take a moment to step back and say "hmm, maybe this is an opportunity to learn something". Because, frankly, no matter how much you virtualize our systems, no matter how many layers of [hardware/software] abstraction you throw at our systems, Someone, Somewhere, actually has to understand how to interface the friggin' Power/Charge LED with your multi-core ARM 2GHz processor. This is your chance to learn it, something very few are exposed to these days, and you think you're somehow above that. And, worse, completely disregard the fact that it still exists and will continue to as long as hardware and firmware exists.
And then, to top it off, you're too damned busy to be bothered to visit the lab where the systems are at your disposal *waiting* to be used... A lab where you could in fact be learning these *rare* opportunities *alongside* others, and instead, choose to figure out *extremely* difficult methods to emulate something you don't understand in the first place.
And if that ain't enough. It's folks like you, with attitudes like yours, that have made it damned-near impossible for folks with *decades* of experience, *decades* of TOOL-development/improvements to make use of the TOOLs they've developed over decades... Why? Because y'all don't give a flying rat's ass about backwards-compatibility, even when someone like me comes around and shoves it in your face that it's not *backwards* compatibility *at all*, but LOW LEVEL compatibility, which, again, will exist as long as an LED or pushbutton needs to be interfaced with any sort of software.
'cause some asshole like you will be a manager somewhere some day saying "why are we bothering with keyboard scancodes, when all our keyboards are USB or bluetooth?" completely forgetting that IN that USB or bluetooth keyboard is A PROCESSOR, a processor interacting with pushbuttons and LEDs at the level of INP and OUTP.
I want my multitasking back. Fuck your multi-threading within a single application. I WANTED multitasking *AND* multi-threading. So much for that, eh? I want my Undo back, fuck your lack thereof. And I wanted to see further progress in multi-level history-based undo. I want my Context-sensitive menus AND my menu-bar with multiple menus (File, Edit, View...), because once we had only a hammer, and it was a pretty good hammer, but then we had both the hammer and the screwdriver, and things were looking up... Then some asshole decided to get rid of the hammer altogether... Then some later asshole basically decided to get rid of screwdrivers, too... "because, it's 2010, yo!"
And, again, for a brief period we had both touchscreens *and* penabled computers... We literally had pens that could achieve sub-pixel resolution on our laptops... could detect the tilt/angle of the pen, the pressure, and still do so at fractional-pixel resolutions on laptops also containing touchscreens for those moments when 20-30 pixels' worth of resolution doesn't really matter (like, when? Like when they decided a friggin' Icon needed to be 200 pixels wide to fit our fat fingers!). Then they not only got rid of the multi-axis pressure-sensitive sub-pixel pen, they did away with the friggin' MOUSE! And not only did they do away with the mouse/touchpad for fine-control, but they did away with the danged arrow-keys! So now, when you touch, with your fat finger, a piece of text to edit, and it puts the cursor two lines up and three characters over... there's nothing you can do except, again, install some shady-ass software to replace your onscreen-already-too-small-finger-touch-keyboard with one even smaller just to have arrow-keys!
But, yahknow, "get with it, it's 2019!"
(Oh yeah, we once had a middle-button, too... It was QUITE HANDY. And, like the right-button, we could actually BOTH click AND drag with it... but y'all of your generation wouldn't know that).
So, yahknow, if you want to pretend like direct port-access is "so 1980", completely disregarding its importance to those fancy-ass blue LEDs in your case-mod, or the friggin' keys on your keyboard, then you're more than welcome to run At Ease. But, forgodsakes, please don't develop software for future generations, or worse, past.
-
The ridonculous era of overscan and hdmi
05/14/2019 at 03:51 • 1 comment
This is a rant, that's all it is...
Seriously...
So here's the deal... TVs have historically displayed the image in such a way that the edges would be covered by the TV's housing. This was because, historically, TV-signals weren't perfectly-synchronized at the very beginning of each scan-line... so the [left] edge was pretty ugly (and maybe the right, as well). Also, historically, people kinda preferred the "rounded" look of their old TV-sets, which meant that quite a bit of the once-square (by-design, not in reality) transmitted-image was hidden at the corners by, again, the TV's housing.
Frankly, it didn't really matter, back then, if you lost a tiny bit of the televised image, it was way better than looking at garbage on the edges.
OK, Great... But we live in an era of LCDs, Pixels, HDMI, etc. Frankly, in this era, I consider a display-purchase in terms of price-per-pixel. I sure-as-heck don't want my pixels to go wasted! (or, frankly, worse: mangled!)
We also live in an era where "native-resolution" is *considerably* sharper than any scaled-resolution, even with the fancy new intra-pixel-anti-aliasing (or whatever the new buzz-word may be) available today... The fact is, there are X horizontal pixels, and Y vertical, on today's screens.
Note that this varies *dramatically* from the old-school display-technologies, *cough* CRTs *cough*... which, frankly, were quite a bit more "analog". As long as the circuitry could handle it, it wouldn't appear much different if you displayed 300 rows or 350 rows... it'd just scale the image to the available screen.
But, now, our era is such that displaying 1080 rows on an allegedly 1080-row display means first upscaling the image from 1080 to, say, 1120, then cropping off the upper 20 and lower 20 rows. This is called "overscanning." The idea is to mimic the old TV-technology of displaying stuff outside the TV's housing, so you won't see the "garbage" at the edges. Except, yahknow, we're in a digital era; if there *is* any "garbage" at those edges, it's because it was *placed* there, intentionally (e.g. for a while, there, that "garbage" in the end of the analog-era contained things like closed-captioning). Regardless, in the digital-era, it's *completely* irrelevant, and merely exists as a "feature" that most people *do not want*.
There's a bunch of math involved, but basically it boils down to a *much* less-crisp image, because whereas *without* this artificial "overscanning" *every* pixel transmitted (from the TV-station, or the video-file, or the *cough* COMPUTER *cough*) corresponded to a single pixel on the screen itself, now we have pixels which are one-and-some-fraction tall/wide. And how do we display some-fraction of a pixel? By anti-aliasing, or "intra-pixel" goofiness, which may [or may not] be so sophisticated as to consider each "pixel" and the physical position of its red, green, and blue "subpixels," somehow creating a fractional-pixel (in which direction{s}?) by using a subset of RGB that may, in some cases, turn out to be GBR... [which, in fact, when displaying black text on a white background may appear as, e.g., red-shift at the edges, but that's another topic. Although, now that I think about it, it's kinda ironic that in this digital era we're essentially experiencing the bane of the analog-display-technologies *again*, e.g. NTSC="Never The Same Color," along with the whole reason people once preferred monochrome "Hercules" displays over color CGA for text-editing, and more. We're experiencing it again! History Repeats! DESPITE THE TECHNOLOGY NOT TO!]
Regardless, what it boils down to is that even when displaying a 1920x1080 image via HDMI (which is inherently digital) on a 1080p-native display, what you're really viewing is something more like 1880x1040 stretched across 1920x1080 pixels. The 20-pixel "border" on *all* sides is completely non-displayed, aka "invisible". And each of the *visible* transmitted-pixels winds up something like 1.02 pixels wide and 1.04 pixels tall on the displayed-image (1920/1880 and 1080/1040, respectively).
Now... I'm sure this is a *common* irritation, The Great Goog will tell you such... Just search for "overscan hdmi."
The VAST MAJORITY of results on how to relieve this problem are to adjust the settings on the TV itself. Makes a heck of a lot of sense... But somehow, apparently, there slipped, even in the 1080p-era, displays which *cannot* disable overscanning. I happen to have one.
----
So, let's look at this situation differently...
I'm trying to connect a computer to my TV, which allegedly has HDMI (an inherently digital interface, which inherently sends *every* pixel, as a unique entity), allegedly has 1920x1080 pixels. Yet, I cannot see my taskbar, and when I maximize a window, cannot see either of the scroll-bars.
Is it that my display is actually less than 1920x1080? Or is it scaling up my 1920x1080 image to something, again, like 1960x1120, then cropping every edge?
Again, The Great Goog (and all the searched forums) insists that the solution is to turn off overscan *on the TV*... But somehow, I've managed to come across one of the few that don't have any option like that (e.g. "pixel-for-pixel" etc.). This may be why this was only $30 at Goodwill.
But, yahknow what? RANDR 1.4 (TODO: LINK from xorg, yahknow, RANDR, which 'xrandr' gives access to, from the command-line) explicitly states that one of its new features is to *eliminate* such borders, explicitly for such televisions... (TODO: Friggin' copy-paste that quote. Seriously).
Thing is... HOW is it done...?
(and, by the way, it *isn't* done with my setup... because apparently my vid-card's driver doesn't offer that capability).
So, to recap: My TV overscans, inherently. Something like 20+ pixels of the image are lost at every edge. It's *plausible* my TV cheated, and isn't actually 1920x1080, but more like 1880x1040... It's equally-plausible it's actually 1920x1080, but uses scaling to achieve this overscan "feature". Regardless, I can't see my friggin' taskbar nor scroll-bars.
There are a few solutions on the 'net, involving using xrandr and --translate, or --scale, but, frankly, those don't work with my setup.
And, frankly, I haven't come up with a reasonable solution, yet. The *obvious* solution involves using the RANDR 1.4 "border" technique, but, again, my vid-card doesn't support it.
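For reference, on drivers that *do* expose it (radeon being one, as I understand it), the RANDR "underscan"/border properties look something like the below -- the output-name and border-sizes here are examples, and 'xrandr --prop' shows what your driver actually offers:
$ xrandr --prop | grep -i underscan     # any such properties at all?
$ xrandr --output HDMI-0 --set underscan on
$ xrandr --output HDMI-0 --set "underscan hborder" 20
$ xrandr --output HDMI-0 --set "underscan vborder" 20
Reconstructed from forum-posts, mind you; not verified here, since, again, my driver offers none of it.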
But, again, the bigger question is *HOW* it achieves this...
WHATEVER solution I come up with will result in fractional (IOW UGLY!) pixels. Unless, maybe, my display isn't actually 1920x1080, but something smaller, in which case, I might be able to force some "border" or "translation" which results in a *native* resolution that matches the display.
Frankly, I don't really care...
What I do care about is *how* this is achieved...
If I understand correctly, at the digital-level, EVERY PIXEL is transmitted *uniquely* via HDMI... So, if xrandr is somehow capable of sending a 1920x1080 *timing* to the TV, while squeezing (allegedly) 1920x1080 pixels into the TV's *NON*-bordered space, then, technically, xrandr would be attempting to squeeze more pixels onto the display than it physically can handle. So, now, we've got a 1920x1080 image squeezed into 1880x1040 pixel-space, which, again, is scaled-up to 1920x1080 on the display itself.
BTW: each time a scaling is performed, the image gets blurrier, but that's another topic.
I'd Much Rather: Know my display's *native* resolution, and work with that... But, again, let's say it's 1920x1080, and overscanning is inherent, then the only way to display 1920x1080 actual pixels is to transmit a slightly higher resolution, which will be overscanned...
Here's where things get really stupid.
I mean STUPID.
This TV (and surely most) has already demonstrated that it's more than capable of scaling... Send a 720p image, and it'll upscale to (allegedly) 1080p. Send a 1080p image, and it'll upscale a few pixels off-screen on every edge.
So, the logical conclusion is to send, e.g., a 1960x1120 image, with borders, and the screen should scale that down to 1920x1080 *with borders* and all will be right with the world.
But No!
Displays like these seem to expect "standard" resolutions!
Wait, what?!
The dang thing can scale and scale again, but it can't handle an un-programmed resolution?!
I mean, what're we talking about, here...?
Mathematically, *every* resolution imaginable could be handled with a simple math equation... So, what'd they do, program a shitton of if-then or "switch" statements for each *expected* resolution? WTH?!
I might expect this of old displays... ala the VGA era, wherein the display itself had to determine where pixel-data started and ended... But this is the HDMI era...
I dunno how much y'all know about HDMI, so I'll tell you: HDMI carries a "Data Enable" signal. When that signal is active, there is active pixel-data. This is KEY.
Back when our LCD displays had to accept VGA input, they had to *scan* the input-signal to try to detect where pixel-data began and ended. That's because... well, VGA didn't send that information. It was *expected* that your CRT would display a black border (where pixels could've been displayed) before the actual pixels came-through.
Note: This is the *Opposite* of the old Televisions, which, again, displayed the image *before* it was visible on the CRT...
BECAUSE: EVEN IN THE VGA ERA (again, still analog), We Were Capable Of Aligning Pixels Vertically!
But NOW, In the HDMI ERA, we're pretending we're not capable of that, and "overscanning" to compensate for some imaginary problem DECADES-resolved.
Meanwhile, THINGS ARE EASIER NOW. We actually TRANSMIT a signal that says "PIXEL DATA EXISTS HERE." And, yet, we ignore it.
-------------------
THIS is where I get UTTERLY PISSED OFF....
This display--and it's surely not the only one, otherwise RANDR 1.4 wouldn't make mention of it as its *first* major change--is fully-capable of scaling up AND scaling down, AND pushing those scalings out of its range... yet it can't handle a simple input-resolution that's *slightly* out of its expected range? RIDICULOUS.
Again, we're talking about a tremendous amount of *MATH* involved in its upscaling/downscaling, yet its *reception* uses nothing but "switch" statements, *despite* being *TOLD* when pixel-data is available.
Beyond Belief.
....
Man, I wish I could come up with an analogy...
For some reason Weird Al's song "This song is just six words long" comes to mind.
.
I mean... the thing is... ALL the computation-power is there. In fact, the friggin' libraries are there. There's *nothing* stopping it, except software.
This is the godforsaken era in which we live...
On the plus-side. I got a 1080p 30-something-inch display for $29 at the thrift-store... I can deal with it.
....
On that note: My old 1600x1024 SGI display required a similar hack... Oddly, that one was *easier* because it used VGA (analog) rather than HDMI...
Lacking a "Data Enable" signal, a controller receiving VGA has to *detect* the pixel-data. The trick, there, was to initially send a standard signal (1600x1200, as I recall), synchronize the display, then switch to 1600x1024 with otherwise the exact same timings.... (so extend the vertical porches by 1200-1024=176). The *display* thought it was receiving 1600x1200, and I was able to stretch it, via configuration options, to fill the screen.
The irony, now, being that HDMI actually *sends* the "data enable" signal, telling *exactly* where the pixel-data is, yet the display can't handle it because it's *unexpected*, despite the fact that *MATH*, the same dang math it does for its weird-ass scaling, is apparently beyond it.
--------
I probably had some other intent with this rant... I don't recall...
xrandr, it may help...
Frankly, if I wound up with 1880x1040 resolution and it actually matched-up pixel-for-pixel, that'd be great. As it stands, it seems downright ridiculous to use 1280x1024 on this display. And, doesn't matter anyhow because it *still* overscans that shizzle.
Here's what it really boils down to....
I need to rant, and this particular thing is something I don't particularly care about.
Computers are stupid.
-
Oldish Sony Vaio Laptop Sound Redirection
01/10/2017 at 09:39 • 0 comments
This is for a Sony Vaio laptop, marked:
PCG-7K1L
VGN-FJ290
The default Debian Jessie + Mate configuration for this guy's sound-card is such that the only place one can get sound-output is through the A/V output connector. Not through the speakers, not through the headphone-jack.
(Interestingly, I didn't realize this was an A/V connector... I thought it was a line-out jack... BUT, it only output audio on the left channel, and sometimes the right channel would be noise, sounding a bit like 60Hz)
But I wanted left/right, and speakers would be nice... so I did some searching.
Most results said I need to recompile the kernel, some said you need to do some really gnarly stuff with alsa... But this is an old system, surely linux support for it should've been somewhat functional for some time...
Finally I came across:
https://bugs.launchpad.net/ubuntu/+source/alsa-driver/+bug/1562396
Which, if you scroll to the bottom, describes the utility: "hdajackretask"
Turns out, the only problem is that the BIOS isn't reporting (or the system isn't detecting) the output-mapping correctly.
So, run "hdajackretask" and change "Pin 0x10" to "Line Out"
Note that it does NOT work to change it to "Internal Speaker"
Here are my notes:
# Realtek ALC260
# DEFAULTS:
#  0x0f "Green Headphone, Rear side"    -> Headphone
#  0x10 "Internal Line Out, Mobile-In"  -> <not selected>
#  0x12 "Pink Mic, Rear side"           -> Microphone
#  0x13 "Internal Mic, Mobile-In"       -> Internal mic
# PHYSICALLY:
#  Yellow (Composite?!)
#  Red    Mic-in
#  Black  Headphones
#
# Default settings results in sound ONLY going to yellow jack!
#  (Left-headphone only)
#  (Often results in noise, 60Hz-ish, on right)
# OVERRIDE ATTEMPTS:
#  0x10 -> "Headphone"              -> Speakers + Yellow getting output
#       -> "Internal Speaker"       -> Yellow only
#       -> "Internal Speaker (LFE)" -> Same
#       -> "Internal Speaker (Back)"-> Same
# vvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvv
#       -> "Line out (Front)"
#          Speakers + Yellow
#          Headphone (Black) Overrides Speakers
# ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
# BEWARE:
#  Mate's Sound Preferences
#   -> Hardware
#   -> Test Speakers
#  NO EFFECT with Tests....
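(One more note: hdajackretask has an "Install boot override" button that makes the remap survive reboots, by writing a firmware "patch" file for snd-hda-intel. I haven't copied mine here, so the IDs/pincfg below are placeholders showing the *shape* of the thing, not values read off this machine:
In /lib/firmware/hda-jack-retask.fw:
[codec]
0x10ec0260 0x00000000 0
[pincfg]
0x10 0x01014010
In /etc/modprobe.d/hda-jack-retask.conf:
options snd-hda-intel patch=hda-jack-retask.fw
)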
-
command-line utilities DIFFER from BSD! What To Do???
12/03/2016 at 19:31 • 0 comments
Update 12-4-16: some ideas how to be "safe" (at the bottom)
Whoops...
And here I thought I wrote all those scripts on my old Mac such that they'd be compatible with any unixish system I might use in the future...
OK, so now I've got some learnin' to do... and not because I was planning on working *on* a script, but because I was planning to *use* a script I'd written long ago... roughly... just to make a friggin' backup for a completely unrelated project, so I could actually *work* on that completely unrelated project.
And, now, I'm a bit fearful maybe a bunch of such scripts I've been using for years may have had unexpected--and more importantly unnoticed--results on my comparatively-new linux systems... (in-use for about two years now).
----------
Today's discovery: "stat" exists on both systems, but takes entirely different arguments on each.
On my Mac, years ago, I wrote a script containing the following:
stat -t "%Y-%m-%d_%H.%M.%S,$timeArg" -f "%S$timeArg" "$file"
This outputs a path/filename-safe string indicating the (selectable) modification/creation/etc. date of a file, in a human-readable format... Yahknow, 2016-12-3_11.14.35,m
('m' indicating that I used the modification-date).
(Since, yahknow, e.g. when copying files, or worse moving files from a broken-down system to a new system, sometimes you forget to do things like preserve creation-times, or worse the different systems don't actually track the various times of the files in the same manner/at all...)
That same 'stat' line runs fine on a linux system, too... but with *ENTIRELY* different results.
Now I've a "time-string" in my backup-filename that looks like:
[input-filename] 3c83fbf27aaf3d25 255 ef53 4096 4096 59030720 4746332 1741968 15007744 13102124
where, again, it should be:
2016-12-3_11.14.35,m
'man stat' on the (now dead) OSX apparently gives the following:
https://developer.apple.com/legacy/library/documentation/Darwin/Reference/ManPages/man1/stat.1.html
which differs somewhat dramatically from 'man stat' on the linux machine I'm using.
Alright, now I'm fearful... what other utilities have I been making use of that have different arguments? What kind of unnoticed consequences have been percolating over the years...?
------------
And, since I wrote these things already for BSD, apparently, I don't really want to *rewrite* them for linux... so, I suppose, how can I somehow safely detect which system it's running on, and reformat the calls accordingly? (And... am I going to have to worry about them changing the arguments in newer versions?)
OY! That's a LOT of work ahead to make use of tools I've already been using for years.
.........
dammit, all I wanted to do was make a backup of an existing/functional project so I could get to work on improving it!
-----------
Update 12-4-16:
I think the solution is not to use checks of which *system* I'm using, but to use checks of which *utility* I'm calling.
E.G. For 'stat' on linux there's a --version argument...
So, instead of calling 'stat' directly, maybe an intermediate script that calls 'stat' that runs something like:
statVerString="stat (GNU coreutils) 8.23"

# ($statArgs assumed to have been set, above, to whatever arguments the caller needs)

blah=`stat --version | grep "$statVerString"`
if [ "$?" != "0" ]
then
   echo "$0 is only tested with: '$statVerString'"
   echo "Verify compatible arguments with 'stat $statArgs'"
   exit 1
fi

string=`stat $statArgs`
if [ "$?" != "0" ]
then
   echo "'stat $statArgs' failed with:"
   echo "$string"
   exit 1
fi
Then, I guess, next time I move my scripts to a new system (hey, what about something like busybox which, as I recall, has *entirely* different arguments for the sake of minimizing things) at least it'll warn me to verify the arguments will work before executing with possibly completely unnoticed consequences.
And, I suppose, it would be wise to include a detailed listing of what each argument does in the comments... since... like last time, I didn't have the option to run 'man stat' on the system it was developed on, since that system bit the dust.
And... doing this as a secondary-script e.g. "callStat.sh" would make it easy to fix one time for all my other scripts which call stat.
My biggest fear is all those scripts using 'cp' 'rm' and 'mv'... if different systems have different arguments, am I 'bouts to essentially 'rm -rf /'?
so... "callRm.sh" I guess is coming up soon.
Oh... and the fun-bit... apparently linux-stat doesn't have the same time-output-format options as bsd-stat... so that probably means I'll have to use something like 'sed' to rearrange the data-output. So now I'll need "callSed.sh" as well? WEEEE!
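Though, on further thought, sed might be avoidable: here's a sketch of the linux-side callStat.sh, assuming GNU stat and GNU date (coreutils 8.x; untested beyond that, and only handling the time-string bit):
#!/bin/sh
# callStat.sh (GNU flavor): emit a path-safe time-string like the old
#  BSD 'stat -t ... -f "%S$timeArg"' did, e.g. "2016-12-03_11.14.35,m"
file="$1"
timeArg="${2:-m}"   # m=modification, a=access, c=status-change
case "$timeArg" in
   m) epoch=`stat -c %Y "$file"` ;;   # GNU stat: %Y = mtime, epoch-seconds
   a) epoch=`stat -c %X "$file"` ;;
   c) epoch=`stat -c %Z "$file"` ;;
   *) echo "$0: unknown timeArg '$timeArg'" >&2 ; exit 1 ;;
esac
[ -z "$epoch" ] && { echo "$0: stat failed on '$file'" >&2 ; exit 1 ; }
date -d "@$epoch" "+%Y-%m-%d_%H.%M.%S,$timeArg"
GNU stat coughs-up an epoch, and GNU date formats it... no sed required, knock on wood.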
Note, I used blah=`...` rather than ... >/dev/null... I mean, if one can't rely on utilities of the same name to have the same arguments, how could one possibly rely on all systems to have a /dev/null? (cygwin?)
ah heck... I might as well just never learn a specific machine's arguments, and instead just always copy these scripts over, first-thing, then have 'em remap the arguments I'm comfortable with to those on whichever system I'm using... WEEE! I already have mmv, mcp, and mrm for similar reasons. mstat, mrsync, meverything! WEEE!!!
Maybe if I were more-mobile in the computing-realm, I'd do something like throw all these in a combo memory-stick/password-key. (Isn't that kinda what smart-cards were supposed to do?)
-
geda-pcb and Debian Jessie
11/22/2016 at 00:43 • 0 comments
Worked-ish... but had issues...
E.G. Running a Design Rules Check would find errors, but in the window where it shows a thumbnail of the error, the thumbnail would be little more than the background-color.
E.G.2. Double-clicking the error is supposed to send your cursor to the location of the error, and highlight the offending thingamajigs, instead it appeared to always move the cursor to the middle of the top of the board... Nothing to see, no offending-thingamajig-coloration.
(Maybe somehow related to its originally having been built with the assumption I was using GNOME3, but downgraded because my second video-card doesn't support xrandr, or something?)
----------A fresh build has fixed these problems...
$ git clone git://git.geda-project.org/pcb.git
$ cd pcb
$ ./autogen.sh
....
./autogen.sh: 1: ./autogen.sh: intltoolize: not found
As I recall, I did an install of 'intltool' to fix this.
$ ./autogen.sh
...
$ ./configure --enable-gl --enable-dbus=no --with-x --disable-doc
--disable-doc because apparently texlive isn't installed and is a 300+MB download
Not sure about dbus, but it complained about it being missing. My MacOS system used it, but apparently it's not required, so whatevs.
--enable-gl and --with-x because... well it's graphics, and the problem was graphics-related... and it worked this way on my MacOS 10.5.8 machine, way-back-when
You'll likely have to run ./configure *several* times, as many of the dependencies are not already installed... Thankfully the messages are quite informative. Search-fu may help, if you can't figure out what you need... but in most cases you need '<whatever>-dev'. E.G. The latest (and amongst the least-intuitive) was:
Cannot find gdlib-config. Make sure it is installed and in your
PATH. gdlib-config is part of the GD library available from
www.boutell.com/gd. This is needed for the png HID. I will look
for libgd anyway and maybe you will get lucky.
checking for main in -lgd... no
configure: error: You have requested gcode, nelma, or png HIDs but -lgd could not be found
As I recall, I did an install of libgd-dev... yeah it differs in name quite a bit, but I think you can get the drift.
If not, feel free to ask.
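(For the record, as best I recall, the installs amounted to -- package-names from memory, Jessie-era, so verify before trusting:
$ sudo apt-get install intltool libgd-dev
)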
$ make
$ sudo make install
Bam!
---------
There were a bunch of other quirks, too... As I recall, the keys didn't do what they were supposed to, e.g. "u" should undo, but it didn't. Now it does.
-------
And, for a moment-of-rant...
I find it hilarious that I can move around a PCB with all these renderings, zooming, and scrolling, on my second-monitor with nary a speed-issue... Yet I can't scroll a small listing of files in a file-browser window without at least half-a-second between each refresh. Nor spin my scroll-wheel five ticks in a friggin' word-processor document without getting caught in literally *seconds* of refreshes.
We're not talking about a friggin' ancient single frame-buffer ISA video-card from the Win3.1 era, here... (And, even if we were, as I recall ISA/VESA graphics-cards could be updated fast enough for full-screen *movies* back then). This was a top of the line *industry-grade* (not *consumer-grade*) 3D-renderer of its time, 32MB should be *plenty* for these things.
</rant>
<rant 2>
No, I will not while away my time learning a new PCB-application that changes every three weeks. That's utterly ridiculous. This one has worked fine for me for nearly a decade.
</rant>
-
USB Parallel Port adapter, low-level coding usblp.c
08/06/2016 at 21:39 • 16 comments
Update Jan. 2019: Apologies, this was written with #sdramThingZero - 133MS/s 32-bit Logic Analyzer in-mind, and therefore is *way* overcomplicated for most projects. If you just want to write some outputs (blink LEDs, control a text-LCD, etc.) please check out the link in the comments. You could probably control a 7-segment LED with a little wiring and "echo x > /dev/usb/lp0". Input, however, may be a little more difficult, I don't know.
However, brief note as to the main point, here... the parallel port may've been scanned by linux for attached-devices, which may leave your parallel port in an unknown mode.
This is about checking and changing the parallel port mode.
(Similar to setting the baud-rate of a serial port.)
--------
So, you've got a USB-to-Parallel adapter that you'd like to use for a project (not a printer)...
(Hey, I'm no expert, here... so take this as the ramblings of a newb):
First: There are Major Limitations... These adapters are *nothing* like the ol' Parallel Port built into old motherboards or on ISA cards... So, don't be getting high-hopes about bit-banging this thing. (There's lots of info about this all 'round the interwebs).
BUT: If your project *can* work within those limitations, then you'll first need to figure out how to set the adapter into the right mode...
So, what're those limitations?
I'm no expert, here, but it seems there are three "modes" these things can work in...
- Unidirectional
- EPP/ECP compatible
- IEEE-1284 compatible
1: Unidirectional -- Might work for your project... If *timing* isn't particularly-sensitive, you should have access to the 8-data-bits as outputs, and 3 control lines as inputs (if I understand correctly)... It could probably be done-ish... but nothing like the level of functionality the ol' ISA cards had.
3. IEEE-1284 -- This guy looks pretty gnarly and basically requires a microcontroller on the "peripheral"-side to handle the protocol. There ain't much to bit-bang with this method, unless, maybe, you're a truly-1337 haxor.
2. EPP/ECP -- This is the one I'm looking at.
So this guy doesn't look *so* complicated. And, in fact, it looks like it might be darn-near exactly the sort of interface I need for #sdramThingZero - 133MS/s 32-bit Logic Analyzer, and really not that much different from, say, the interface on an HD44780 Character-LCD.
The basic idea is to use the Data I/Os bidirectionally. This gets a bit more complicated with the whole USB thing, since you've got to send a request for a certain number of *reads* or *writes*... which means, for *reads*, your peripheral has to be able to respond with acknowledgements... (I think I can get around that with an OR gate, in my case)... and has to set-up the next byte to be transmitted, etc.
Anyways, sidetracked. The point is, when you plug in the USB-parallel adapter, it doesn't look to the computer like merely a parallel port (the way a USB-serial adapter looks like nothing more than a serial port)... instead, Linux tries to look for a *printer* connected to that parallel port.
In a high-level sense, it makes sense... it's just another bus; parallel, USB, PCI, SCSI, whatever...
But that means it's got a procedure for looking for attached devices, and when it can't find one, it might just leave your new "parallel port" in a mode you don't want...
So, there're some settings you might want to send your new "parallel port"... but how do you do that when all you've got is /dev/usb/lp0, fopen(), fclose(), fread(), and fwrite()...? Yeah, no... ioctl() is the way to go.
That's, basically, the same thing used for e.g. setting a serial port's baud-rate... you can't do that by just sending some ASCII characters to /dev/ttyUSB0, you have to use ioctl().
(or use a utility like stty or setserial, which calls ioctl() as-appropriate. I have yet to find a similar utility for parallel-ports.)
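For comparison's sake, the serial-port version of this dance is a one-liner, precisely because such a utility exists:
$ stty -F /dev/ttyUSB0 115200     # stty does the termios ioctl()s for you
No equivalent for /dev/usb/lp0 that I've found... hence the program below.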
OK, then... But this USB-adapter needs a mode-setting, and with my search-fu there's basically *Zero* documentation on the IOCTL arguments available to USB dongles.
SO, you've gotta look into <Kernel Source>/drivers/usb/class/usblp.c and, sure-nough, there are some IOCTLs listed. Awesome.
Then you'll discover that there's no Header File for usblp... And, again, it seems no one actually uses these IOCTLs, so how do *We* use them?
Yeahp, you gotta copy a bunch of #define's from usblp.c to your own code (rather than #including some standard-ish system header-file). Don't believe me? Take a look at http://hpoj.sourceforge.net/ which is the *only* thing I could find through all of the google (and actually didn't find it through there) that used these IOCTLs... And, lo-and-behold, that's exactly what they did (copied those #defines into their code).
So, it's a bit of work, but it does appear to work...
Here's the goods... (and the bads...)
//Pondering using a USB-to-Parallel adaptor
//They seem to have three modes:
//   One-directional
//   EPP/ECP
//   IEEE-1284
//One-directional likely won't work
// (Would with an *internal* and bit-banging)
//IEEE-1284 looks a bit too complicated for my needs
//
//EPP/ECP looks to be the way to go...
//   Bidirectional, using the same 8-bit data lines
//   Strobe, etc...

//First we need to make sure it's *possible* to set it in EPP/ECP mode!

#include <stdio.h>
#include <stdlib.h>     //atoi() (missing from the original paste)
#include <unistd.h>     //close() (ditto)
#include <sys/ioctl.h>
//#include <linux/drivers/usb/class/usblp.h>

//ALL THESE for open()?!
#include <sys/types.h>
#include <sys/stat.h>
#include <fcntl.h>

#include <errno.h>
#include <string.h>     //strerror

#if 0
#ifndef IOCNR_GET_PROTOCOLS
#error "YEP"
#endif
#endif

//Print (and flag) any pending errno after a syscall:
int onError(char *location)
{
   if(errno)
   {
      printf("%s: Error %d, '%s'\n", location, errno, strerror(errno));
      return 1;
   }
   else
      return 0;
}

int printerPort_FD;

//These are defined in drivers/usb/class/usblp.c
// But there doesn't appear to be an associated header-file!
//This similar manual-entry (rather'n #including) is done in hpoj-0.91
#define IOCNR_GET_DEVICE_ID   1
#define IOCNR_GET_PROTOCOLS   2
#define IOCNR_SET_PROTOCOL    3
#define IOCNR_HP_SET_CHANNEL  4
#define IOCNR_GET_BUS_ADDRESS 5
#define IOCNR_GET_VID_PID     6
#define IOCNR_SOFT_RESET      7
/* Get device_id string: */
#define LPIOC_GET_DEVICE_ID(len) _IOC(_IOC_READ, 'P', IOCNR_GET_DEVICE_ID, len)
/* The following ioctls were added for http://hpoj.sourceforge.net:
 * Get two-int array:
 *  [0]=current protocol (1=7/1/1, 2=7/1/2, 3=7/1/3),
 *  [1]=supported protocol mask (mask&(1<<n)!=0 means 7/1/n supported): */
#define LPIOC_GET_PROTOCOLS(len) _IOC(_IOC_READ, 'P', IOCNR_GET_PROTOCOLS, len)
/* Set protocol (arg: 1=7/1/1, 2=7/1/2, 3=7/1/3): */
#define LPIOC_SET_PROTOCOL _IOC(_IOC_WRITE, 'P', IOCNR_SET_PROTOCOL, 0)

void getProtos(void)
{
   //Huh, isn't this architecture-dependent...?
   int twoInts[2];

   //WHOA DIGITY: Bug in hpoj...?
   //   int twoints[2];
   //   ioctl(llioInterface[EX_INTERFACE_MLC].fdDevice,LPIOC_GET_PROTOCOLS,
   //         &twoints)   <---- twoints is already an address... &(twoints[0])!

   ioctl(printerPort_FD, LPIOC_GET_PROTOCOLS(sizeof(int[2])), (void*)twoInts);
   onError("GET_PROTOCOLS");

   printf("Current Protocol: %d\n"
          "Supported Protocols (Mask): 0x%x\n", twoInts[0], twoInts[1]);
}

int main(int argc, char* argv[])
{
   //ioctl requires a file-descriptor not a FILE*...
   // And we might want some of those other O_ options, as well.
   //FILE *printerPort;
   //printerPort = fopen("/dev/usb/lp0", "r+");

   //(Assigning the file-scope printerPort_FD, so getProtos() sees it;
   // the pasted original had this line commented-out, presumably a
   // copy-paste casualty... restored so the program actually opens the port.)
   printerPort_FD = open("/dev/usb/lp0", O_RDWR);   //Maybe O_NONBLOCK?

   if (printerPort_FD == -1)
   {
      if( onError("Open") )
         return 1;
   }

#define STRING_LEN 1000
   char ioctl_return[STRING_LEN] = { [0 ... (STRING_LEN-1)] = '\0'};

   ioctl(printerPort_FD, LPIOC_GET_DEVICE_ID(STRING_LEN), (void*)ioctl_return);
   onError("DEVICE_ID");
   printf("DEVICE_ID: '%s'\n", ioctl_return);

   getProtos();

   if(argc > 1)
   {
      int newProto = atoi(argv[1]);
      printf("Per Request: Setting Protocol to %d\n", newProto);
      //man ioctl is a bit confusing... or this is implemented weird
      // don't send a pointer to newProto, send newProto itself.
      ioctl(printerPort_FD, LPIOC_SET_PROTOCOL, newProto);
      onError("SET_PROTOCOL");
      getProtos();
   }

   close(printerPort_FD);
   return 0;
}
note that it has to be run 'sudo' on my system, despite having added myself to the 'lp' group... huh.
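For reference, building/running looks something like this (the filename's whatever you saved it as):
$ gcc -o usblp-mode usblp-mode.c
$ sudo ./usblp-mode       # print device-ID, current + supported protocols
$ sudo ./usblp-mode 2     # request protocol 7/1/2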
So, now you can set your usb-parallel dongle's port to whatever mode you like. SPP should be the easiest, allowing you to e.g.
"echo x > /dev/usb/lp0"
and see the binary value of 'x' displayed on those output-pins.
-
Oldish Sony Vaio Laptop + Debian Jessie + USB-booting when the BIOS doesn't support it
08/03/2016 at 12:09 • 3 comments
Alright! My first laptop in nearly 2 years! (I've been sitting at the ol' desktop since the old one suffered the GPU-deballing fate).
The cat *seems* to be quite happy about it... but now she's not. You know cats...
So, here we are... I finally have a laptop about the equivalent of (actually slightly lower-spec'd than) the one I invested a Hefty Fortune into over a decade ago, and expected to last me a good decade+ (but didn't due to stupidity).
So, here we are... A Laptop! 2006, no less! Movin' on up!
So, here's the deal... I'll make it real quick, then go into details, I guess.
Blacklist BOTH: 'sonypi' AND 'sony-laptop' modules.
OK, I can't recall *how* I determined this path, but this is the path I took:
First of all, the debian Jessie: 8.5.0 live-boot was flakey, to say the least. But I recalled the same from my desktop (and an older version of Jessie, as well as Wheezy?) The end-result being: *install* seems to work-ish, but live-boot is questionable.
So, that's the first bit.
Then, from there, booting seemed to stop after fsck ran... There was a weird message about an unhandled interrupt, but that wasn't the problem. Oh yeah "nobody cares" no joke, that's what it said.
Oh, wait, there was a step before that... USB-Booting... Nogo. And I've only got a handful of blank DVDs...
Most of those are now coasters. But before wasting *them all* I had a brilliant idea... Certainly someone has come up with a boot-disc/k that can load-up a menu for booting from e.g. USB even on systems which don't have BIOS support for it... right?
YEP!
I mean, I know that technology's progressing and all that jazz, but this "new" (to me) laptop is in damned-near pristine condition, despite being over a decade old... This thing's still useful, and if logic serves me, there must be a million others out there in equally useful condition (things that've been sitting in server-closets as little more than remote-desktop machines for those times) if people would just put some time into it... (two days, now... yeah, it'd be cheaper, hour-wise, to buy a new one... are you really "green"? 'cause the carbon-footprint of building a new system, no matter how much faster and more energy-efficient, *far* outweighs the carbon-footprint of a couple days of work and probably *all* the inefficiency of this CPU's remaining lifetime, running at 100% the whole time...)
Anyways, some people have actually thought about this, or are lazy, or cheap, or Luddites, or something. Thank God For Them.
YES: You *CAN* boot from a USB thumb-drive on a system which doesn't support booting from USB... https://www.plop.at/en/bootmanagers.html
So, burn ONE CD (or even a floppy) with that image, and quit wasting DVDs when you could be using thumb-drives. (BTW: my "thumb-drive" is my camera's SD-card and an adapter... Gotta erase this thing when I'm done to use my camera again. Heaven forbid).
Alllllright... where we at...?
Download that shizzle, and more importantly, if you find it useful and you make even $5 more than I make in a month (and I dang-near guarantee that's the case, in consideration of local cost-of-living, etc.) send that bloke $5 for all his hard work so the next version doesn't contain a virus...
Alright, where we at...?
OK, so this laptop... it's got some issues, it probably needs some resoldering somewhere, but in general, it's running. I have to pull the power/battery every so often before it'll boot. But I'm writing on it now, so obviously it's not *so* bad... Sooner or later, maybe I'll open it up and see if the BIOS-battery has died... Alright, where we at...?
Oh yeah, so, I finally booted Debian Jessie's (MATE! Not GNOME!) installer from the live-image (but wasn't able to get it to live-boot), and did the full installation... and upon the first boot was met with two messages.
A) Some sort of interrupt that "nobody cares" about.
B) fsck did its check, and it stopped there.
Dead-frozen... Even CTRL-ALT-DEL did nothing.
So, some searching (and, if you know me, then you know my search-fu has failed me for quite some time), and eventually came up with:
- At the GRUB menu type 'e' to edit the boot-parameters...
- Then remove 'quiet' from whatever line says it...
- Then boot, using those new parameters.
OK!
Now, some messages that might be useful...
- Something about ACPI trying to allocate memory-space that overlaps with other memory-space...
- Something about the Sony Vaio's Jogdial
That last one is where everything came to a halt... But I didn't know what to do about it, and did some more searching and somehow came to the conclusion:
- At the GRUB menu type 'e' to edit the boot-parameters...
- Then remove 'quiet' from whatever line says it...
- Then add 'acpi=off' to that same line
- Then boot, using those new parameters.
And, lo-and-behold, the system booted.
Not happily, mind-you...
- Wifi requires a firmware binary that Debian doesn't provide... (so, no wifi)
- USB doesn't seem to work...
- That 'e' procedure has to happen *every* time you boot. (See the side-note just after this list.)
- The screen-resolution defaults to something *really low* and can't be changed.
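(Side-note: the usual Debian way to make a kernel-arg permanent, rather than typing 'e' every boot, is to bake it into /etc/default/grub. I ended up not needing it once the module-blacklisting, below, panned out, but for reference:
In /etc/default/grub:
GRUB_CMDLINE_LINUX_DEFAULT="acpi=off"
then:
$ sudo update-grub
)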
BUT: I did log-in to MATE-DESKTOP... So that's a start. Then...
OK, I need network-access, right...? So, get that firmware... no biggie, Debian's website provides it (it's just 'non-free'), and even explains the process, but, again, USB isn't working...
OK... This is a weird one... For some reason it made sense to me that maybe ACPI was responsible for configuring the USB ports, or something... So what could be farther from the BIOS's ACPI configuration than... another USB controller?
Yeahp, so I plugged in my PCMCIA USB adapter... and... that blasted thing was not only detected, but also detected my USB 'thumb-drive'. K-den. Guess ACPI doesn't fsck with cardbus... right-on.
OK, so now I've got USB and I loaded that wifi's firmware... and the wifi wasn't strong enough to connect to my (very remote) access-point... So, I started loading shizzle onto the thumb-drive from my desktop, until I realized, wait a minute, I can't explain it, but my friggin' "phone" has a much more sensitive wifi receiver/transmitter than this ol' laptop... So plug that blasted thing into my cardbus->USB card, and tether... and alright, now it's like I've got wifi.
OK... NOW... I can do what needs to be done without having to resort to... not reclining.
Alright... Where we at...?
'acpi=off' == boot -> X11 -> Mate Desktop -> Internizzle, etcetterizzle.
But, wait... why does 'acpi=off' do its thing...? I don't even remember, now... I thought there was a problem with the "Sony Jogdial" or something, right...?
OK, well, I guess... I dunno. Time to friggin' recompile the kernel.
I don't know *why* exactly, but maybe something I'll see in 'menuconfig' will remind me of something I saw and will stick out like a sore thumb... right?
Well, not quite, but pretty-durn-close.
(Oh, and in-so-doing, things'll get optimized quite a bit... who needs 386 or Cyrix6x86 support when you're running a Pentium++? etc....)
OK, so the Sony-Jogdial message is the last message that occurs before the entire system freezes (when "acpi=off" is *not* added)... so... with "acpi=off"...
- 'dmesg'
- CTRL-SHIFT-F
- 'Jogdial'
- Nada.
K-Den... Apparently "acpi=off" somehow disables the jogdial driver...
Some searching, some finding... A bit confusing... Looking for ways to disable that jogdial... Lots of things inbetween, including 'make menuconfig' for a new kernel (which I did NOT end up building/installing)... And, an observation within that process... In ONE location there's an option to make "sonypi" as a module... Right...? So blacklist "sonypi" and see what happens... and lo-and-behold that dang message *still* shows up... WTF? I blacklisted that module!
Alright, back to kernel configuration 'make menuconfig' (oh, BTW: apparently menuconfig requires 'ncurses-devel' which you can't apt-get... instead apt-get 'libncurses5-dev' if I recall correctly)...
Alright... still going through stuff... What's This?! "sony-laptop" is *way* outside the "branch" that "sonypi" was located in... BUT "sony-laptop" has a sub-option to *include* 'sonypi' within it... So, we have *TWO* kernel-modules loading sonypi... one directly, 'sonypi.ko', and one indirectly, 'sony-laptop.ko'.
So, suffice to say, you've gotta blacklist *BOTH* 'sonypi' AND 'sony-laptop' and you might luck-out like me and have a running system without actually having to recompile the kernel.
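Concretely, that's a two-liner in a modprobe config-file (the filename's arbitrary); and since modules can get pulled-in from the initramfs, regenerate that, too:
In /etc/modprobe.d/blacklist-sony.conf:
blacklist sonypi
blacklist sony-laptop
then:
$ sudo update-initramfs -u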
So, a little more detail... Apparently 'sonypi' has a document at: <kernel source>/Documentation/laptops/sonypi.txt... and in that document, if you're looking at one of the newer versions (than the one I found online, originally) it says something along the lines of: some Sony Vaios use the jog-dial pins for *other* purposes... Apparently mine is one of 'em. I tried using 'sonypi.mask=0' but that was a nogo... (maybe sony-laptop.mask=0?). There may be more to this, in the future, but as it stands, just glad to have a running system that doesn't require 'e' at the grub menu every boot, doesn't limit the resolution to something awful, and a few other things...
This decade+ old system is gonna revolutionize my recent-computing-experience... I'mma kick up my feet, stretch my arm over the cat... who knows where this could go!
(but, seriously... 1280x800 is a *stupid* resolution for such a *huge* screen... I had *significantly* more pixels on my 12in PBG4 after my hack... https://sites.google.com/site/geekattempts/home-1/lvds-single-to-dual-converter This thing's *taller* AND *way wider*! I should have nearly double the resolution of my PBG4, after the hack, and instead I have something like 1/2... what were those lunatic-designers thinking?! Without multiple-desktops to switch-between, there's no way you can have more than a webpage open at a time! I mean, sure, if you've got a 10in screen it makes sense, but why make such a huge screen with such a low resolution...?! Mind=Boggled.)
Hey, if someone donates $10 to me, I'll give $5 of it to the "Plop" dude! Click that "sites.google.com" link, above!