-
Sensor Smoothing
08/26/2018 at 19:49
Let's talk about how the program works to gather the heading information for later use to generate colors.
First off, the hard part was already taken care of by the kind people at Adafruit. The compass I used was their LSM303 breakout board. This device has a magnetometer to find heading, plus an accelerometer to find out which way is down (or whatever else you want to use them for). This is important because it allows us (really the Adafruit library I downloaded) to compensate for tilting of the device as it figures out where it is pointed.
By the way, there is no shame in using pre-made libraries. They are great for prototyping, and when you find yourself limited by them, you can then go program your own. There's usually no need to optimize to that level unless you are building something for production in the millions, where moving down to a thirty-cent cheaper microcontroller might save you big bucks.
OK, so say we have read through the documentation and set up our Arduino to generate compass heading data from the magnetometer. That data can be pretty jittery, so we want to smooth it out. One relatively easy (if a little processor-intensive) way of doing that is with something called an infinite impulse response, or IIR, filter. This particular one is an exponential smoothing filter. In short, with this kind of filter, your new "answer" is a blend of your last answer and your new input data, combined with some sort of formula. You can imagine that, with each new data point, you still have some factor left over from old data. That's more or less the "infinite" in IIR.
Here's how it looks in the code:
    // Load up new heading components with smoothed values.
    newHeadingX = ALPHA*event.magnetic.x + (1-ALPHA)*oldHeadingX;
    newHeadingY = -ALPHA*event.magnetic.y + (1-ALPHA)*oldHeadingY;

    // Store the new headings under the old headings for future smoothing.
    oldHeadingX = newHeadingX;
    oldHeadingY = newHeadingY;
You can see there that I'm generating X and Y components of the compass heading to convert to an angle later. I smooth each component as it comes in with the formula seen in the second and third lines. ALPHA is a number between zero and one, defined elsewhere, that serves as our filter constant (a larger number here means less filtering).
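For the curious, here's roughly what that convert-to-an-angle step looks like later (a sketch using the standard atan2() function; the exact sign conventions depend on how the board is mounted):

    // Convert the smoothed components into a heading in degrees.
    // atan2() sorts out the quadrants for us and returns radians.
    float heading = atan2(newHeadingY, newHeadingX) * 180.0 / PI;

    // Wrap negative results around into the 0-360 degree range.
    if (heading < 0) {
      heading += 360.0;
    }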
Since ALPHA is a decimal, all these numbers are floats. We could do some integer math here -- basically like considering ALPHA as a percentage, or like talking about cents instead of fractions of a dollar -- but I'm not doing much else with the Arduino in this project, so floating point math is just fine for now.
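If I ever did want to go that route, a fixed-point version might look something like this (just a sketch; the names and the choice of 256 as the scale factor are mine):

    // ALPHA expressed as a fraction of 256, so 0.1 becomes about 26.
    const int16_t ALPHA_FIXED = 26;

    // Exponential smoothing with integer math only. The 32-bit
    // intermediate avoids overflow; dividing by 256 scales back down
    // (the compiler reduces that to a cheap shift).
    int16_t smoothFixed(int16_t newReading, int16_t oldValue) {
      int32_t blended = (int32_t)ALPHA_FIXED * newReading
                      + (int32_t)(256 - ALPHA_FIXED) * oldValue;
      return (int16_t)(blended / 256);
    }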
That said, I hope to refactor this project to use the FastLED library to generate nicer colors, and FastLED uses the byte as its main data type, so I could likely write all or part of the program using integer math. I'll have to think on that one...
-
"I mayd a jiff," or "On photographing RGB LEDs"
08/26/2018 at 19:46
I made a GIF for fun:
To make this not a complete bold-faced play to get my four log entries to stay in the contest, I'll mention how I did it. There's actually a little bit to it, even though it didn't come out all that great with those wild reflections and blotchy colors.
Knowing that I might want to make an animated GIF some day, I took a bunch of pictures of the project, each one after rotating it a little, stop-motion movie style. Of course you need to keep the camera steady, so I put it on a tripod. But here's the fun part:
Put your camera in manual mode
If you leave a normal camera on full automatic mode, it will adjust the exposure based on how much light it thinks is hitting the sensor. It can be wrong, or your ambient conditions can change, and it might expose the entire shot differently from one frame to the next, resulting in noticeably dim or bright frames. You can see a similar effect in the image above, but that is instead an artifact of the GIF encoding. If you have a fancy app or a camera with full manual control, set it to a good film speed, aperture, and shutter speed and don't change it. This will ensure that everything looks about the same from frame to frame, especially the background, even if your camera's sensor is confused by the varying light coming off of the LEDs.
Underexpose most of the shot
Here's another thing you need to keep in mind: LEDs, even when diffused through the head of a vinyl toy, are very bright when viewed directly. Combine this with color, and you can get some weird effects not unlike the patchiness you see in the GIF above. Again, this particular case is because of the GIF encoding, but a similar effect happens if you let your camera's image sensor get overexposed. So when capturing the source images, I dialed the exposure down pretty far. That's why it looks dark in the background. It was in fact fairly bright in the room, but I was trying to limit the amount by which the direct RGB LED light would overexpose in the toy's head.
You can also get fancy and use a flash to brighten the scene so it balances better against the LEDs. But that's probably beyond the scope of this little article. Plus I didn't use it here, so I don't have anything to show for it.
Shut off white balance
Now, this last part is probably the most important thing to keep in mind when photographing a subject like this. Your camera, even in manual exposure mode, probably does automatic white balance. Here's what that means:
When we look at a real, live scene with our eyes, we process it in such a way that colors always look more or less the same. That is, your blue shirt looks to be about the same blue whether you are in your house with warm white incandescent bulbs or in full sun in the middle of the day. You may be aware of slight changes in color cast, but we know that the shirt didn't change color, so we perceive it as the same color.
However, the light from the mid-day sun is definitely different than the light from a light bulb. And it's not just brighter, but its color content is different. So the actual photons coming off your shirt have a different color content depending on the incident light.
When we look at a photo, however, we become so much more aware of the color cast imparted by the incident light. In order to make that photo look more like how we would see the scene in real life, most cameras have what is called white balance. They "observe" the scene and try to figure out what the actual colors are behind the reflected colors we see. In the most basic white balance algorithms, the system just slides the overall color cast one way or another until the average of the whole image is neutral. That works in a lot of cases, but when you are taking a picture of an RGB LED, the system may try to average out the scene even though you have a single spot of color and the rest is neutral. That can cause your LED's color to become very dull and the rest of the image a weird shade opposite your LED's color.
This is why your sunset shots often need to be enhanced. The camera sees orange and thinks it should be seeing neutral gray. So it forces everything to be, on average, neutral gray.
I should note that cameras and phones are getting too smart for this these days and can recognize a sunset and set the white balance appropriately. But you get my gist.
Gifsicle
Once you've captured a bunch of images, convert them all to GIFs. I just used a random drag-and-drop batch image converter program, but I bet ImageMagick has an easy way to do it. Once you have a folder full of GIFs, a command line program called Gifsicle is handy to put them together into an animation. I forced mine to use a limited 256 color palette to ensure compatibility, leading to the splotchiness you saw in the animation at the top. I didn't really test the image on other devices, though, so it may have been fine to let it use the original palette.
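For reference, the Gifsicle invocation ends up being something like this (the file names here are hypothetical; --delay is in hundredths of a second per frame, --loop makes it repeat forever, and --colors 256 is the palette limit I mentioned):

    gifsicle --delay=10 --loop --colors 256 frame-*.gif > animation.gif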
-
Details Updated - What's Next?
08/23/2018 at 17:14
I finally scrounged up time to update the project details with just about everything I have written about it in my notes and elsewhere. So you can read that if you have more than a few minutes to waste. It's pretty long, with a lot of poetic waxing.
What this also means is that I can, hopefully, get started on implementing improved color rendition before the contest is over. Just a short few days, but maybe I can do it. I'll keep you all informed as I go.
First, Hackaday ran an article about two months ago on using HSV for nice fades. That's basically what I'm already doing here, since I started in HSV: the whole idea is that we're treating both color and heading as angles around a circle.
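To illustrate the angles idea, here's roughly what the mapping could look like with FastLED (a sketch; it assumes a heading variable in degrees and an already-initialized leds array, and FastLED's hue wheel runs 0 to 255 rather than 0 to 360):

    // Rescale the 0-360 degree compass heading onto FastLED's
    // 0-255 hue wheel, then show it at full saturation and brightness.
    uint8_t hue = map((long)heading, 0, 360, 0, 255);
    leds[0] = CHSV(hue, 255, 255);
    FastLED.show();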
Now, where it gets more interesting is in this Hackaday article where we learn about gamma. In short, our eyes are more sensitive to changes when the LED is dim than when the LED is bright. That is more or less easy to account for, at least if your microcontroller can do the math (just one bit of math, in fact: a power function). But it also turns out that the function that would account for this peculiarity of human vision needs to be a little different for each of the three colors in our RGB LED. This can be noticeable when we're dealing with colors, especially when we want them to represent something specific, like compass heading.
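As a sketch, that bit of math might look like this (assuming a typical gamma of around 2.2; the per-channel variation would just use a slightly different exponent for each color):

    // Classic gamma correction: normalize to 0-1, raise to the gamma
    // power, then scale back up to the 0-255 range the LED expects.
    uint8_t gammaCorrect(uint8_t value, float gamma) {
      return (uint8_t)(pow(value / 255.0, gamma) * 255.0 + 0.5);
    }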
So, to keep from slowing my eight-bit Arduino to a crawl (I already have it -- unnecessarily -- running floating point math to smooth things out), I might just do this as a lookup table. It seems like a cop-out, but as far as I can tell it's the way it's done in the real world because it's efficient and effective.
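Here's a minimal sketch of the lookup-table approach, with the table filled once at startup so the steady-state work is just an array index (a real version might precompute the values offline and park them in flash with PROGMEM):

    // 256-entry gamma table, built once so the running code never
    // has to call pow() again -- correcting a value is just a lookup.
    uint8_t gammaTable[256];

    void buildGammaTable(float gamma) {
      for (int i = 0; i < 256; i++) {
        gammaTable[i] = (uint8_t)(pow(i / 255.0, gamma) * 255.0 + 0.5);
      }
    }

After calling buildGammaTable(2.2) in setup(), a corrected brightness is simply gammaTable[rawValue].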
-
First Post
08/16/2018 at 02:03
I've been putting up a little bit of stuff here to get the project page filled out, so now I guess it's time to talk about it.
Some time ago I was on a kick tossing around ideas for using color to communicate information. At around the same time, I got a few Kidrobot Munnys and was trying to figure out what to do with them. (The one in this project is a Raffy, for all you pedants out there.) Also at the same time, more or less, I read an article or two about how pigeons might see magnetic fields. And last but not least, I remember thinking about white balance on cameras and the way our minds seem to use color information to tell what time it is (or really where the sun is).
So, I managed to somehow squeeze all that into one project.
And that's the history, pretty much. Now that we're up to speed, my next posts will be about this specific project, what we're doing now, and where we'll go next.