Having bought a WS2811 RGB LED string, and having some experience with WS2811 LED strips, I'd like to experiment with using LED strings in Christmas trees or other "disorderly" arrangements and automatically detecting the LEDs' positions.
The use case is as follows:
User drapes a string of some 50-200 LEDs in a Christmas tree
User takes a video of the LEDs flashing in a systematic pattern
System processes the video to register each LED's x,y (and later maybe z) position
System uses the LEDs' position information to send geometrically correct patterns to the LEDs
The system will use the following technologies:
WS2811 LED strings
ATmega328 microcontroller (may be replaced with an ESP8266 later)
OpenCV for feature (LED) detection.
This video demonstrates the effect that I was hoping for: moving recognizable shapes with incredibly low resolution (50 "pixels" in all). Notice that if you pause the video above, the still image shows nothing resembling a circle. The resolution is simply too low to represent such a shape. Yet, as soon as the "shadow" starts moving, its shape is definitely circular and you might even spot that the shape is somewhat elliptical.
There is a lot more going on than meets the eye. First of all, the positions of the randomly distributed LEDs have been automatically registered, of course, because that is what this project is all about. Secondly, the edges of the shadow circle need to be fuzzy. The effect is as if the LEDs are partially occluded when the edge of the circle moves over them. If this isn't done, the movement is a lot more pixelated and the shape isn't recognizable anymore. Thirdly, this works a lot better for a dark circle on a light background than the other way around. Apparently, the human brain is better at discerning shapes that occlude light sources than at identifying shapes "made of light". Finally, there's a minimum speed at which the object can move and still have a recognizable shape. If it moves too slowly, it's not recognizable as a circle anymore.
All of this is teaching me a lot about the human visual system.
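To make the fuzzy-edge trick concrete, here is a minimal sketch of the idea, not the actual demo code (the Led struct, the coordinate units and the feather width are made up for illustration). Each LED's brightness follows from its distance to the edge of the dark circle:

```cpp
#include <algorithm>
#include <cmath>

struct Led { float x, y; };  // registered position of one LED (illustrative)

// Brightness of one LED for a dark circle at (cx, cy) with the given radius.
// LEDs well inside the circle are dark (0.0), LEDs well outside are fully
// lit (1.0), and LEDs within 'feather' of the edge get an in-between value,
// as if the moving edge partially occludes them.
float shadowBrightness(const Led& led, float cx, float cy,
                       float radius, float feather)
{
    float dist = std::hypot(led.x - cx, led.y - cy);
    float t = (dist - radius) / feather + 0.5f;  // 0 inside .. 1 outside
    return std::clamp(t, 0.0f, 1.0f);
}
```

With a hard threshold on the distance instead of the in-between values, the moving shape falls apart into flickering pixels, exactly as described above.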
In order to get the result above, the original plan had to be changed in two ways:
Although the LEDs are in a random pattern, they weren't "draped". Instead, I just drilled holes in a plywood board. This makes recognizing the LEDs an awful lot easier, since now they're nicely spaced and there are no strange reflections.
In the quest for working OpenCV registration of LED positions, I'd already switched to a much simpler pattern, where the LEDs just light up one by one. This had mixed results with the draped LEDs, but it worked from the start with the nicely spaced and separated LEDs-onna-board.
This is the pattern that was used to register LED positions:
The code that creates this pattern using a plain AVR is in the function simple_registration(). This pattern is then analyzed using the code in LedMapping.cpp. Finally, the output of LedMapping.cpp is used to hard-code the LED positions in a demonstration program.
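The pattern itself is trivial: light each LED by itself for a moment, so that every video frame contains at most one bright blob. A sketch of the idea (sendPixels() and delayMs() are hypothetical stand-ins here; the real timing-critical WS2811 driver is in the repository):

```cpp
#include <stdint.h>
#include <string.h>

#define NUM_LEDS 50

// Hypothetical helpers: sendPixels() clocks NUM_LEDS RGB triplets out to
// the WS2811 string with the required one-wire timing.
void sendPixels(const uint8_t (*pixels)[3], uint16_t count);
void delayMs(uint16_t ms);

// One-by-one registration: every LED lights up alone for a fixed time.
void simple_registration(void)
{
    static uint8_t pixels[NUM_LEDS][3];

    for (uint16_t i = 0; i < NUM_LEDS; i++) {
        memset(pixels, 0, sizeof(pixels));                 // all LEDs off...
        pixels[i][0] = pixels[i][1] = pixels[i][2] = 255;  // ...except LED i
        sendPixels(pixels, NUM_LEDS);
        delayMs(200);             // hold long enough to span several frames
    }
}
```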
The board as it is now will be placed under the awning of the front door, for Christmas only, and I'm not so much interested in creating floating ghost shadows there (hmm, definitely an option for Halloween). I think I'll stick with the simple, location-independent random starry-sky pattern for the holidays:
Given that there is quite some hardware work to do (painting, getting the 5V power to the board outside), limiting the scope at this point should increase the chances of the project actually being finished before Christmas...
OK. Just so you know, this project isn't dead. It's just moving at its own pace, that's all...
There are some problems though. As I wrote in a previous log, the OpenCV algorithm used to detect the LEDs in an image has many parameters, and no set of parameters seemed right in all circumstances. A logical next step was to try an optimization algorithm such as Nelder-Mead to find the right arguments.
Nelder-Mead is a relatively simple method to find the maximum (or minimum) value of a function. It does not require the derivative of the function, which makes it applicable in a large number of cases. In essence, for a function that takes two inputs, applying Nelder-Mead consists of walking a triangle downslope across the solution landscape. The triangle may grow or shrink and it uses some tricks to jump out of local puddles. For more than 2 parameters, the triangle becomes a simplex, a figure that can be described by n+1 points in an n-dimensional space.
Any optimization needs a cost function with n inputs and one output: the cost. For a cost function, I chose the absolute difference between the actual number of LEDs and the detected number. I experimented with different inputs, zooming in on the following three parameters (see the sketch after this list):
minimum distance between detected LEDs
minimum size of detected LEDs (area of detected feature)
maximum size of detected LEDs
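Conveniently, OpenCV itself ships a Nelder-Mead implementation, cv::DownhillSolver, so the cost function only has to be wrapped in a cv::MinProblemSolver::Function. A sketch of the whole loop (the file name, LED count, start point and step sizes are placeholders, not my real values):

```cpp
#include <opencv2/core.hpp>
#include <opencv2/features2d.hpp>
#include <opencv2/imgcodecs.hpp>
#include <cstdlib>
#include <iostream>
#include <vector>

// Cost: |number of blobs detected - number of LEDs actually in the image|.
// The three inputs are the SimpleBlobDetector parameters being optimized.
class LedCountCost : public cv::MinProblemSolver::Function
{
public:
    LedCountCost(const cv::Mat& image, int numLeds)
        : image_(image), numLeds_(numLeds) {}

    int getDims() const override { return 3; }

    double calc(const double* x) const override
    {
        cv::SimpleBlobDetector::Params p;
        p.filterByColor = true;
        p.blobColor = 255;                   // LEDs are bright blobs
        p.minDistBetweenBlobs = float(x[0]);
        p.filterByArea = true;
        p.minArea = float(x[1]);             // note: nothing stops the solver
        p.maxArea = float(x[2]);             // from proposing negative values

        std::vector<cv::KeyPoint> blobs;
        cv::SimpleBlobDetector::create(p)->detect(image_, blobs);
        return std::abs(int(blobs.size()) - numLeds_);
    }

private:
    cv::Mat image_;
    int numLeds_;
};

int main()
{
    cv::Mat frame = cv::imread("leds.png", cv::IMREAD_GRAYSCALE);  // placeholder

    cv::Ptr<cv::DownhillSolver> solver = cv::DownhillSolver::create();
    solver->setFunction(cv::makePtr<LedCountCost>(frame, 50));
    cv::Mat step = (cv::Mat_<double>(1, 3) << 5.0, 2.0, 50.0);
    solver->setInitStep(step);

    cv::Mat params = (cv::Mat_<double>(1, 3) << 10.0, 4.0, 100.0);  // start guess
    double cost = solver->minimize(params);  // params now holds the best set
    std::cout << "cost " << cost << ", params " << params << std::endl;
}
```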
I just needed to tweak the algorithm to only search in positive values, which is easy: just square the proposed inputs before using them in OpenCV. And then: let it rip!
Not.
With the settings as chosen above, the NM algorithm could not find any combination of parameters that detected any LEDs; it would just quickly shrink the "triangle" onto a set of parameters that resulted in zero LEDs found.
The problem was that the parameter space consists largely of big areas where the algorithm can't find any LEDs at all. If all the points on the NM simplex have the same cost (in this case the maximum cost of finding zero LEDs), the algorithm will just shrink the simplex until it is so small that the algorithm concludes that there is no better solution to be found. So we have a large "flat" desert area where nothing is found, with a small valley somewhere, but the simplex just never walks close to the valley.
There are several ways around this. The first is to go hunting for the valley. This can be done, for instance, by sampling random points in the parameter space, or a grid of points, until a point is found with a lower cost than the other points. As soon as we have a simplex with at least one lower point, it will start to walk into that valley.
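A sketch of that hunting step, reusing the LedCountCost class from the sketch above (the sampling ranges are arbitrary; frame and numLeds as before):

```cpp
// Hunt for the valley: sample random parameter sets until one of them
// detects at least one LED (cost below the zero-detection maximum), and
// use that point as Nelder-Mead's starting guess.
LedCountCost cost(frame, numLeds);
cv::Mat seed(1, 3, CV_64F);
do {
    seed.at<double>(0) = cv::theRNG().uniform(1.0, 50.0);    // min distance
    seed.at<double>(1) = cv::theRNG().uniform(1.0, 20.0);    // min area
    seed.at<double>(2) = cv::theRNG().uniform(20.0, 500.0);  // max area
} while (cost.calc(seed.ptr<double>()) >= numLeds);
```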
A second option is to artificially create random hills in the "flat" part of the space. This can be done by adding a random value to the cost whenever the cost is at its maximum. As a result, the simplex will go wandering around in the parameter space and it might just stumble upon the real valley. This is the option that I chose first, because it was easy to implement. It actually works, if the NM algorithm gets enough time (iterations) to walk around, but it means that my desktop PC needs on the order of seconds to find a suitable parameter set, which is long, considering I want the final algorithm to run on a phone.
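In terms of the earlier sketch, the tweak replaces calc() with something like this (the blob counting is factored out into a hypothetical countCost() helper, and the noise amplitude of one cost unit is arbitrary):

```cpp
double calc(const double* x) const override
{
    // countCost(x) is the plain |detected - actual| cost from before.
    double cost = countCost(x);

    // On the flat "desert" (zero LEDs found, i.e. cost at its maximum),
    // add a random hill so the simplex keeps wandering instead of
    // shrinking to a halt on a plateau of identical costs.
    if (cost >= numLeds_)
        cost += cv::theRNG().uniform(0.0, 1.0);
    return cost;
}
```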
A third option is to find an initial set of parameters which may be close to a best solution in most cases and use those as a starting point for NM. This requires some experimenting and tweaking and that is where I'm currently at.
Now that the LEDs are blinking, it's time to start working on the recognition software (ongoing experiments on GitHub). The plan is to use OpenCV, and more specifically OpenCV's SimpleBlobDetector, to recognize the LEDs.
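As a first experiment, pulling candidate LED positions out of a single frame takes only a few lines. A sketch (the file name is a placeholder, and the detector parameters are left at their defaults except for looking for bright rather than dark blobs):

```cpp
#include <opencv2/features2d.hpp>
#include <opencv2/imgcodecs.hpp>
#include <cstdio>
#include <vector>

int main()
{
    cv::Mat frame = cv::imread("frame.png", cv::IMREAD_GRAYSCALE);  // placeholder

    cv::SimpleBlobDetector::Params p;
    p.filterByColor = true;
    p.blobColor = 255;  // bright blobs on a dark background

    std::vector<cv::KeyPoint> blobs;
    cv::SimpleBlobDetector::create(p)->detect(frame, blobs);

    // Each keypoint is a sub-pixel (x, y) centre of one candidate LED.
    for (const cv::KeyPoint& kp : blobs)
        std::printf("LED candidate at (%.1f, %.1f), size %.1f\n",
                    kp.pt.x, kp.pt.y, kp.size);
}
```

There are some constraints on the blinking pattern itself: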
In a video, it should be easy to find the start of the pattern and the moments in time when the LED patterns change.
Also, since there might be some slight movement in the video, the sequence should be as short as possible.
For the same reason, it would be nice if the LEDs are on as much as possible, in order to track them across frames.
Below, you can see version 1 (well, version 3 actually) of the pattern. It's a simple binary sequence where 0 is encoded as red and 1 is encoded as blue (or the other way around; it doesn't really matter).
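A sketch of how such a sequence can be generated on the microcontroller side (sendPixels() and delayMs() are the same hypothetical stand-ins as in the registration sketch above, and the channel order depends on the string):

```cpp
#include <stdint.h>

#define NUM_LEDS 50
#define NUM_BITS 6   // 2^6 = 64 codes, enough for 50 LEDs

void sendPixels(const uint8_t (*pixels)[3], uint16_t count);  // hypothetical
void delayMs(uint16_t ms);                                    // hypothetical

// Binary registration: in frame b, LED i shows bit b of its own index,
// encoded as red (0) or blue (1). After NUM_BITS frames every LED has
// shown its full index, and every LED is lit in every frame.
void binary_registration(void)
{
    static uint8_t pixels[NUM_LEDS][3];  // assuming R,G,B channel order

    for (uint8_t bit = 0; bit < NUM_BITS; bit++) {
        for (uint16_t i = 0; i < NUM_LEDS; i++) {
            uint8_t isOne = (uint8_t)((i >> bit) & 1);
            pixels[i][0] = isOne ? 0 : 255;  // red for a 0 bit
            pixels[i][1] = 0;
            pixels[i][2] = isOne ? 255 : 0;  // blue for a 1 bit
        }
        sendPixels(pixels, NUM_LEDS);
        delayMs(500);                        // hold each code frame
    }
}
```

Reading the color of each blob across the NUM_BITS frames then yields each LED's index directly.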
No LED identification without LEDs, so the first step in this project has been to order an LED string. After a ~20-day wait (shipping to Europe seems to take the longest), they arrived. Time to start coding.