I wasn't very confident that the RPI 1 could do vision processing, but with a lot of head bashing, plenty of cinder block pictures, and lots of training, I made it work... it was just slow:
The demo above uses OpenCV's Haar cascade classifier to detect cinder blocks (one of the obstacles in the game). It averaged 6-8 fps depending on how many cinder blocks it was able to detect. A week after I got this demo working, the RPI 2 was announced... and quickly sold out. Thankfully my good friend Upu had a spare and donated one to our team from halfway around the world. After hours of compiling OpenCV and untangling the mess of dependencies again, I was able to run the same code, this time compiled for ARMv7. The result was stunning: without any objects in the frame I was averaging 30 fps, and once I introduced a cinder block or two it dropped to about 20-25 fps (note this is all single threaded; the RPI 2 hovered at 25% CPU use).
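For anyone curious what that detection loop roughly looks like, here's a minimal sketch. The cascade filename and the camera index are placeholders, not the project's actual files, and the detectMultiScale parameters are just typical starting values:

```python
# Minimal sketch of a Haar cascade detection loop, assuming a trained
# cascade saved as "cinder_block_cascade.xml" (placeholder name) and the
# camera available at index 0.
import cv2

cascade = cv2.CascadeClassifier("cinder_block_cascade.xml")
cap = cv2.VideoCapture(0)

while True:
    ok, frame = cap.read()
    if not ok:
        break

    # Haar cascades run on grayscale images; detection cost grows with
    # the number of candidate windows, which is why more blocks = lower fps.
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    blocks = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

    # Draw a box around each detection.
    for (x, y, w, h) in blocks:
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 0, 255), 2)

    cv2.imshow("cinder blocks", frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()
```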
There's still a lot of work to do: optimization, better training data (100 positive and negative images isn't enough). At the same time, I'm beginning to realize that Haar cascades may not be the best way to detect objects in the field. The game field is going to be on grass, so it may be a better idea to just look for blobs that aren't green...
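A rough sketch of that "anything that isn't green" idea: mask out the grass in HSV color space and treat whatever remains as candidate objects. The hue range and minimum blob area below are guesses that would need tuning against the real field and lighting (written against a recent OpenCV, where findContours returns two values):

```python
# Sketch of color-based blob detection: suppress grass, keep everything else.
import cv2
import numpy as np

def find_non_green_blobs(frame, min_area=500):
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)

    # Approximate "grass green" band in HSV; anything outside it is a candidate.
    grass = cv2.inRange(hsv, np.array([35, 40, 40]), np.array([85, 255, 255]))
    not_grass = cv2.bitwise_not(grass)

    # Morphological opening removes small speckle before contour detection.
    kernel = np.ones((5, 5), np.uint8)
    not_grass = cv2.morphologyEx(not_grass, cv2.MORPH_OPEN, kernel)

    # Keep only blobs big enough to plausibly be an obstacle.
    contours, _ = cv2.findContours(not_grass, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    return [cv2.boundingRect(c)
            for c in contours if cv2.contourArea(c) >= min_area]
```

Compared to a Haar cascade, this is cheap enough to run on every frame even on the RPI 1, at the cost of being fooled by anything non-green that isn't a cinder block.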
More updates to come...