-
Well, we have v2 now...
11/09/2020 at 11:33
Thanks to your support our previous board, the StereoPi v1, was brought to life. Today we are thrilled to introduce an updated version!
Thanks to the newest Compute Module 4, we have a bunch of new features to introduce.
Here is our StereoPi v2 project page on Hackaday.
-
How to Synchronize Multiple StereoPi?
10/12/2020 at 20:11
👉 WHY?
You may ask, "Why would I need to use multiple StereoPi boards at once?"
Well, we got a set of requests from customers who needed to take many images at once. One of the most popular use cases is the creation of animated "3D" GIFs. These are built from a set of images (usually 4). In our article we describe the whole process, from capturing the files to creating MP4 video files, so you can post the result as a video or run it through a video-to-GIF converter to get an animated image.

The second popular use case is 3D reconstruction using a set of cameras, for example through Multi-View 3D Digital Image Correlation (Multi-DIC), a technique recently published by Dr Dana Solav and Dr Kevin Moerman.
👉 HOW?
If you are interested in this synchronization approach, you can find our detailed how-to guide here on our blog.
-
StereoPi used as a USB device
03/24/2020 at 10:58
The Raspberry Pi Compute Module powering the StereoPi can operate as a USB device thanks to the USB OTG hardware within the processor. This means that we can connect it to a PC and make it appear as a USB stick, a serial interface, or, as shown in this article, an external network interface. It is very powerful, but it comes with a few limitations due to the architecture of the Raspberry Pi ecosystem that the StereoPi follows. Let's see how it works.
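The full guide walks through the details; as a very rough sketch, the usual way to enable the Ethernet-gadget mode on a stock Raspbian image looks like this (standard Raspbian paths and overlays; your image may differ):

# In /boot/config.txt: switch the USB controller into device (OTG) mode
dtoverlay=dwc2

# In /boot/cmdline.txt: append to the single existing line, after rootwait
modules-load=dwc2,g_ether

After a reboot, the board should show up on the host as a USB network adapter.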
You can read the full guide here:
-
OpenCV: comparing the speed of C++ and Python code
02/15/2020 at 12:13
You often hear that Python is too slow for computer vision, especially on single-board computers like the Raspberry Pi. In this article, we decided to measure the actual speed difference between C++ and Python code and find the performance 'bottleneck'.
You can find the full article on our blog: https://stereopi.com/blog/opencv-comparing-speed-c-and-python-code-raspberry-pi-stereo-vision
-
AnyCase Kit
12/19/2019 at 16:03
We've developed this accessory kit for quick case prototyping. You can assemble your setup in 10 minutes!
5 mounting plates for your cameras and your StereoPi
You can build a stereoscopic setup with a 25 mm, 65 mm, 120 mm or 200 mm stereo base. A 360-degree photo/video setup is also possible!
M2 nylon spacers kit
With this M2 kit and a screwdriver you can assemble your setup in just 10 minutes! FYI, both the StereoPi and the Raspberry Pi cameras have M2 mounting holes!
3 types of Raspberry Pi camera ribbons included
Sometimes you need to bend a camera ribbon for your custom setup. Don't let the fear of damaging a ribbon hold you back. We've added 12 camera ribbons:
- 4 pcs of 5 cm camera ribbons
- 4 pcs of 10 cm camera ribbons
- 4 pcs of 15 cm camera ribbons
With the AnyCase kit your StereoPi quick start will be really quick!
-
DIY VR headset as a StereoPi Companion
11/12/2019 at 16:55
One of our friends does psychological research in the field of human perception. He was impressed by our experiment with the Oculus Go and wanted to use this solution in his research. But he had one very strict requirement: the video latency should not exceed 10 ms.
Here is our TL;DR step-by-step guide for creating this solution.
-
A robot on StereoPi, part 1: fisheye cameras
08/04/2019 at 20:03
The goal of this series of articles is to create a compact indoor robot that can navigate using stereo vision. As the base, we'll use a small tracked Dagu RP5 chassis that we have on hand. Here's how it looks next to the StereoPi.
A detailed TL;DR on fisheye camera calibration can be found here on our blog.
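If you want to experiment before reading the full write-up, here is a minimal standalone sketch of single-camera fisheye calibration with OpenCV's cv2.fisheye module (the chessboard dimensions and the file pattern are placeholders, not the values from our scripts):

import glob
import cv2
import numpy as np

CHESSBOARD = (9, 6)  # inner corners of the printed chessboard (placeholder)
# cv2.fisheye expects object points shaped (1, N, 3)
objp = np.zeros((1, CHESSBOARD[0] * CHESSBOARD[1], 3), np.float32)
objp[0, :, :2] = np.mgrid[0:CHESSBOARD[0], 0:CHESSBOARD[1]].T.reshape(-1, 2)

objpoints, imgpoints = [], []
for fname in glob.glob('calib/*.png'):  # hypothetical folder with one camera's photos
    gray = cv2.imread(fname, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, CHESSBOARD)
    if found:
        objpoints.append(objp)
        imgpoints.append(corners)

K, D = np.zeros((3, 3)), np.zeros((4, 1))
rms, K, D, _, _ = cv2.fisheye.calibrate(
    objpoints, imgpoints, gray.shape[::-1], K, D,
    flags=cv2.fisheye.CALIB_RECOMPUTE_EXTRINSIC + cv2.fisheye.CALIB_FIX_SKEW)
print('RMS error:', rms)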
-
1.44 TFT Raspberry Pi HAT screen test
06/21/2019 at 13:07
I've got one of these tiny funny screens.
Following this manual, with this fix, I got this result: It just works! :-)
-
3 more DIY guides for the StereoPi
06/20/2019 at 13:58
We have 3 more guides now:
1. The Art Of Stereoscopic Photo, part 1 (basics)
2. The Art Of Stereoscopic Photo, part 2 (assembling a camera)
3. Hacking Skybox on Oculus Go for StereoPi live streaming (just a hack)
-
OpenCV and Depth Map on StereoPi tutorial
04/09/2019 at 08:33
Today we're pleased to share with you a series of Python examples for OpenCV development. This code works with either the StereoPi or the Raspberry Pi Development Board, as both support using two cameras simultaneously. Our ready-to-use code (and also a Raspbian image) will help you every step of the way, from the first image capture to the Depth Map built from real-time video.
Introduction
We would like to emphasize that all of these examples are for those new to OpenCV and are not intended for production use. If you are an advanced OpenCV user and have worked with the Raspberry Pi before, you’ll know it’s better to use C/C++ (instead of Python) and to utilize the GPU for better performance. At the end of this article we’ll provide some notes regarding the various bottlenecks we experienced using Python.
Hardware setup
Here is our hardware setup:
We used the StereoPi board with a Raspberry Pi Compute Module 3+ and two Raspberry Pi V1 cameras (based on the OV5647 sensor).
Software used:
- Raspbian Stretch (kernel 4.14.98-v7+)
- Python 3.5.3
- OpenCV 3.4.4 (pre-compiled, ‘pip’ from Python Wheels)
- Picamera 1.13
- StereoVision lib 1.0.3 (https://github.com/erget/StereoVision)
The software installation process is beyond the scope of this article, but we have prepared a Raspbian image with all the software installed. Here is a link to our stereopi-tutorial repository on GitHub.
Notice
All scripts support keystroke processing; press the 'Q' key to stop them. If you use Ctrl+C instead, it may break the Python interaction with the cameras, and you will need to reboot the StereoPi.
Step 1: Image Capture
We use the 1_test.py script for this purpose. Open the console and go to our examples folder:
cd stereopi-tutorial
Console Command:
python 1_test.py
After starting the script, you will see a preview window with the stereoscopic video. Pressing 'Q' will stop the process and save the last captured image. This image will be used in the next scripts for tuning the Depth Map parameters.
This script allows you to check if your hardware is operational and helps you obtain your first stereoscopic picture.
The following video shows how the first script works:
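For reference, here is roughly what the capture core of 1_test.py boils down to (a simplified sketch, not the actual script; the real one also measures FPS and scales the image, and the output file name here is illustrative):

import cv2
import numpy as np
from picamera import PiCamera

W, H = 1280, 480  # one side-by-side stereo frame: two 640x480 images
camera = PiCamera(stereo_mode='side-by-side', resolution=(W, H), framerate=20)
frame = np.zeros((H, W, 4), dtype=np.uint8)  # preallocated BGRA buffer

for _ in camera.capture_continuous(frame, format='bgra', use_video_port=True):
    cv2.imshow('pair', frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        cv2.imwrite('last_pair.png', frame)  # keep the last frame for later steps
        break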
Step 2: Collecting Images for Calibration
In an ideal world, a perfect depth map needs two identical cameras with their optical, vertical and horizontal axes all parallel. In the real world, however, cameras differ and it's impossible to align them perfectly, so a software calibration method is used: you take multiple photos of a known object with both cameras, in our case a printed chessboard, and a special algorithm then analyzes these photos and finds the correction parameters.

This script begins the process by capturing a series of chessboard photos for calibration. Before each photo, the script starts a five (5) second countdown. Five seconds is generally enough time to reposition the chessboard. Make sure it can be seen by both cameras, and keep it stable to avoid "blurred" photos. The default number of photos captured per series is 30.
Console Command:
python 2_chess_cycle.py
The process:
At the end, we have 30 stereoscopic photos saved in the /scenes folder.
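The logic of the capture series is as simple as this sketch (simplified from 2_chess_cycle.py; the file names are illustrative):

import time
from picamera import PiCamera

camera = PiCamera(stereo_mode='side-by-side', resolution=(1280, 480))
TOTAL, COUNTDOWN = 30, 5  # photos per series, seconds to reposition the board

for n in range(1, TOTAL + 1):
    for s in range(COUNTDOWN, 0, -1):
        print('Photo %d of %d in %d...' % (n, TOTAL, s))
        time.sleep(1)
    camera.capture('scenes/scene_%02d.png' % n)  # one side-by-side stereo photo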
Step 3: Image Separation
The third script, 3_pairs_cut.py, separates the captured photos into "left" and "right" images and saves them in the /pairs folder. The separation could be done on the fly, without saving, but this step is helpful for your further experiments: you can save image pairs from different capture series, use your own code to work with these images, or put images from another stereoscopic camera in this folder. The script shows every stereo pair before it is separated (and waits for a key press), which lets you spot bad photos and remove them before the next script.
Console Command:
python 3_pairs_cut.py
Short video:
Our code also includes the images we used for calibration. You can use them as an example before taking your own stereoscopic photos.
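The essence of this step is just slicing each side-by-side frame in half; a minimal sketch (file names are illustrative):

import glob
import cv2

for fname in glob.glob('scenes/scene_*.png'):
    pair = cv2.imread(fname)
    h, w = pair.shape[:2]
    name = fname.split('/')[-1]
    cv2.imwrite('pairs/left_' + name, pair[:, :w // 2])   # left half of the frame
    cv2.imwrite('pairs/right_' + name, pair[:, w // 2:])  # right half of the frame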
Step 4: Calibration
The next script, 4_calibration.py, loads all the pairs saved at the previous step and calculates the correction matrices. It first tries to find a chessboard in each photo; if it can't, it ignores the current pair and moves on, so a few bad photos in your series won't break the script. After all the calculations are done, the program rectifies the last image and shows you the resulting "fixed" images. At this step, you can judge the quality of the calibration. In our case, the calculations took about a minute and a half.
Console Command:
python 4_calibration.py
The calibration script doing its job:
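Under the hood the tutorial relies on the StereoVision library, whose calibration flow looks roughly like this (a sketch; the board geometry, image size and paths are placeholders):

import cv2
from stereovision.calibration import StereoCalibrator
from stereovision.exceptions import ChessboardNotFoundError

ROWS, COLS, SQUARE_CM = 6, 9, 2.5  # inner corners and square size: placeholders
calibrator = StereoCalibrator(ROWS, COLS, SQUARE_CM, (640, 480))

for i in range(1, 31):
    left = cv2.imread('pairs/left_%02d.png' % i)
    right = cv2.imread('pairs/right_%02d.png' % i)
    try:
        calibrator.add_corners((left, right))
    except ChessboardNotFoundError:
        continue  # no chessboard in this pair: skip it instead of failing

calibration = calibrator.calibrate_cameras()
calibration.export('calib_result')  # the rectification matrices end up here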
Step 5: Depth Map Tuning
The next script, 5_dm_tune.py, loads the image saved by the first script and the calibration results from step 4. After that, it presents you with a depth map and an interface for fine-tuning. If you want to use our parameters, just press the "Load settings" button.

Before tuning the parameters, we recommend taking a photo with three objects at different distances from the camera: for example, one close up (30–40 cm), one at an average distance (2–3 meters) and one far away. This will help you find the right settings, where the closest objects are red and the distant ones are dark blue.

Console Command:
python 5_dm_tune.py
Here is how it looks:
Step 6: Real-Time Depth Map Using Video
The last script, 6_dm_video.py, builds a depth map in real time, using the results of the previous steps.
Console Command:
python 6_dm_video.py
The result:
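The core of the loop is simply "rectify with the step 4 results, then run a block matcher"; here is a minimal single-frame sketch (the matcher values are placeholders to be tuned in step 5, and the input file name is illustrative):

import cv2
import numpy as np
from stereovision.calibration import StereoCalibration

calibration = StereoCalibration(input_folder='calib_result')
sbm = cv2.StereoBM_create(numDisparities=64, blockSize=15)  # placeholder tuning

pair = cv2.imread('last_pair.png', cv2.IMREAD_GRAYSCALE)
h, w = pair.shape[:2]
rect_l, rect_r = calibration.rectify((pair[:, :w // 2], pair[:, w // 2:]))
disp = sbm.compute(rect_l, rect_r).astype(np.float32) / 16.0  # StereoBM output is fixed-point
vis = cv2.normalize(disp, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
cv2.imshow('depth map', cv2.applyColorMap(vis, cv2.COLORMAP_JET))
cv2.waitKey(0)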
We hope our scripts will help you in your experiments!
Notes for advanced users:
- Once started, the first script will display the average time between captured frames and, once stopped, it will show the average FPS. This can be used as a simple tool to determine the camera and capture parameters at which Python can capture imagery without dropping frames. Using this script, we’ve found that setting the camera to 1280x480 at 20 FPS provided stable results, with no lost frames.
- You may have noticed that we capture images from the camera at 1280x480 resolution and then scale them down to 640x240. You may ask, "Why don't we just capture at 640x240 and avoid scaling them in Python?" The reason is that PiCamera has some bugs related to capturing images at low resolutions, which result in broken images (the photo mixed with green squares). So, to avoid this, we acquire a large image and scale it down via the GPU. This way, there is no additional load on our ARM CPU!
- Why do we use BGRA capture, and not BGR? As we mentioned before, we resize the images using the GPU, and the native GPU format is BGRA (or RGBA, if you wish). If you use BGR you will find two issues: the first is a lower FPS (about 10–20% in our case) and the second is a PiCamera warning: "PiCameraAlphaStripping: using alpha-stripping to convert to non-alpha format; you may find equivalent alpha format faster". Googling this warning led us to the PiCamera documentation, where we found the BGRA method.
- Where is PiRGBArray in our code? PiRGBArray is PiCamera's native class for working with camera output. In our experiments, using a hand-made Numpy array instead of PiRGBArray gave us about 50% more FPS. We don't think PiRGBArray is a poor choice; we may have made some mistakes in our implementation, and we'll look into this further at a later time. A sketch combining this note and the two above follows this list.
- What is the CPU load while building the Depth Map with video?
Let me answer this with a screenshot:
We can see that only 1 of the Pi's 4 CPU cores carries the load, even while rendering the GUI, two images and a depth map. This means there is performance headroom here, and we can try to use it via OpenCV optimization (OpenMP) or by using C instead of Python.
- What is the maximum Depth Map FPS we can get with these settings? We were able to achieve ~17 FPS while the camera captures at 20 FPS. FPS appears to be most sensitive to a few parameters: MinDisparity and NumOfDisparities are obvious, as they define the number of "steps" of the matching window while comparing the two images. Another sensitive parameter appears to be preFilterCap.
- What is the CPU temperature?
Using a Compute Module 3+ Lite, we're seeing a temperature of 63.4 °C.
- How can we use the GPU?
It can be used for undistortion and rectification in real time. There are some related approaches, for example using WebGL, the Python-based Pi3d, or Processing for Raspberry Pi. There is also an interesting project called py-videocore by Koichi Nakamura. In our communications, Koichi suggested using his kernels and the StereoBM CUDA sources to accelerate the depth map. So there are a lot of optimization options.
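To wrap up the capture-related notes above (big capture with GPU resize, BGRA format, a plain Numpy buffer), here is a sketch of the capture setup they add up to, with a simple FPS counter (simplified; the 100-frame count is arbitrary):

import time
import numpy as np
from picamera import PiCamera

CAM_W, CAM_H = 1280, 480  # capture large to avoid the low-resolution bugs
IMG_W, IMG_H = 640, 240   # the GPU scales down before Python sees the data

camera = PiCamera(stereo_mode='side-by-side', resolution=(CAM_W, CAM_H), framerate=20)
buf = np.zeros((IMG_H, IMG_W, 4), dtype=np.uint8)  # plain Numpy buffer, BGRA

t0, frames = time.time(), 0
for _ in camera.capture_continuous(buf, format='bgra',
                                   use_video_port=True, resize=(IMG_W, IMG_H)):
    frames += 1
    if frames == 100:
        break
print('%.1f FPS' % (frames / (time.time() - t0)))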
Thank you for reading!
Here is our source code with the Raspbian image links: