Skype 3D and SLP Raspbian update
03/04/2019 at 11:31
As you know, all our Starter and Deluxe kits will include a microSD card with a ready-to-use Raspbian image so you can repeat all of our livestream experiments right out of the box. We’ve been busy polishing existing features and adding new ones to this image. In this update, we’ll share with you some new features and say a few words about our experiments with Skype and 3D video.
Latest StereoPi Livestream Playground (SLP) Image
- Image size reduced from 5 GB to 860 MB
- Video livestream to browser (2D and 3D)
- Livestream to Android over USB cable (Android accessory support)
- Bash console over web admin panel
- File editor over admin panel
- Access to video records over web admin panel
- RTSP livestream support
- MPEG-TS livestream support
- Linux partition now takes 2 GB instead of 4 GB
- FAT32 partition now created automatically on first boot
- RPi 3B+ and CM3+ support (updated kernel)
- Most settings are now in the /boot/stereopi.config file
You can download the image file from one of these three mirrors:
Full descriptions of all features will be added to the SLP section of our wiki in the coming days.
Skype and 3D Video
One of our new features is the ability to livestream over MPEG-TS. We used this feature to livestream video from StereoPi to OBS (Open Broadcaster Software) with this OBS-VirtualCam plugin installed. OBS creates a virtual webcam accessible to Skype. Here’s a proof-of-concept demo, recorded by Sergey:
And here is a screen capture of my iPhone and our first 3D Skype call:
As the iOS screen recorder does not record sound, I added music to the video.
To get all these things to work, we used two tricks. First, we used the microphone of a Logitech webcam connected to the same computer to provide audio to Skype, since OBS-VirtualCam cannot emulate a sound device and the sound from the StereoPi’s microphone therefore isn’t available to Skype.
Second, we avoided a roughly one-second lag between audio and video (caused by an internal OBS video buffer) by first streaming the video from the StereoPi to GStreamer on Windows and then pointing OBS at the GStreamer window as its video source, which brought the delay down to about 100 milliseconds.
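We won’t reproduce the exact GStreamer pipeline here, but if you just want to confirm that the MPEG-TS stream reaches the desktop, OpenCV’s FFmpeg backend can open it directly. A minimal sketch, assuming H.264 video inside MPEG-TS arriving over UDP on port 3001 (the port is a placeholder, not our actual setting):

# Quick sanity check of the incoming MPEG-TS stream on the desktop side.
# Assumes H.264 video in MPEG-TS over UDP on port 3001 (placeholder values).
import cv2

cap = cv2.VideoCapture("udp://@:3001", cv2.CAP_FFMPEG)  # needs OpenCV built with FFmpeg

while True:
    ok, frame = cap.read()
    if not ok:
        continue  # no frame yet, keep waiting
    cv2.imshow("StereoPi MPEG-TS preview", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()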
This test shows that a stereoscopic livestream can be used with a lot of common software, like Skype and other video applications, without any modification, since they already work with a traditional camera.
If you want to discuss more features, please join this thread on the Raspberry Pi forum.
Wanna play with our Raspbian image?
02/07/2019 at 11:38
If you have a classic Raspberry Pi with a camera, you can repeat all of our video livestream experiments: livestreaming to YouTube, Android, and Oculus Go. You can also repeat our behind-the-scenes experiments with livestreaming to a Windows desktop, a Mac, or any RTMP server.
Today we want to share with you our Raspbian image. We call it SLP (StereoPi Livestream Playground). It supports single-camera mode and also two-camera mode for StereoPi.
You can find the image, the Android application, and a brief manual in our Wiki -
Our crowdfunding is now live!
01/30/2019 at 20:04
We are pleased to announce the launch of the StereoPi campaign! :-)
Factory prototypes passed all tests
12/25/2018 at 10:13
As we mentioned in our previous update, three weeks ago we took the first step in preparing for production at our chosen factory. We are glad to report that this first step is now successfully complete!
Here is what happened during these three weeks:
- During the first week, the factory started PCB manufacturing and began purchasing components.
- During the second week, the components were mounted using the same equipment that will be used for batch production. At this step, all components were mounted except for some connectors. We received some photos at this stage:
Also during the second week, all the components needed for testing arrived at the factory (a Raspberry Pi Compute Module 3 Lite and two cameras).
- The factory performed all tests on all 20 pieces, and all 20 passed them successfully!
If you’re curious: for the tests, our team created a microSD Raspbian image with auto-started self-tests, and for every StereoPi the factory does the following:
- Insert a microSD card with the test software
- Insert a Raspberry Pi Compute Module 3 Lite
- Connect the two cameras
- Connect an Ethernet cable from the StereoPi to a router
- Connect an HDMI monitor
- Connect a USB device
- Turn the power on
A script runs on startup and checks everything: camera modules, USB, Ethernet. If every subtest passes, the script shows a "green" result (see the sketch at the end of this update). We obtained "green" for all 20 pieces.
- All 20 pieces have been packed and sent to us. We expect to receive them next week. Here’s a photo of the final sample and the batch packed in a box:
The last thing we plan to do is run aggressive tests in our lab. If those tests don’t reveal anything that needs patching, it means the hardware design is approved and we are ready to press the "start" button on production right after a successful crowdfunding campaign. We are doing our best to launch the campaign in the next few weeks!
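As promised above, here is a rough idea of what such an auto-started self-test can look like. The specific commands and the "green" criterion below are illustrative assumptions, not the factory’s real script:

#!/usr/bin/env python
# Illustrative sketch of an auto-started factory self-test (not the real script).
# It checks roughly the same things: both cameras, the Ethernet link, and USB devices.
import subprocess

def run(cmd):
    try:
        return subprocess.check_output(cmd, shell=True).decode()
    except subprocess.CalledProcessError:
        return ""

checks = [
    # the firmware should report both camera modules as detected
    ("cameras", "vcgencmd get_camera", "detected=2"),
    # a live Ethernet link shows up as 'state UP' on eth0
    ("ethernet", "ip link show eth0", "state UP"),
    # at least one USB device should be listed
    ("usb", "lsusb", "ID"),
]

results = []
for name, cmd, expected in checks:
    ok = expected in run(cmd)
    results.append(ok)
    print("[{0}] {1}".format("PASS" if ok else "FAIL", name))

print("GREEN" if all(results) else "RED")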
You from 3rd person view: StereoPi + Oculus Go
12/25/2018 at 10:08
A friend of mine runs a VR club and asked me whether it’s possible to create a third-person view in real life. So we decided to conduct another experiment using our StereoPi (a stereoscopic camera with a Raspberry Pi inside).
My friend showed me some screenshots to clarify what it should look like:
After several days spent assembling the mechanical parts and writing some code for video livestreaming to the Oculus Go, our team created a prototype of this third-person camera view.
Tests
I went to a friend’s office party on Friday and offered his colleagues the chance to take part in the first tests of this system. The results were really impressive! Here are some interesting moments:
What’s “under the hood”?
1. Electronics
We used a StereoPi v0.7 with a Raspberry Pi Compute Module 3 Lite. For cameras, we chose Waveshare 160-degree cameras.
2. Mechanics
We 3D printed a simple case and created a laser-cut camera support plate:
To attach the camera to the person’s back, we created this construction from plastic tubes and colored it with liquid rubber:
3. Software
On the StereoPi side, we created a simple application to capture videos in stereoscopic mode with raspivid. This application also supports autodiscovery functions, to automatically find and connect to applications (currently ones on Android and Oculus Go). To provide for easy adjustment of settings, we created a simple admin panel, available over WiFi.
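We won’t go into our autodiscovery code here, but the usual pattern behind such a feature is a small UDP broadcast beacon: the camera announces itself on the network and the client apps listen for the announcement. A minimal sketch of that pattern (the port and payload are made up for illustration):

# Minimal UDP broadcast beacon, the usual pattern behind camera autodiscovery.
# The port and payload below are illustrative only, not our actual protocol.
import socket
import time

PORT = 18000            # hypothetical discovery port
PAYLOAD = b"STEREOPI"   # hypothetical announcement message

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)

while True:
    # announce ourselves once per second; client apps listen on the same port
    sock.sendto(PAYLOAD, ("255.255.255.255", PORT))
    time.sleep(1)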
To stream video from the Oculus Go to a computer for observers, we used scrcpy-win64 and switched the Oculus from a MicroUSB cable to a wireless (ADB over Wi-Fi) connection. This allowed us to see the livestream on an external screen:
For the Oculus Go we used our Android application. It is not yet fully adjusted for Oculus, but it was enough for the first tests. This app uses the network to automatically find StereoPi, request access to its video, and begin livestreaming it to the user.
4. Some specific settings
To minimize latency, we set the camera to 42 FPS (the maximum available on the Raspberry Pi in stereoscopic mode without overclocking) with 1280x720 resolution. The bitrate was set to 3 Mbit/s. With these settings, latency was around 100 ms.
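For reference, the same camera settings can be reproduced with a plain raspivid call; this is just a sketch with an arbitrary output file, not our streaming application’s actual command line:

# Reproduce the capture settings described above with raspivid:
# side-by-side stereo, 1280x720, 42 fps, 3 Mbit/s. The output file name is arbitrary;
# our real application streams over the network instead of recording to disk.
import subprocess

subprocess.call([
    "raspivid",
    "-3d", "sbs",      # stereoscopic side-by-side mode (two cameras on the Compute Module)
    "-w", "1280",
    "-h", "720",
    "-fps", "42",
    "-b", "3000000",   # 3 Mbit/s
    "-t", "0",         # run until interrupted
    "-o", "test.h264",
])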
As I mentioned previously, we used two wide angle cameras. In this case, we cropped the left and right images to keep the aspect ratio that people are comfortable with. When using cameras on drones, we usually compress the images horizontally to maintain the original FOV; however, this time we planned to show it to people untrained in FPV flying.
There was no stereoscopic calibration or on-the-fly rectification of the stereoscopic video. We just tuned the cameras’ axes to be as close to parallel as possible and livestreamed the result "as is". We added calibration and rectification to our to-do list for the future.
5. Conclusion
The tests worked out very well. All of the testers left in a good mood and with new experiences. For the first several seconds it was best to support the users to prevent them from falling, until their perception adapted to the new reality.
As for the hardware, the StereoPi met our expectations as a quick prototyping tool in this case as well, which again proves its usefulness in this kind of project.
If you would like to know more about the developments in StereoPi production, and to take part in our upcoming crowdfunding campaign, you can subscribe to updates on our pre-launch page here: https://www.crowdsupply.com/virt2real/stereopi
ROS: a simple depth map using StereoPi
12/25/2018 at 09:55
If you use ROS when creating robots, then you probably know that it supports utilization of stereo cameras. For example, you can create a depth map of the visible field of view, or make a point cloud. I began to wonder how easy it would be to use our StereoPi, a stereo camera with Raspberry Pi inside, in ROS. Earlier, I’d tested and confirmed that a depth map is easily built using OpenCV; but I had never tried ROS - and so, I decided to conduct this new experiment, and document my process of looking for the solution.
1. Does ROS for Raspberry Pi exist?
First, I decided to find out if it’s even possible to run ROS on a Raspberry Pi. The first thing that came up on a Google search was a list of instructions for installing various versions of ROS onto the Raspberry Pi. This was great - I already had something to go off of! I well remembered how long it took to build OpenCV for the Raspberry (about 8 hours), so I decided to look for ready-made microSD images to save me some time.
2. Are there any ready-made MicroSD cards with ROS for Raspberry?
Apparently, this issue has also already been solved by several teams of engineers. If you don’t count the one-off creations by enthusiasts, there were two images that were consistently updated with new versions of the OS and ROS.
The first was ROS installed on top of native Raspbian, from the ROSbots team.
Here’s a regularly updated link.
The second was an Ubuntu-based image from Ubiquity Robotics.
And so, the second question was also quickly solved. It was time to dive deeper.
3. What’s the setup for working with a Raspberry Pi camera on ROS?
I decided to check which stereo cameras had ready-made drivers for ROS via this page: http://wiki.ros.org/Sensors
Here, I found 2 subsections:
2.3 3D Sensors (range finders & RGB-D cameras)
2.5 Cameras
It turned out that the first subsection listed not only stereo cameras, but also TOF sensors and scanning lidars - basically, everything that can immediately provide 3D information. The second was the one with the bulk of the stereo cameras. An attempt to look for drivers for several stereo cameras didn’t bring me any more joy, as it hinted at a gruelling amount of code.
Alright, I decided. Let’s take a step back. How does just one Raspberry Pi camera work in ROS?
Here, I was greeted by 3 pleasant surprises:
- Apparently, there exists a special node for ROS called raspicam_node, specialized for working with the Raspberry Pi camera.
- The sources of this node are on GitHub, and the code is regularly maintained and well documented: https://github.com/UbiquityRobotics/raspicam_node
- The creator of the node, Rohan Agrawal (@Rohbotics), works for a company that actively maintains one of the ready-made images for the Raspberry Pi.
I looked over the raspicam_node GitHub repository and checked the issues section. There, I discovered an open issue called "stereo mode", almost 7 months old, without any answers or comments. In it, more or less, the rest of the story unfolded.
4. Hardcore or not?
To avoid asking the authors any childish questions, I decided to check the sources and see what adding stereo mode would entail. I was mostly interested in this C++ section: https://github.com/UbiquityRobotics/raspicam_node/tree/kinetic/src
It turned out that the driver was coded at the MMAL level. I then remembered that the implementation code for stereoscopic mode was open-source and readily available (you can find the implementation history here on the Raspberry Pi forum); the task of coding a full stereoscopic driver for ROS was doable, but sizable. Furthermore, I looked at the driver descriptions of other stereoscopic cameras and found out that the driver needed not only to publish the left and right images, but also to handle separate calibration parameters for each camera and do a lot of other things. This would have stretched the experiment out to one or two months. After considering this, I decided not to put all my eggs in one basket: I would split up my efforts, asking the author about support for stereo, and meanwhile trying to find a simpler, but functional solution on my own.
5. Conversations with the author
In a GitHub thread about stereo mode I asked the author a question, mentioning that stereo has been supported by the Raspberry Pi since way back in 2014, as well as offering to send him a development board if he needed it for experiments. Remember, at this point I still doubted that stereo would work in this distribution the way it does in native Raspbian.
To my surprise, Rohan answered quickly, writing that their distribution uses a Raspbian kernel, and so everything should work fine. He asked me to test this on one of their builds.
A Raspbian kernel! Now I don’t have to sell my soul to capture a stereo image!
I downloaded their latest image via a link from Rohan and launched a simple Python script for capturing stereoscopic images. It worked!
After this response, Rohan wrote that he would check the driver’s code for stereoscopic mode support, and asked several questions. For example, our stereo mode outputs one combined image, but ROS needed two halves: a left and a right. Another question was about the calibration parameters for each camera.
I responded that for the initial stages, we could just grab the images from the cameras individually. Of course, this would leave them unsynchronized in terms of capture time and color/white balance settings, but as a first step it would serve the purpose just fine.
Rohan released a patch which permitted the user to select, in ROS, which of the cameras to pull images from. I tested this - the camera selection worked, which was already an excellent result.
6. Unexpected help
Suddenly, a comment from a user named Wezzoid appeared in the thread. He described his experience creating a project based on a stereoscopic setup using a Pi Compute Module 3 on a Raspberry Pi devboard. His four-legged walking robot tracked the position of an object in space, moved its cameras, and kept a set distance from it. Here's the Hackaday project itself, by @Wes Freeman:
He shared the code with which he was able to capture an image, cut it into two halves using Python tools, and publish them as separate left and right camera topics. Python isn’t the fastest at this, so he used a low resolution of 320x240 as well as a neat lifehack: if the stereo image is captured side-by-side (one camera on the left of the image, one on the right), Python has to cut each of the 240 rows in half. However, if the image is stitched together in top-bottom format (the left camera in the top half of the image, the right in the bottom), Python can cut it in half in a single operation, which is exactly what Wezzoid did.
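To make the lifehack concrete, here is a tiny sketch (mine, not Wezzoid’s code) of what the split looks like in numpy for both layouts; the top-bottom halves are contiguous blocks of memory, which is what makes that layout cheaper to slice and hand off:

# Splitting a combined stereo frame into halves for both layouts.
import numpy as np

# stand-in arrays for a combined 320x240-per-eye capture
tb_frame = np.zeros((480, 320, 3), dtype=np.uint8)   # top-bottom: 2 x 240 rows
sbs_frame = np.zeros((240, 640, 3), dtype=np.uint8)  # side-by-side: 2 x 320 columns

# top-bottom: each half is one contiguous slab of memory
left_tb, right_tb = tb_frame[:240], tb_frame[240:]

# side-by-side: the halves are split across every row, so they are not contiguous
left_sbs, right_sbs = sbs_frame[:, :320], sbs_frame[:, 320:]

print(left_tb.flags["C_CONTIGUOUS"], left_sbs.flags["C_CONTIGUOUS"])  # True False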
He also published the Python code he used for this process on Pastebin.
7. Launching the publication of the left and right camera nodes
Upon first launch, the code complained that it wasn’t able to access the YML files with the camera parameters. I was using the Raspberry Pi Camera V2, and remembered that on GitHub, in addition to raspicam_node, there were files with the calibration results for various camera models: https://github.com/UbiquityRobotics/raspicam_node/tree/kinetic/camera_info . I downloaded one of them, made two copies, and saved them as left.yml and right.yml after adding the camera resolutions from the author’s code. Here’s the file for the left camera as an example.
For the right camera, the camera name is changed to right, and the file is renamed right.yml; other than that, the file is identical.
Since I didn’t plan on creating a complex project, I didn’t recreate the author’s lengthy paths and subfolders, and instead simply placed the files in the home folder next to the python script. The code successfully started up, outputting status updates into the console.
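As a side note, these files follow the standard ROS camera calibration YAML layout, so a quick check like this (a sketch assuming the stock field names) confirms they parse and carry the expected resolution before the node uses them:

# Quick sanity check of left.yml / right.yml: parse the standard ROS camera
# calibration YAML and confirm the resolution matches the one used in the script.
import yaml

for name in ("left.yml", "right.yml"):
    with open(name) as f:
        calib = yaml.safe_load(f)
    # standard fields: camera_name, image_width, image_height, camera_matrix, ...
    print(name, calib["camera_name"], calib["image_width"], "x", calib["image_height"])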
All that remained was to check what the left and right cameras ended up publishing. To view this, I launched rqt_image_view. The topics /left/image_raw and /right/image_raw appeared in the drop-down menu; when I chose them, I was shown the separate feeds from the left and right cameras. Fantastic, this thing worked! Now it was time for the most interesting part.
8. Looking at the depth map
For the depth map, I didn’t attempt to create my own approach, and instead followed the basic ROS manual for setting up stereo parameters.
From that, I was able to figure out that it would be easiest to publish both topics under a dedicated namespace rather than at the root, as Wezzoid had done. After some tweaks, lines from the old code such as
left_img_pub = rospy.Publisher('left/image_raw', Image, queue_size=1)
began to look more like this:
left_img_pub = rospy.Publisher('stereo/left/image_raw', Image, queue_size=1)
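Put together, the publishing side looks roughly like this. It is a condensed sketch of the approach, not Wezzoid’s full script; the camera_info publishing and the real capture loop are omitted:

#!/usr/bin/env python
# Condensed sketch: split a top-bottom stereo frame and publish the halves
# under the /stereo namespace (camera_info publishing and capture loop omitted).
import numpy as np
import rospy
from sensor_msgs.msg import Image
from cv_bridge import CvBridge

rospy.init_node("stereo_publisher")
bridge = CvBridge()
left_img_pub = rospy.Publisher("stereo/left/image_raw", Image, queue_size=1)
right_img_pub = rospy.Publisher("stereo/right/image_raw", Image, queue_size=1)

def publish_pair(frame):
    # frame: combined top-bottom stereo image, left camera in the top half
    h = frame.shape[0] // 2
    stamp = rospy.Time.now()
    for pub, half in ((left_img_pub, frame[:h]), (right_img_pub, frame[h:])):
        msg = bridge.cv2_to_imgmsg(half, encoding="bgr8")
        msg.header.stamp = stamp  # identical stamps let stereo_image_proc pair the frames
        pub.publish(msg)

# stand-in frame so the sketch runs; a real capture loop would go here
publish_pair(np.zeros((480, 320, 3), dtype=np.uint8))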
We then launch the stereo processing node, stereo_image_proc:
ROS_NAMESPACE=stereo rosrun stereo_image_proc stereo_image_proc
And of course we want to see the result, so we launch the viewer:
rosrun image_view stereo_view stereo:=/stereo image:=image_rect_color
Finally, to configure parameters of the depth map, we launch the configuration utility:
rosrun rqt_reconfigure rqt_reconfigure
In the end, we see the image embedded at the beginning of this article. Here’s a zoomed in screenshot:
I published all of the files for this program on github: https://github.com/realizator/StereoPi-ROS-depth-map-test
9. Plans for the future
After I published my results, Rohan wrote “Very cool! Looks like I am going to have to pick up a StereoPi”. I then mailed him the board. Hopefully, having the hardware on hand will make it easier for him to develop and debug a full-fledged stereo driver for ROS and the Raspberry Pi.
10. Conclusion
It’s possible to create a depth map from a stereo image using ROS on StereoPi with Raspberry Pi Compute Module 3 inside, and in fact in several ways. The path we selected for quick testing isn’t the best in terms of performance, but can be used for basic application purposes. The beauty lies in its simplicity and ability to immediately begin experiments.
Oh, and fun fact: after I had already published my results, I noticed that Wezzoid, who had suggested the utilized solution, had actually been the author of the initial question about the publication of two stereo images. He asked it, and he himself resolved it!
Stitching 360 panorama with StereoPi
12/25/2018 at 09:33
In this article we will continue our experiments with the StereoPi stereoscopic camera based on the Raspberry Pi Compute Module. This time, we will create a 360 degree panoramic photo!
Click on image for online panorama view
Intro
In our last experiments, we installed the cameras side-by-side with parallel axes and worked with the stereoscopic effect. Today, we will use the inverted approach: cameras pointed in opposite directions, equipped with wide-angle fisheye optics, each with a 200-degree field of view.
Let’s start at the end: here’s our resulting creation — basically, a panorama.
Hardware
We have a StereoPi board…
…with Raspberry Pi Compute Module 3 Lite inside…
…and two wide-angle RPi (M) WaveShare cameras:
We then attach the cameras back-to-back:
Then, we capture a photo with each camera, and get these two pictures using raspistill:
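If you want to reproduce this capture step, one way is to call raspistill once per sensor using its camera-select option. A sketch, assuming your raspistill build supports the -cs flag, with the same file names as used below:

# Capture one still from each camera using raspistill's camera-select option.
# File names mirror the ones used in the stitching example below.
import subprocess

for cam, name in ((0, "21.jpg"), (1, "21-2.jpg")):
    subprocess.call(["raspistill", "-cs", str(cam), "-o", name])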
Panorama stitching
Here, a question arose: how could we combine these two images into one with an equirectangular projection, such as is supported by almost all panorama viewing software?
After a long investigation, we found a 360-camera project, which was used as a reference for our future code.
However, to start stitching, we first had to prepare a template for future transformations in a desktop panorama application. For this, we used Hugin, which is open source and can be downloaded here: http://hugin.sourceforge.net/download/
1. So, we’ve downloaded, set up, and started the software.
2. Now, we need to choose “Simple” in the “Interface” menu.
3. Press “Load images…” and add our two files (21.jpg and 21-2.jpg).
4. Set “Lens type” to “Circular Fisheye”. “Focal Length” should be set to 1.2 mm, and “Focal length multiplier” to 7.6x. In the “Projection” tab, check that “Field of view” is set to 360x180 and the projection to “Equirectangular”. These are the default settings.
5. Next, press the button “2. Align…”. This begins the search for control points, which should find around 10-13 points. At this step, our panorama is already starting to look like a panorama.
6. And now for the most important step: saving the project for later use in the automatic stitching of all consecutive panoramas captured by our two fisheye cameras. To save, go to “File” -> “Save as…” -> filename “stereopi-template.pto”.
7. Then, we go back to the “Assistant” tab (if we happened to leave it) and press “3. Create panorama”. A new window appears, in which we set a height of 1944; the width adjusts automatically. The LDR format (the resulting picture format) should be set to JPEG. The image quality is at your discretion; the default is 90. This time around, we don’t touch “Corrections” and simply press “Save”.
8. After this, several windows appear, one of which shows the progress log.
9. In the end, we get something that looks like this:
OK, so all of this is great, but do we really want to go through this for every captured photo?
To improve efficiency, we need to make this process automatic. For this, we will use the project file saved at step 6. We take this file (in our case it’s called stereopi-template.pto) and copy it over to the StereoPi.
We also copy our script, stereopi-stich.sh. This script takes parameters: the file names of the fisheye photos to be stitched into an equirectangular projection.
But first, we need to install all the required software on the StereoPi. Simply download this script to the StereoPi and run it: installer.sh
Now let’s run the stitching script:
# ./stereopi-stich.sh 21.jpg 21-2.jpg
Stiching files 21.jpg and 21-2.jpg
Generating pto file…
Reading /opt/Pano/test1/21-2.jpg…
Reading /opt/Pano/test1/21.jpg…
Assigned 1 lenses.
Written output to /opt/Pano/test1/tmp/project.pto
Written output to ./tmp/project.pto
number of cmdline args: 1
==================================
Stitching panorama
==================================
nona -z LZW -r ldr -m TIFF_m -o 21_21-2-pano -i 0 ./tmp/project.pto
nona -z LZW -r ldr -m TIFF_m -o 21_21-2-pano -i 1 ./tmp/project.pto
checkpto --generate-argfile=project.pto_21_21-2-pano.arg ./tmp/project.pto
enblend --compression=90 -w -f2688x1344 -o 21_21-2-pano.jpg -- 21_21-2-pano0000.tif 21_21-2-pano0001.tif
enblend: info: loading next image: 21_21-2-pano0000.tif 1/1
enblend: info: loading next image: 21_21-2-pano0001.tif 1/1
enblend: info: writing final output
Bogus input colorspace
exiftool -overwrite_original_in_place -TagsFromFile /opt/Pano/test1/21-2.jpg -WhitePoint -ColorSpace -@ /usr/share/hugin/data/hugin_exiftool_copy.arg -@ project.pto_21_21-2-pano.arg 21_21-2-pano.jpg
1 image files updated
==================================
Remove temporary files
==================================
rm project.pto_21_21-2-pano.arg 21_21-2-pano0000.tif 21_21-2-pano0001.tif
The process takes about 50 seconds. It would be good to look for some optimizations in the future, but right now it’s good enough for testing purposes.
This file is the result:
Voila! We’ve automatically stitched two fisheye images into one equirectangular file! Now, we can use this script for stitching all future files. The key point is to avoid changing the relative camera positions; otherwise, the stitching quality will suffer.
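If you end up with a whole folder of such pairs, a few lines of Python can drive the script over all of them. The N.jpg / N-2.jpg naming convention below is just the one from this example:

# Batch-stitch every fisheye pair in the current folder by calling the stitching
# script for each one. Assumes the N.jpg / N-2.jpg naming used in this example.
import glob
import subprocess

for first in sorted(glob.glob("[0-9]*.jpg")):
    if first.endswith("-2.jpg") or "-pano" in first:
        continue  # skip second-camera shots and already stitched panoramas
    second = first.replace(".jpg", "-2.jpg")
    subprocess.call(["./stereopi-stich.sh", first, second])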
Now, we just need to embed it somewhere on our web page. A panorama player is required to view it.
We think the best embeddable player for panoramas is KRPano. It supports both photos and videos; however, a paid license is required. We bought a license about 5 years ago, but at that time it was based on Adobe Flash and didn’t support HTML5. Now it does, so we plan to buy a new license: https://krpano.com/
You can download my archive with everything you need to obtain this result. The most fun view mode is “Little planet” (to change the view mode, just right-click).
We hope that our experiment will not only be interesting for you, but will also be useful as a step-by-step manual. Thank you for your attention!
P.S. Have you subscribed to our crowdfunding news yet?
Useful links: