Working with the Oculus Rift is a bit more complex than a Google Cardboard solution.
The Google Cardboard is a simple concept. (https://vr.google.com/cardboard/)
The mobile phone displays a separate picture to each eye. In our robot we display the two cameras' pictures, one for each eye, and this creates the 3D image. For the phone we used a simple browser page which shows the left camera stream on the left side and the right camera stream on the right side of the display. For the motion tracking I used the HTML5 device orientation events (https://w3c.github.io/deviceorientation/spec-source-orientation.html).
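For illustration, here is a minimal sketch of such a page. The robot's IP address, the stream ports and the websocket message format are placeholders, not the exact code we run:

```html
<!-- Sketch of the phone VR page: two mjpeg streams side by side,
     head orientation sent to the robot over a websocket.
     IP, ports and message format are assumptions. -->
<body style="margin:0; display:flex">
  <img src="http://192.168.1.10:8080/?action=stream" style="width:50%"> <!-- left camera -->
  <img src="http://192.168.1.10:8081/?action=stream" style="width:50%"> <!-- right camera -->
  <script>
    var ws = new WebSocket('ws://192.168.1.10:9000');
    // HTML5 device orientation: alpha = yaw, beta = front-back tilt, gamma = left-right tilt
    window.addEventListener('deviceorientation', function (e) {
      if (ws.readyState === WebSocket.OPEN) {
        ws.send(JSON.stringify({ alpha: e.alpha, beta: e.beta, gamma: e.gamma }));
      }
    });
  </script>
</body>
```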
The concept behind the Oculus Rift is the same: it displays a separate picture for each eye. But there are lots of other important differences. The Oculus Rift is designed for computer-generated VR.
My first idea was to display the image from the same browser page that I used for the mobile VR. But I had never worked with the Oculus Rift before, so when I started, I quickly realized this isn't just another computer monitor you can drag and drop application windows onto.
So I started to search for solutions to display streaming video in the headset.
For the first experiments I took the web approach and tried out WebVR. I made some progress with it.
I programmed the head movement tracking easily and sent it to the robot with the same websocket solution that I used for the mobile VR. But I couldn't display the stream in the headset. I used A-Frame with WebVR: I could display 3D objects, images, even recorded mp4 video, but not the stream.
The robot uses mjpeg-streamer to send the video over WiFi, and that isn't supported by A-Frame yet.
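To show what worked and what didn't, here is a rough sketch of the A-Frame experiment; the websocket URL and the message format are assumptions for illustration:

```html
<!-- Sketch of the WebVR/A-Frame experiment. Head tracking worked,
     the mjpeg stream did not. -->
<script src="https://aframe.io/releases/0.8.0/aframe.min.js"></script>
<script>
  var ws = new WebSocket('ws://192.168.1.10:9000');
  // Send the camera rotation to the robot on every frame
  AFRAME.registerComponent('head-tracker', {
    tick: function () {
      var rot = this.el.getAttribute('rotation'); // degrees: x (pitch), y (yaw), z (roll)
      if (ws.readyState === WebSocket.OPEN) {
        ws.send(JSON.stringify(rot));
      }
    }
  });
</script>
<a-scene>
  <a-assets>
    <!-- A recorded mp4 plays fine as a texture... -->
    <video id="vid" src="recorded.mp4" autoplay loop muted></video>
    <!-- ...but pointing the same element at the mjpeg-streamer URL fails,
         because the <video> element can't decode MJPEG-over-HTTP. -->
  </a-assets>
  <a-video src="#vid" width="4" height="2.25" position="0 1.6 -3"></a-video>
  <a-camera head-tracker></a-camera>
</a-scene>
```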
I continued the search and tried out a more native approach.
My next experiment was the Unreal Engine. I chose it because in the end I'd like to support multiple VR headsets; the Unreal Engine is compatible with lots of headsets (like WebVR is), and I have some experience with it.
I started the programming with a new socket connection to the robot, because the engine couldn't connect to the websocket. The Unreal Engine has a socket library, but I couldn't make a TCP client with it. So I made a UDP client in the engine and a UDP server on the robot. (In the basic concept, the robot is the server.)
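For illustration, a minimal sketch of the robot-side UDP server is below, written here in Node.js; the robot's actual software stack, the port and the packet format are assumptions:

```js
// Sketch of the robot-side UDP server that receives head rotation
// packets from the Unreal Engine client.
const dgram = require('dgram');
const server = dgram.createSocket('udp4');

server.on('message', (msg, rinfo) => {
  // Assumed packet format: a small JSON object like {"pitch":0,"yaw":90,"roll":0}
  let rot;
  try { rot = JSON.parse(msg.toString()); } catch (err) { return; } // ignore malformed packets
  console.log(`head rotation from ${rinfo.address}:${rinfo.port}`, rot);
  // ...drive the camera pan/tilt from rot here...
});

server.bind(9001, () => console.log('UDP server listening on port 9001'));
```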
The next thing was the image. It was a bit tricky to display the stream in the engine. I used the experimental in-game browser widget, but the widget isn't optimized for VR display. My next idea was to display the browser in front of the player character and lock it to the view. It worked, but the view was automatically converted for the VR headset display: if you look through the headset, each eye sees both camera images. Basically it shows a floating browser in front of the user with the two camera streams.
I wanted to demonstrate it at a meetup, so I changed the side-by-side images to a single camera picture and showed that on the virtual browser in the Unreal Engine, with head tracking. For now it can show 2D video with head tracking. It's an interesting experience, but not what I was looking for.
Now I'm thinking that if I can't make it work with the Unreal Engine soon, I will try out the native approach. Maybe with the Oculus Rift SDK I can display the two streams in the headset easily and send the head tracking data to the robot.
At the meetup we had issues with the WiFi connection again, like at the competition: we experienced high latency and disconnects because of the poor connection. We have to solve this once and for all before another public presentation.
To solve this we will install a 5 GHz WiFi AP, or maybe a router, on the robot.
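If the robot carries a Linux board with a 5 GHz capable adapter, a hostapd configuration along these lines could run the AP. The interface name, SSID, channel and passphrase are only examples:

```
# Sketch of /etc/hostapd/hostapd.conf for a 5 GHz AP on the robot
interface=wlan0
driver=nl80211
ssid=robot-vr
# hw_mode=a selects the 5 GHz band
hw_mode=a
channel=36
ieee80211n=1
wmm_enabled=1
# WPA2 with a pre-shared key
wpa=2
wpa_passphrase=changeme
wpa_key_mgmt=WPA-PSK
rsn_pairwise=CCMP
```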