As I was born with this: https://en.wikipedia.org/wiki/Diplopia , I have never seen the world in 3D like most people do. Everything looks like two superimposed postcards with some offset. In my country, it's forbidden to drive with such a condition, even though I never run into doors or people, or miss my mouth when eating ;-) In my case, no treatment, glasses, or surgery could solve it, for some reason, and I thought that was it for the rest of my life. This project was at first aimed at suppressing the offset between the two images I see. Extra bonus: I can see in 3D when the two images are perfectly re-aligned! So I thought I'd improve the project and share my findings.
Components
1× VR glasses: like Google Cardboard or Gear VR.
1× Smartphone: has to fit in the VR glasses. The higher the resolution, the less "screen-door effect" you'll see.
I wanted a headset with a USB connection to the smartphone, and it turned out my smartphone almost fits in the Samsung Gear VR, so I went for it. I found one second-hand for around $20. I had to trim the corners and it fit. Then I replaced all the internal electronics, which were incompatible and useless to me, with a micro-USB cable.
With everything put together, it looks like this.
The smartphone sits on the same USB hub as the Raspberry Pi on the right; both are connected to the left Pi, which acts as the server for everything.
I am using this streaming tool to access the Raspberry Pi camera with low lag: https://github.com/hurdaskellir/h264-live-player . It creates a web page where you can watch the live video. Just run Chrome on the phone and open the page served by the Raspberry Pi. You first have to find its IP on the USB Ethernet network, via ssh or another method, and that's it. I measured the delay to be lower than 250 ms by taking a picture of both a running stopwatch and the smartphone displaying the video of it. Let's hope I don't get seasick when I use this in front of my eyes... ;-) My phone is a Sony Xperia Z5 Premium with a 4K display, so the resolution should be nice when viewed through the VR headset.
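If you want to reproduce the latency test, all you need is a millisecond stopwatch on a screen: film it with the PiCam, photograph the screen and the phone in one shot, and subtract the two readings. Here is a minimal sketch of such a stopwatch, assuming Python 3 with OpenCV and NumPy installed; the window name, font, and Esc-to-quit binding are my own choices, not part of the project:

```python
# latency_stopwatch.py - display a millisecond counter to photograph
# alongside the phone; the difference of readings is the glass-to-glass lag.
import time
import cv2
import numpy as np

start = time.monotonic()
while True:
    elapsed_ms = int((time.monotonic() - start) * 1000)
    # Draw the elapsed time on a black canvas.
    frame = np.zeros((200, 640, 3), dtype=np.uint8)
    cv2.putText(frame, f"{elapsed_ms} ms", (40, 120),
                cv2.FONT_HERSHEY_SIMPLEX, 2.0, (255, 255, 255), 3)
    cv2.imshow("stopwatch", frame)
    if cv2.waitKey(1) & 0xFF == 27:  # Esc to quit
        break
cv2.destroyAllWindows()
```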
I am trying to use only what I already have on my shelves, both to keep the cost as low as possible and because I do not want to wait for spare parts ordered from some Asian country, with delivery delays of around a month. Here is the idea: I want to use two Raspberry Pi Zeros linked together by a USB cable, one being a USB gadget connected to the other. The video streaming between the two will be handled entirely by USB virtual network interfaces and routed through iptables. The USB hub will also host an Android smartphone in USB tethering mode, so it joins the same network and can watch the images streamed by both Raspberry Pis.
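For the routing part, the server Pi just has to forward packets between the two USB network interfaces. A sketch of the setup, assuming Python 3 run as root and that the gadget Pi shows up as usb0 and the phone's tethering link as usb1 (check yours with `ip addr`; these names and the rules are my assumptions, not the project's exact configuration):

```python
# route_usb.py - run on the "server" Pi Zero: forward traffic between the
# gadget Pi (usb0) and the phone's USB tethering interface (usb1).
import subprocess

def sh(cmd):
    """Run a shell command, echoing it first, and fail loudly on error."""
    print("+", cmd)
    subprocess.run(cmd, shell=True, check=True)

# Let the kernel route packets between the two USB network interfaces.
sh("sysctl -w net.ipv4.ip_forward=1")

# Accept forwarded traffic in both directions.
sh("iptables -A FORWARD -i usb0 -o usb1 -j ACCEPT")
sh("iptables -A FORWARD -i usb1 -o usb0 -j ACCEPT")

# Masquerade so the gadget Pi's stream appears to come from this Pi
# on the phone's side of the link.
sh("iptables -t nat -A POSTROUTING -o usb1 -j MASQUERADE")
```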
Interesting project! How are you going to transmit the video to the smartphone? Maybe after you find the right image-shift offset, you can get a prism with the right parameters and make eyeglasses from it.
Thank you :-) At the moment, I am playing with WebGL and a gamepad to interact with the web browser on the smartphone and align the two images to match my eyesight. The picture I posted shows a Gear VR with two Raspberry Pi Zeros and PiCams. The two Zeros are configured as RNDIS USB devices on my computer, and everything is displayed on a web page... It lags and is not very efficient. I haven't tried to align anything physically yet, as it was only a first proof of concept; that will require some more precise adjustments. Maybe in the end I will just use an RPi 3 with a 5.5" display (is 4K possible?) and two USB webcams, so that everything is fluid and real-time.
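The alignment itself boils down to drawing the two camera images with an adjustable (dx, dy) offset between them until the ghosting disappears. My page does this in WebGL with the Gamepad API, but the principle fits in a few lines; here is a desktop sketch of the same idea using Python with OpenCV, with keyboard keys standing in for the gamepad, placeholder file names, and the assumption that both captures are the same size:

```python
# align_offset.py - overlay the left/right camera images with an adjustable
# offset; misalignment shows up as ghosting in the blended view.
import cv2
import numpy as np

left = cv2.imread("left.jpg")    # placeholder file names
right = cv2.imread("right.jpg")  # must be the same size as left
dx, dy, step = 0, 0, 2           # current offset and per-keypress step

while True:
    # Shift the right image by (dx, dy) with an affine translation.
    M = np.float32([[1, 0, dx], [0, 1, dy]])
    shifted = cv2.warpAffine(right, M, (right.shape[1], right.shape[0]))
    # Blend the two views 50/50 so the residual offset is visible.
    cv2.imshow("overlay", cv2.addWeighted(left, 0.5, shifted, 0.5, 0))
    key = cv2.waitKey(30) & 0xFF
    if key == 27:                # Esc to quit
        break
    elif key == ord('a'): dx -= step
    elif key == ord('d'): dx += step
    elif key == ord('w'): dy -= step
    elif key == ord('s'): dy += step
```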