It's been a while since I've updated this project, and I have to admit my progress has slowed a bit, but I'm far from done working on the 360 camera. I've spent most of my recent time pondering how to create a new camera that can not only capture 360 video, but stitch it together in real time. This is obviously a massive jump in difficulty from the current state of the camera, where the FPGA really doesn't do much work: it just writes the images to the DDR3, the ARM processor reads them out and stores them on a MicroSD card, and my PC does the stitching.
There are 3 main components that I need to figure out for real-time stitched 360 3D video.
- Cameras
- The current cameras I'm using are the 5MP OV5642. These are well-documented around the web and easy to use, but unfortunately they cannot output full resolution above ~5fps, well short of what video needs.
- The sensor that looks most promising for video right now is the AR0330. This is a 3MP 1/3" sensor that can output at 30fps in both MIPI formats and through a parallel interface that is the same as the one used by the OV5642. Open source drivers are available online and assembled camera modules can be purchased for $15 or so on AliExpress; I bought a couple to evaluate. Conveniently, they can be used with these ZIF HDD connectors and their corresponding breakout board.
- As shown in the graphics in my previous project log, to get 3D 360 video you need a minimum of 6 cameras with 180 degree lenses.
- Video Encoding
- Put simply, as far as I've found, there is currently no way for a hobbyist to compress 4K video on an FPGA. I would be more than happy to have anyone prove me wrong. Video encoder cores are available, but they are closed source and likely cost quite a bit to license. So, the video encoding will have to be offloaded until I become a SystemVerilog guru and get 2 years of free time to write an H.265 encoder core.
- The best/only reasonably priced portable video encoding solution I've found is the Nvidia Jetson TX2. It can encode two 4K video streams at 30fps, which should be enough for high-quality 360 3D video. I was able to purchase the development kit for $300, which provides the TX2 module and a carrier PCB with connectors for a variety of inputs and outputs. Unfortunately it's physically pretty large, but for the price you can't beat the processing power. The 12-lane CSI-2 input looks like the most promising way to get video data into it, if I can figure out how to create a MIPI transmitter core (see the back-of-envelope bandwidth check after this list). I've successfully made a CSI-2 RX core, so hopefully making a TX core isn't that much harder...
- Stitching
- The hardest component to figure out is stitching. My planned pipeline for this process will require one FPGA per camera, which will definitely break my target budget of $800, even excluding the Jetson.
- The steps to stitch are (a Python mock-up of one pass is sketched after this list):
  - Debayer the images
  - Remap to a spherical projection
  - Convert to grayscale and downsample
  - Perform block matching
  - Generate a displacement map
  - Bilateral filter the displacement map
  - Upsample and convert the displacement map to a pixel remap coordinate matrix
  - Remap with the displacement applied
  - Output
- This needs to be done twice for each camera, once per eye. With pipelining, each function needs to run twice within one frame period (~33 ms at 30 fps).
- Currently the DE10-Nano looks like the only reasonably priced FPGA with (hopefully) enough logic elements, DDR3 bandwidth, and on-chip RAM. I'll almost certainly need to add a MiSTer SDRAM board to each DE10 to give enough random-access memory bandwidth for bilateral filtering. The biggest issue with the DE10-Nano is that it only has 3.3V GPIO, which is not compatible with LVDS or other high-speed signals, so it will be a challenge to figure out how to send video data quickly to the TX2 for encoding.
- The MAX 10 FPGA 10M50 Evaluation Kit has also caught my attention because of its built-in MIPI receiver and transmitter PHYs, but 200MHz 16-bit DDR2 is just not fast enough and it would probably not be possible to add external memory due to the limited number of GPIOs.
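As a sanity check on the encoding side, the lane math for feeding the TX2 seems to work out comfortably. The figures below are assumptions rather than measurements: RAW10 Bayer pixels straight off the sensors, and a conservative ~1.5 Gbps per D-PHY lane (I haven't verified what rate the TX2's receiver or my transmitter will actually sustain), ignoring protocol overhead and blanking:

```python
# Back-of-envelope CSI-2 bandwidth check. All figures are assumptions,
# not measurements: 4K UHD frames, RAW10 pixels, ~1.5 Gbps per D-PHY lane.

width, height = 3840, 2160   # 4K UHD frame
fps = 30                     # target frame rate
streams = 2                  # one stream per eye
bits_per_px = 10             # RAW10 Bayer data
lanes = 12                   # TX2 CSI-2 lane count
lane_gbps = 1.5              # assumed per-lane D-PHY rate

payload_gbps = width * height * fps * streams * bits_per_px / 1e9
capacity_gbps = lanes * lane_gbps

print(f"payload:  {payload_gbps:.2f} Gbps")   # ~4.98 Gbps
print(f"capacity: {capacity_gbps:.1f} Gbps")  # 18.0 Gbps, before overhead
```

Even with D-PHY and CSI-2 packet overhead, that leaves a lot of headroom, so the lanes themselves shouldn't be the bottleneck; the hard part remains building the TX core.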
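To make the stitching steps concrete, here's a rough PC-side mock-up of one pass of the pipeline in Python/OpenCV. This is a functional sketch only, not the FPGA implementation: OpenCV's StereoBM stands in for my block matching core (it requires an odd block size, so 15 instead of 16), the spherical remap tables `sph_x`/`sph_y` are assumed to be precomputed, and the half-displacement blend and all parameter values are placeholders.

```python
# Functional mock-up of one per-eye stitching pass. Stand-ins throughout:
# StereoBM substitutes for the block matching core, and sph_x/sph_y are
# assumed precomputed float32 spherical-projection remap tables.
import cv2
import numpy as np

DOWN = 4  # downsample factor before block matching

def stitch_pass(raw_a, raw_b, sph_x, sph_y):
    # 1. Debayer the raw sensor data (pattern depends on the sensor).
    img_a = cv2.cvtColor(raw_a, cv2.COLOR_BayerBG2BGR)
    img_b = cv2.cvtColor(raw_b, cv2.COLOR_BayerBG2BGR)

    # 2. Remap both images into the shared spherical projection.
    eq_a = cv2.remap(img_a, sph_x, sph_y, cv2.INTER_LINEAR)
    eq_b = cv2.remap(img_b, sph_x, sph_y, cv2.INTER_LINEAR)

    # 3. Grayscale + downsample to cut the block matching workload.
    g_a = cv2.resize(cv2.cvtColor(eq_a, cv2.COLOR_BGR2GRAY), None,
                     fx=1 / DOWN, fy=1 / DOWN, interpolation=cv2.INTER_AREA)
    g_b = cv2.resize(cv2.cvtColor(eq_b, cv2.COLOR_BGR2GRAY), None,
                     fx=1 / DOWN, fy=1 / DOWN, interpolation=cv2.INTER_AREA)

    # 4-5. Block matching -> displacement map (StereoBM returns
    # fixed-point int16, disparity * 16; invalid blocks are negative).
    bm = cv2.StereoBM_create(numDisparities=64, blockSize=15)
    disp = bm.compute(g_a, g_b).astype(np.float32) / 16.0
    disp = np.maximum(disp, 0)  # clamp invalid blocks for this sketch

    # 6. Bilateral filter: smooth the map while preserving depth edges.
    disp = cv2.bilateralFilter(disp, 9, 8.0, 8.0)

    # 7. Upsample and turn displacement into per-pixel remap coordinates.
    disp = cv2.resize(disp, (eq_a.shape[1], eq_a.shape[0])) * DOWN
    xs, ys = np.meshgrid(np.arange(eq_a.shape[1], dtype=np.float32),
                         np.arange(eq_a.shape[0], dtype=np.float32))
    map_x = xs + 0.5 * disp  # shift halfway toward the other view
    map_y = ys

    # 8. Final remap with the displacement applied.
    return cv2.remap(eq_a, map_x, map_y, cv2.INTER_LINEAR)
```

On the FPGA each of these stages becomes its own streaming block, which is exactly why the random-access-heavy steps (bilateral filtering, the final remap) drive the memory bandwidth requirements mentioned above.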
At this point, the video camera is 99% vaporware, but I've been making some progress lately on the stitching side, so I figured it was time to make a project log. I've created a 16x16 block matching core that worked pretty well in simulation, as well as a fast debayering core, and I'm working on memory-efficient remapping. I'll keep posting here for the near future, but if/when the ball really gets rolling I'll create a new project page and link to it from here. I plan to dip my toe in the water by first building with 2 or 3 cameras, and if that works well, I'll move to full 360 like I did with the original camera. If anyone has suggestions for components, relevant projects, or research to look at, I would love to hear from you.
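For anyone curious what the block matching core actually computes, here's a minimal software model of a 16x16 sum-of-absolute-differences (SAD) search. The function shape and the ±24 px search range are my illustration here, not the core's actual parameters, and the real core pipelines this across many blocks in parallel rather than looping:

```python
# Minimal model of a 16x16 SAD block match. The search range and names
# are illustrative; the HDL core pipelines this rather than looping.
import numpy as np

BLOCK = 16
SEARCH = 24  # horizontal search range in pixels (assumed)

def match_block(ref, tgt, bx, by):
    """Find the horizontal offset of block (bx, by) in ref within tgt."""
    block = ref[by:by + BLOCK, bx:bx + BLOCK].astype(np.int32)
    best_dx, best_sad = 0, np.inf
    for dx in range(-SEARCH, SEARCH + 1):
        x = bx + dx
        if x < 0 or x + BLOCK > tgt.shape[1]:
            continue
        cand = tgt[by:by + BLOCK, x:x + BLOCK].astype(np.int32)
        sad = np.abs(block - cand).sum()  # one adder-tree pass in hardware
        if sad < best_sad:
            best_sad, best_dx = sad, dx
    return best_dx
```

A power-of-two block size is convenient in hardware because the 256 absolute differences reduce through a perfectly balanced adder tree.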