How to make a stereogram video
First, let’s start with a test. What do you see here (switch to 480p!):
Well, it’s not a broken antenna on my TV. Probably half of you are able to see the hidden part of it; for everyone else it’s just noise. What’s the trick? Simple: do you remember those random dot images, quite popular in the ’90s, where you had to (more or less) squint to see the 3D content? That’s exactly what you are seeing in the video! It’s me in 3D, moving my hands and a sheet of paper around.
Last week I stumbled upon these 3D images, called stereograms, and was curious whether this would also work as a 3D video instead of a still image. I wasn’t sure if the eye could follow the 3D impression, since the random pattern changes completely from frame to frame. But it works surprisingly well!
How did I create the video? Well, the first thing you need for this is 3D content. That could be synthetic, but that would be a bit boring. Luckily, I developed a smart stereo camera for my PhD which directly generates 3D data 🙂 You can see a photo of it here:
It’s a neat little device that does the stereo processing on board, so it delivers 3D data directly. Alternatively, you could also use something like a Kinect. That would even be the better choice, as it delivers many more 3D points than a stereo camera, since it is an active sensing device.
Here you see a color-coded version of the camera output: (Note that the quality in this case is really bad. I hadn’t had the chance to calibrate the camera, and it was also too dark for achieving good-quality 3D data…)
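If you want to produce such a color-coded depth visualization yourself, a minimal sketch could look like the following. This is not the tool I actually used; it's just a plain NumPy version that maps near pixels to red and far pixels to blue, and renders pixels without a depth measurement (assumed here to be encoded as 0) in black:

```python
import numpy as np

def colorize_depth(depth, invalid=0):
    """Color-code a raw depth image: near -> red, far -> blue.

    depth:   2D array of raw depth values.
    invalid: sentinel value for pixels without a measurement
             (an assumption; adapt to your sensor's convention).
    """
    d = depth.astype(float)
    valid = d != invalid
    lo, hi = d[valid].min(), d[valid].max()
    # Normalize valid depths to [0, 1]
    t = np.zeros_like(d)
    t[valid] = (d[valid] - lo) / max(hi - lo, 1e-9)
    rgb = np.zeros(d.shape + (3,), dtype=np.uint8)
    rgb[..., 0] = ((1 - t) * 255).astype(np.uint8)  # red channel: near
    rgb[..., 2] = (t * 255).astype(np.uint8)        # blue channel: far
    rgb[~valid] = 0  # no measurement -> black
    return rgb
```

Any real colormap (e.g. from matplotlib) would look nicer, but the idea is the same: normalize the depth range and map it through a color ramp.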
Once you have your 3D data ready, you can transform it into a stereogram, and the code to do that is quite compact. A good reference is the paper “Displaying 3D images: Algorithms for single-image random-dot stereograms” by Thimbleby et al., 1994. There is even code in that paper!
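To give a flavor of the algorithm, here is a minimal sketch of the core idea from that paper, rewritten in Python: for every pixel, the depth determines the separation between two screen pixels that must receive the same color, and after all constraints are collected, each row is filled with random dots that respect them. This simplified version omits the paper's hidden-surface removal step, and the parameter values (eye separation in pixels, depth-of-field factor `mu`) are assumptions you would tune for your screen and viewing distance:

```python
import numpy as np

def stereogram(depth, eye_sep=80, mu=1.0 / 3.0, rng=None):
    """Render a single-image random-dot stereogram from a depth map.

    depth:   2D array with values in [0, 1], where 1 is nearest to the viewer.
    eye_sep: assumed eye separation in pixels.
    mu:      depth-of-field factor (fraction of viewing distance).
    """
    rng = np.random.default_rng() if rng is None else rng
    h, w = depth.shape
    img = np.zeros((h, w), dtype=np.uint8)
    for y in range(h):
        # same[x] points to a pixel that must have the same color as x;
        # initially every pixel is only constrained to itself.
        same = np.arange(w)
        for x in range(w):
            # Stereo separation for this depth (Thimbleby et al., eq. for s)
            z = depth[y, x]
            s = int(round(eye_sep * (1 - mu * z) / (2 - mu * z)))
            left = x - s // 2
            right = left + s
            if 0 <= left and right < w:
                # Union the two constraint chains (roots are smallest indices)
                l = left
                while same[l] != l:
                    l = same[l]
                r = right
                while same[r] != r:
                    r = same[r]
                if l != r:
                    same[max(l, r)] = min(l, r)
        # Resolve: the root of each chain gets a random dot,
        # every other pixel copies its root's color.
        row = np.zeros(w, dtype=np.uint8)
        for x in range(w):
            l = x
            while same[l] != l:
                l = same[l]
            row[x] = rng.integers(0, 2) * 255 if l == x else row[l]
        img[y] = row
    return img
```

Run this on every depth frame of your video, and you get a stereogram video: the random dots differ completely between frames, but the depth constraints stay consistent, which is why the 3D impression survives.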
That’s all there is to it. I found it quite intriguing to be able to display 3D video on a 2D screen without any special tools like glasses. Unfortunately, I cannot publish the source code for this project, since I’ve written it inside an obscure framework that is not publicly available. But you’ll find everything you need in the paper I cited.