Visually appealing time-lapse videos

December 12, 2015

This post presents a Python script that automatically selects “fitting” images for a time-lapse video.

Update 12/13/2015: Some people asked for example videos. Unfortunately, I only have a single set of data that I cannot publish here. If someone would like to see the results of this script and can provide me with data (with several images per day, of course, so that there is something to select from), I would process it and present it here.

Most people who have created an outdoor time-lapse video will have encountered the problem of a flickering video, caused by sunny images followed by cloudy ones or vice versa. One common way around this problem is to shoot more than one image per day and then select the best-fitting images. But that can be quite a lot of work: if you shoot a picture every 30 minutes, you end up with close to 20,000 images per year, and selecting the right ones by hand would take a while.

Therefore, I wrote a simple script that selects the one image per day that fits “best” to the day before. The script then continues to the next day and finds the image that fits “best” compared to the previous day’s pick, and so on.
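To make this concrete, here is a minimal sketch of that greedy selection loop. The grouping of images into days and the similarity scoring function (lower score means more similar; a possible implementation follows below) are assumptions for illustration, not the actual interface of sel.py:

def select_images(days, similarity):
    # days: a list with one list of candidate images per day
    # similarity: scoring function, lower score = more similar
    selected = [days[0][0]]  # start image of the first day
    for candidates in days[1:]:
        # pick the candidate closest to the previous day's pick
        best = min(candidates, key=lambda img: similarity(selected[-1], img))
        selected.append(best)
    return selected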

The obvious question now is: what is the “best” image? As said before, the change in brightness compared to the previous image should be as small as possible, and the change in color should be small as well.

I used quite a hacky approach that is far from optimal but works well enough for me: the sum of absolute differences (SAD) over all pixels between the reference image and each candidate image. The SAD simply subtracts the candidate picture from the reference picture pixel by pixel; the absolute values of this difference image are then summed up into a single similarity score. The picture pair with the smallest score is considered the most similar. The SAD is computed for all three color channels separately, which also brings a simple kind of color comparison into the process.
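As a rough sketch of this scoring with NumPy (assuming the images are already loaded as arrays of shape (height, width, 3); this is not the exact code from sel.py):

import numpy as np

def sad_score(reference, candidate):
    # Cast to a signed type so the subtraction of uint8 pixel
    # values cannot wrap around
    diff = reference.astype(np.int32) - candidate.astype(np.int32)
    # One SAD per color channel, then added into a single score
    per_channel = np.abs(diff).sum(axis=(0, 1))
    return int(per_channel.sum())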
One important step that I have not yet mentioned is the preprocessing of the images. Taking the SAD of the raw camera images is not the best idea: the pixel values at a given position (x, y) in two images usually have little in common. There are several reasons for this:

– Sensor noise
– Small camera/scene movements (think of a moving leaf)

Ideally, these effects should not have a big influence on the similarity score. Therefore, I low-pass filter the images (i.e. blur them) before comparing them. This averaging removes noise as well as small movements, while the overall appearance, such as brightness and color, is preserved.
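For illustration, the preprocessing could look like this with Pillow; the blur radius of 8 pixels is an assumption, not a value taken from the script:

from PIL import Image, ImageFilter
import numpy as np

def load_blurred(path, radius=8):
    # Low-pass filter (blur) so that sensor noise and small
    # movements barely affect the SAD score
    img = Image.open(path).filter(ImageFilter.GaussianBlur(radius))
    return np.asarray(img, dtype=np.int32)

Combining the pieces, the similarity function from the selection sketch above would then simply be sad_score applied to two blurred arrays.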

You’ll find the code here.

An example of how to use it:

hg clone https://bitbucket.org/befi/timelapseselector/
cd timelapseselector
mkdir motion sel
cp /mnt/sdb1/*.jpg motion #change this accordingly
python sel.py

Now the script will run through the pictures in the folder “motion” and create symbolic links to the fitting images in the folder “sel”. You might want to adapt the parameters inside the script, such as the number of images per day and the start image of the first day.

