
TS100 Oscilloscope hack

This post describes my modification of a TS100 soldering iron that transforms it into an oscilloscope. The changes are mostly made in software.

Readers of my blog will know that I have a weakness for modifying things to serve a purpose they were never designed for. And admittedly, this project is one of the stupidest/nicest projects I’ve done recently. Why stupid? Well, I transformed a soldering iron into an oscilloscope… see for yourself:

As you can see in the video, you can use the soldering tip as your measurement probe. Coincidentally, a soldering iron already has a pretty good form factor for an oscilloscope. Here is a still picture of a UART waveform:


[Photo: the TS100 displaying a UART waveform]

Right now, the user interface is very minimalistic. The vertical axis can be scaled simply by pressing and holding one of the buttons and then turning the iron around its axis. In the same manner the horizontal axis can be scaled by tilting the iron up and down. Using this as the main interface makes the oscilloscope surprisingly usable.

Of course, the iron can be used as a normal iron or as an oscilloscope without any changes to the hardware. The oscilloscope is simply a new menu option.

What and how

Even without any changes the TS100 is a very nice portable soldering iron. I use it as my main soldering tool. But what makes it special is that it contains a powerful 32-bit ARM processor, a graphical OLED display, open source firmware and a publicly available schematic. And this makes it a very good target for this project.

The oscilloscope function requires a small change to the iron’s hardware. A single wire needs to be connected from the earth connection of the tip to one of the buttons. This “forwards” the voltage of the tip into an analog input of the ARM. The modification can be seen in the following image:


[Photo: the added wire connecting the tip’s earth connection to a button]

Using this direct connection is rather dangerous because every voltage on the tip directly reaches the ADC input pin of the ARM. For real-world use I would recommend adding a series resistor and a TVS protection diode in front of the ADC pin.

Limitations

Since I added no analog front-end circuit to the soldering iron, it is limited to what the ADC of the ARM is able to measure: 0 to +3.3V. This is of course very poor for an oscilloscope. A cheap modification would be to also add a resistor divider in front of the ADC to measure higher voltages. A 10:1 divider should be a good compromise: it would allow you to measure up to 33V, and since the ADC has a resolution of 12 bits you would still get a sufficient resolution of about 8mV/LSB.
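
As a quick sanity check of these numbers, here is a minimal sketch of the scaling math, assuming the proposed 10:1 divider, a 3.3V ADC reference and 12-bit conversions (none of this is part of the current firmware):

# Illustrative scaling math for the proposed 10:1 divider (not firmware code)
VREF = 3.3        # ADC reference voltage in volts
ADC_BITS = 12     # STM32 ADC resolution
DIVIDER = 10.0    # proposed input divider ratio

def adc_to_tip_voltage(counts):
    """Convert a raw ADC reading back to the voltage at the soldering tip."""
    v_pin = counts * VREF / (2 ** ADC_BITS - 1)  # voltage at the ADC pin
    return v_pin * DIVIDER                       # undo the 10:1 divider

print(adc_to_tip_voltage(4095))                  # full scale: ~33 V
print(DIVIDER * VREF / 2 ** ADC_BITS * 1000)     # resolution: ~8 mV per LSB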

Also, the software is not yet in usable shape. I’ve just hacked some stuff into the existing firmware. For reference, you can find the code here:

https://bitbucket.org/befi/ts100scilloscope

Do NOT use it on your production soldering iron. Currently, it does not let you use the iron for soldering, and the oscilloscope code is horribly wrong. Why did I start with such a crappy implementation? Well, it was intended as a demonstrator to check whether the idea is viable. Unfortunately, the original firmware would require rewrites of large portions to support high-frequency ADC sampling, and I am not really in the mood to invest that much effort in a code base of questionable quality.

Instead, I will start a clean-room implementation of the firmware. This will then also use the new UI interactions that I demoed for the oscilloscope. For example, all values like the target temperature will simply be settable by pressing and holding a button while turning the iron. Turning it in one direction will increase the temperature, turning it in the other direction will decrease it.

Also, forwarding the oscilloscope data over USB is an option. This would allow you to display the measurements on an Android device, for example.

A simple multimeter will of course also be added.

In the end, what specs could you expect from it? Roughly:

  • 0 to 30V input range
  • 2MSPS
  • >30Hz display update rate

Summary

In this post I’ve shown you a demonstrator of a soldering iron turned into an oscilloscope. The obvious question for this project is of course: Why??

Well, it probably won’t replace your bench oscilloscope. But having even the simplest kind of oscilloscope at hand while you are away from the bench can be a real lifesaver. Some examples of questions to which the soldering iron oscilloscope can give you an answer:

  • Is there a signal at all?
  • Does the timing of the signal look sensible?
  • Does the voltage look sensible?

And I would say that (at least in the digital domain) 80% of problems can be eliminated by having the answer to these questions. Things like:

  • I configured the UART transmitter wrong; it outputs its data on a different pin
  • I see a valid UART signal so the problem must be at the receiving end
  • Normally, the 3.3V I2C bus shouldn’t idle at 1.2V!? Did I forget to enable the pullup resistors?
  • Why is there no ACK bit from the I2C device? Am I sending data to the correct address?

 

Once there is a usable alpha version of the rewritten firmware available I will publish a link to it in my blog. Please subscribe if you want to be notified about this.

 

 


Wifibroadcast RPI FPV image v0.4

I have a small time slot until my New Year’s Eve party begins, so I thought I could use it to upload the newest Wifibroadcast RPI FPV image: https://github.com/befinitiv/rpi_wifibroadcast_image_builder/releases/tag/v0.4.

The changes are:

  • Merged TX and RX images. You can write this image file onto the SD cards of both the RX and TX Raspberries. The devices change their roles depending on whether a camera is connected or not. In short: a Raspberry with a camera behaves like a TX device, a Raspberry without a camera behaves like an RX device (see the sketch after this list).
  • Removed the message flood of the TX to the systemd journal to avoid growing log files. This allows for long-running TX devices.
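
For illustration, camera detection on Raspbian can be done with the stock vcgencmd tool; the following sketch only shows how such a role decision could look and is not the image’s actual startup logic:

# Illustrative role selection based on camera presence (not the image's actual startup code)
import subprocess

def camera_connected():
    # vcgencmd is a standard Raspbian tool; typical output: "supported=1 detected=1"
    out = subprocess.check_output(["vcgencmd", "get_camera"]).decode()
    fields = dict(item.split("=") for item in out.split())
    return fields.get("detected") == "1"

print("tx" if camera_connected() else "rx")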

 

The most important change is the first one: it means you only have to download one image instead of two. Also, there is no need to mark the SD cards as TX or RX since they are all the same. This makes things much easier.

I wish you a happy new year! Hopefully with lots of HD FPV fun 🙂

 

Visually appealing time-lapse videos

This post presents a Python script that automatically selects “fitting” images for a time-lapse video.

Update 12/13/2015: Some people asked for example videos. Unfortunately, I only have a single set of data that I cannot publish here. If someone would like to see results of this script and provide me with data (of course with several images per day so that there is something to select from) I would process it and present it here.

Most people who have created an outdoor time-lapse video will have encountered the problem of a flickering video caused by sunny images followed by cloudy images, or vice versa. One common way to get around this problem is to shoot more than one image per day and then select the best-fitting images. But this can be quite a lot of work. If you shoot a picture every 30 minutes you’ll end up with close to 20,000 images per year. And it would take you a while to select the right images.

Therefore, I wrote a simple script that selects, for each day, the image that fits “best” with the image chosen for the day before. The script then continues to the next day and again finds the “best” match compared to the “best” of the previous day.

The obvious question is now: what is the “best” image? As said before, it should contain as little change in brightness as possible. Also, the change in color should be small.

I used quite a hacky approach that is far from optimal but works well enough for me. I compute the sum of absolute differences (SAD) over all pixels between the reference image and each candidate image. The SAD simply subtracts each pixel of the reference picture from the corresponding pixel of the candidate picture. The absolute values of this difference image are then summed up to give a single similarity score. The picture pair with the smallest score is considered the most similar. This SAD is computed for all three color channels separately to also bring some simple kind of color comparison into the process.
One important step that I have not yet mentioned is the preprocessing of the images. Taking the SAD of the raw camera images is not the best idea. The pixel values at an exact position (x,y) in the two images usually have little in common. There are several reasons for this:

– Sensor noise
– Small camera/scene movements (think of a moving leaf)

Ideally, these effects should not have a big influence on the similarity scoring. Therefore, I low-pass filter the images (aka blur) before comparing them. This averaging removes noise as well as small movements. Still, the overall appearance like brightness and color is maintained.
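
Put together, the scoring looks roughly like the following sketch. This is not the actual sel.py from the repository; the function names, blur radius and folder handling are made up for illustration:

# Rough sketch of the blur + per-channel SAD scoring (not the actual sel.py)
from PIL import Image, ImageFilter
import numpy as np

def preprocess(path, blur_radius=10):
    # Low-pass filter (blur) to suppress sensor noise and small movements
    img = Image.open(path).convert("RGB").filter(ImageFilter.GaussianBlur(blur_radius))
    return np.asarray(img, dtype=np.int64)

def sad_score(ref, cand):
    # Sum of absolute differences over all pixels and all three color channels
    return int(np.abs(ref - cand).sum())

def select_chain(start_image, days):
    # days: list of lists of image paths, one inner list per day
    picks = [start_image]
    ref = preprocess(start_image)
    for candidates in days:
        best_score, best_path = min((sad_score(ref, preprocess(p)), p) for p in candidates)
        picks.append(best_path)
        ref = preprocess(best_path)
    return picks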

You’ll find the code here.

An example on how to use it:

hg clone https://bitbucket.org/befi/timelapseselector/
cd timelapseselector
mkdir motion sel
cp /mnt/sdb1/*.jpg motion #change this accordingly
python sel.py

Now the script will run through the pictures in the folder “motion” and create symbolic links to fitting images in the folder “sel”. You might want to adapt the parameters inside the script like the number of images per day and the start image of the first day.

Wifibroadcast RPI FPV images v0.3

The newest Wifibroadcast RPI FPV images are now available under https://github.com/befinitiv/rpi_wifibroadcast_image_builder/releases/tag/v0.3.

The changes are:

  • Support for 5GHz cards like the CSL-300 and TP-LINK TL-WDN3200. The images automatically detect the type of WiFi card (2.4GHz or 5GHz) and configure it appropriately.
  • Fixes the “Trouble injecting” bug. This occurred for some people on the TX side, mostly on 5GHz hardware.

 

Special thanks to Alexandre, Kieran and André who made this release possible with their support!


Commands to execute on Raspberries for bringing v0.2 images up to date (instead of downloading v0.3, requires Internet access on Raspberries):

cd
cd wifibroadcast
hg pull
make clean
make
cd
cd wifibroadcast_fpv_scripts
hg pull

Wifibroadcast RPI FPV images v0.2

Just a quick note: I released new images for Wifibroadcast RPI FPV. You’ll find them here: https://github.com/befinitiv/rpi_wifibroadcast_image_builder/releases/tag/v0.2

The changes are:

  • Init-scripts have been replaced by systemd services. For example, the TX service can now be stopped like this:
    sudo systemctl stop wbctxd
  • New wifibroadcast version: This one supports rx status information
  • New Frsky-OSD: The OSD is now enabled by default showing the signal strength of the receiving cards. If you prefer a plain camera image you can disable the OSD:
    sudo systemctl disable osd
  • Improved Raspberry 2 support for RX

Prebuilt Wifibroadcast Raspberry PI FPV images

This post gives an update to Wifibroadcast: Prebuilt images.

Since the beginning of Wifibroadcast the only method to try it was to manually install and compile the software. To make it easier for people to try out the system I now created prebuilt images. They can be found here: https://github.com/befinitiv/rpi_wifibroadcast_image_builder/releases

To use them you just have to write the images onto SD cards, prepare two Raspberry Pis with camera + TP-LINK TL-WN722 as TX and display + TP-LINK TL-WN722 as RX, and you are done.

I moved the manual installation procedure from the main Wifibroadcast page to here in case you want to install Wifibroadcast onto an existing Raspberry Pi image. Doing things yourself is also a good way to get to know the system better.

The images contain the basic features you would expect: video capture on the TX and video display on the RX. Automatic video recording onto USB sticks and support for a shutdown button are also included. The FrSky-OSD software is installed as well but disabled by default (since it depends a lot on the available hardware).

Automatic image creation

Since creating these images is quite time-consuming (and I am lazy…) I automated the whole process. This also helps me and others to understand afterwards exactly what an image contains. And of course, others can put their own tweaks into the build system and benefit from all the points above as well.

The following commands are all you need to do to create the TX and RX images:

hg clone https://bitbucket.org/befi/rpi_wifibroadcast_image_builder
cd rpi_wifibroadcast_image_builder
./build_images.sh

The build_images.sh script automatically downloads the needed bits such as the base Raspbian image, the build tools and the kernel. The kernel is patched, compiled and installed onto the base image. This is followed by chrooting into the image with the help of qemu (because the Raspberry Pi is an ARM architecture) and (natively) installing Wifibroadcast and co. All configuration, such as the network card settings, enabling the camera and the HDMI mode, is also set automatically.

Future plans

Currently, the image only supports 2.4GHz operation. I would like to extend the images to also support 5GHz Wifi sticks and choose the frequency automatically, depending on which sticks are connected. Unfortunately, I do not have compatible 5GHz Wifi sticks available so it is still unclear if and when this will happen.

Latency analysis of the raspberry camera

This post presents the results of an analysis looking for the causes of latency in a Raspberry wifibroadcast FPV setup.

The last thing a wifibroadcast FPV system would need to kill analog FPV is better latency. The robustness of the transmission using wifibroadcast is already better, the image quality certainly is, only the latency is still a bit higher.
This post will not present a solution and will also not measure the total system delay. It concentrates on the latencies in the TX Pi.

Test setup

To measure latency easily you need something to measure that is driven by the same clock as the measurement. I decided to go with a simple LED that is driven by a GPIO of the Raspberry. The LED is observed by the camera. This setup can be seen in the following figure:

[Photo: test setup with the LED placed in front of the Raspberry Pi camera]

I wrote a program that toggles the LED and outputs a timestamp delivered by gettimeofday() for each action. This lets me know the time of the LED’s actions relative to the Pi’s clock.
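
The original program is not shown here; the following is just a minimal Python sketch of the same idea (the GPIO pin number and toggle period are arbitrary example values):

# Minimal sketch of the LED toggling logger (illustrative, not the original program)
import time
import RPi.GPIO as GPIO

LED_PIN = 18  # example BCM pin number

GPIO.setmode(GPIO.BCM)
GPIO.setup(LED_PIN, GPIO.OUT)

state = False
try:
    while True:
        state = not state
        GPIO.output(LED_PIN, state)
        t = time.time()
        # Same columns as the log below: LED state, seconds, microseconds
        print("%s\t%d\t%d" % ("ON" if state else "OFF", int(t), int((t - int(t)) * 1e6)))
        time.sleep(0.25)  # the exact toggle period is not critical for the measurement
finally:
    GPIO.cleanup()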

Capture-to-h264 latency

The latency from image capture to an h264 frame comprises the capture and compression latencies. Since both happen hidden inside raspivid, they cannot be separated easily. And this is the number we more or less “have to live with” until Broadcom opens up the GPU drivers.

I measured the latency using a modified version of raspivid. The compression engine packs the h264 data into NAL units. Think of them as h264 images that are prefixed by a header (0x00000001) and then written image after image to form the video stream. The modification I added to raspivid was that each NAL unit receives a timestamp upon arrival. This timestamp is written into the h264 stream right before the NAL header.

I also wrote a program that reads my “timestamped” stream and converts it into single images tagged with their corresponding timestamps.
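
A sketch of such a reader is shown below; it only splits the stream and prints the timestamps, decoding the NAL payloads to images is left out. The byte layout of the timestamps is an assumption here: two 32-bit little-endian integers (seconds, microseconds) written directly before each 0x00000001 start code.

# Sketch of a reader for the timestamped stream (timestamp layout is assumed)
import struct
import sys

START = b"\x00\x00\x00\x01"
TS_SIZE = 8  # assumed: uint32 seconds + uint32 microseconds before each start code

def parse_timestamped_stream(data):
    cnt = 0
    pos = data.find(START)
    while pos >= TS_SIZE:
        sec, usec = struct.unpack_from("<II", data, pos - TS_SIZE)
        nxt = data.find(START, pos + len(START))
        end = len(data) if nxt == -1 else nxt - TS_SIZE
        nalu = data[pos + len(START):end]
        print("%d\t%d\tCNT: %d\tFound nalu of size %d" % (sec, usec, cnt, len(nalu)))
        cnt += 1
        pos = nxt

with open(sys.argv[1], "rb") as f:
    parse_timestamped_stream(f.read())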

For example, the LED toggling program gave me an output like this:

OFF	1441817299	404717
ON	1441817299	908483
OFF	1441817300	9102
ON	1441817300	509716
OFF	1441817300	610361
ON	1441817301	111039
OFF	1441817301	211695
ON	1441817301	717073
OFF	1441817301	817717
ON	1441817302	318342
OFF	1441817302	419034
ON	1441817302	919647
OFF	1441817303	20302
ON	1441817303	520965
OFF	1441817303	621692
ON	1441817304	122382
OFF	1441817304	223078
ON	1441817311	718719
OFF	1441817311	819685
ON	1441817312	320652
OFF	1441817312	421654

First column is the status of the LED, second column the seconds, third column the microseconds.

My h264 decoder then gave me something like this:

1441741267	946995	CNT: 0	Found nalu of size 950 (still 130063 bytes in buf)
1441741267	965983	CNT: 1	Found nalu of size 907 (still 129148 bytes in buf)
1441741267	983183	CNT: 2	Found nalu of size 1124 (still 128016 bytes in buf)
1441741268	3971	CNT: 3	Found nalu of size 1409 (still 126599 bytes in buf)
1441741268	27980	CNT: 4	Found nalu of size 3028 (still 123563 bytes in buf)
1441741268	51698	CNT: 5	Found nalu of size 7005 (still 116550 bytes in buf)
1441741268	68547	CNT: 6	Found nalu of size 9667 (still 106875 bytes in buf)
1441741268	89147	CNT: 7	Found nalu of size 10312 (still 96555 bytes in buf)
1441741268	109650	CNT: 8	Found nalu of size 19244 (still 77303 bytes in buf)
1441741268	138233	CNT: 9	Found nalu of size 19338 (still 57957 bytes in buf)
1441741268	160402	CNT: 10	Found nalu of size 31165 (still 26784 bytes in buf)
1441741268	172178	CNT: 11	Found nalu of size 19899 (still 6877 bytes in buf)
1441741268	195332	CNT: 12	Found nalu of size 25129 (still 105935 bytes in buf)
1441741268	213109	CNT: 13	Found nalu of size 24777 (still 81150 bytes in buf)
1441741268	236775	CNT: 14	Found nalu of size 24657 (still 56485 bytes in buf)
1441741268	259814	CNT: 15	Found nalu of size 24738 (still 31739 bytes in buf)
1441741268	274674	CNT: 16	Found nalu of size 24783 (still 6948 bytes in buf)
1441741268	300793	CNT: 17	Found nalu of size 24855 (still 106209 bytes in buf)
1441741268	314963	CNT: 18	Found nalu of size 18368 (still 87833 bytes in buf)
1441741268	339084	CNT: 19	Found nalu of size 17959 (still 69866 bytes in buf)
1441741268	365756	CNT: 20	Found nalu of size 17958 (still 51900 bytes in buf)

Where the first column represents the seconds and the second column represents the microseconds.

Since the LED was running at 2Hz and the camera at 48Hz I could directly relate an LED event to a specific video frame just by looking at the images (is the LED on or off?). This gave me two timestamps: the first of the LED event and the second of its capture.

The delay I got out of these was always in the range between 55ms and 75ms. The variation of 20ms makes sense since this is roughly our frame time: depending on whether I captured the LED at the beginning of the exposure (longer delay) or at the end (shorter delay), the times vary.

Camera settings were: 48FPS, -g 24, -b 6000000, -profile high

Capture latency

I was wondering: does the 55ms minimum latency come mostly from compression or from capturing? I looked for ways to capture directly and found a nice hack here: https://www.raspberrypi.org/forums/viewtopic.php?f=43&t=109137.
Here I did the same trick with the LED and was even lucky enough to capture this shot:

[Image: captured frame with the LED half on, half off]

Notice that the LED is half on, half off? This is due to the rolling shutter effect of the sensor. By accident I captured the LED right “in the middle” of it turning off. So the capture time of this shot is known to be exactly when the LED switched state (so here we do not have the 55-75ms jitter as in the case above). The delay between event and capture in this case was 38ms. Unfortunately, in this hacky mode the camera runs at 5Mpixel, so this is not exactly the 720p FPV scenario.

Compression latency

To validate my findings from above I also timestamped the hello_encode program included in Raspbian. This gave me a compression latency of 10ms for a single 720p frame.

Conclusion

Although my different measurements are not completely comparable to each other, I now have a rather clear view of the latencies:

Capture: 40ms
Compression: 10ms
FEC-Encoding+Transmission+Reception+FEC-Decoding+Display: Remaining ~50-100ms (to be confirmed)

One thing I did notice in my experiments with raspivid: it uses fwrite to output the h264 stream. Since this is usually buffered, I sometimes saw about 6KiB being stuck in the buffer. Now that size is far from a whole frame size, so it won’t cause a whole frame to be stuck inside the pipeline. But nonetheless, it will probably cause a small delay.

Another thing I noticed is that the NALU header (0x00000001) is written by raspivid at the beginning of each new h264 frame. Since the decoder needs to wait until the next header to know where a frame ends, this (unnecessarily) creates a latency of one frame.
Maybe a variant of wifibroadcast that is directly integrated into raspivid would make sense. It could treat each whole h264 frame as an atomic block, divide the frame into packets and transmit them. Right now the blocking is done at fixed intervals, which leads to the problem that additional data gets stuck in the pipeline due to a partly filled block. I still need some time to think it over, but there could be some potential here.
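
To make the idea a bit more concrete, here is a rough sketch of frame-aligned packetization; the packet size and header layout are made up, and this is not actual wifibroadcast code:

# Rough sketch of treating a whole h264 frame as an atomic block (illustrative only)
import struct

PAYLOAD_SIZE = 1024  # example payload size per radio packet

def packetize_frame(frame_no, frame):
    # Split one complete h264 frame into packets that can be sent out immediately,
    # so no data lingers in a partly filled fixed-size block
    chunks = [frame[i:i + PAYLOAD_SIZE] for i in range(0, len(frame), PAYLOAD_SIZE)]
    packets = []
    for idx, chunk in enumerate(chunks):
        # Example header: frame number, packet index within the frame, total packet count
        header = struct.pack("<III", frame_no, idx, len(chunks))
        packets.append(header + chunk)
    return packets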