
FPV in 4K

This post describes how to achieve 4K resolution with a wifibroadcast-based FPV system using H265 encoding. The improved encoding can also be used to get better image quality or reduce bitrate at lower resolutions.


Note: This writeup will not give you a complete image that you can burn onto your SD card. It shows a way that makes 4K FPV possible. Also, due to the image sensor used, the video is not in the traditional 16:9 format but in 4:3.

Most of the readers of my blog will know the wifibroadcast system that I developed four years ago. Unfortunately, due to lack of time I was not able to continue working on wifibroadcast. But the community came to my rescue: Many others have taken over the idea and created incredible images based on the original wifibroadcast. They extended it with so much functionality, broader hardware support, etc. I was really happy to see what they made out of the project.

So as a small disclaimer: This post does not mean that I will pick development on wifibroadcast back up. It is just a small hint to the community about new potential for the project. This potential follows, of course, the wifibroadcast philosophy: to use cheap, readily available parts and to make something cool out of them 🙂

The choice of single board computer for wifibroadcast

Why did I choose the Raspberry PI for wifibroadcast? Obviously, due to its low price and popularity? Wrong. This was just an extra bonus in the Wifibroadcast soup. The main motivation was that it had a well-supported video encoder on board. Video encoding is one of the key ingredients required for wifibroadcast. It is not important whether you use a MIPS or ARM architecture, a USB or Raspberry camera, Board A or Board B… the only things you really need are a video encoder and a USB 2.0 interface.

And in terms of video encoding, the Raspberry PI sets the bar rather high. The encoder is so well integrated that it just works out of the box. It has gstreamer support if you want to build your own video pipeline, and if not, the camera app directly delivers compressed video data. It cannot get any easier.

Since the original wifibroadcast, I have always been on the lookout for another SBC with a well-supported video encoder, always with the hope of lowering the latency. Unfortunately, my latest find does not seem to deliver improvements in that area. Instead, it improves in terms of image resolution and coding efficiency by using H265 instead of H264 (the Raspberry PI does not support H265 and also cannot go beyond 1920×1080 resolution).

NVIDIA Jetson Nano

Some weeks ago, NVIDIA announced the Jetson Nano, a board targeted towards makers with a rather low price tag of $99. The following features caught my attention:

  • Raspberry PI camera connector
  • Well-supported H264 and H265 encoding (through gstreamer)

I could not resist and bought one of these boards. Spoiler: It does not seem to improve in terms of latency but definitely in terms of resolution and bit-rate.

Other nice features of this board are the improved processor (quad-core A57 compared to the RPI's A53), improved RAM (4GB DDR4 vs 1GB DDR2), and an improved GPU (128 cores vs 4?).

Camera modes

The Jetson Nano supports only the Raspberry camera V2. With this camera, it provides the following camera modes (the ones most useful for Wifibroadcast):

  • 4K (3280 x 2464) at 21fps
  • FHD (1920 x 1080) at 30fps
  • HD (1280 x 720) at 120fps

In theory, it should also support other frame rates, but I could not manage to change this setting. Then again, I did not really try hard, so it is probably my fault.

Latency measurements

I did some very quick latency measurements. The setup that I used was my laptop with gstreamer and a wired Ethernet connection between the laptop and the Jetson. The reason for a wired connection instead of Wifibroadcast is that my flat is heavily polluted on all wifi bands. If I had used a Wifibroadcast transmission, the latency from the encoding and from the transmission in the polluted environment would have mixed together. Using an Ethernet connection was a simple way of isolating the encoding latency, since that was all I was interested in.

The measurement setup was rather crude: I used the TestUFO page and took a picture of the setup to measure the time difference, as shown here (right: “live view”, left: “transmitted view”):

This setup of course introduced a couple of additional variables: the screen update and decoding latency of my laptop, and the update speed of my smartphone displaying the TestUFO page. Not quite ideal… but good enough for a first impression.

With this setup, I determined the following latencies (only one measurement point per setting):

H265:

  • 4K 21fps: 210ms
  • FHD 30fps: 150ms
  • HD 120fps: 140ms

H264:

  • 4K 21fps: 170ms
  • FHD 30fps: 160ms
  • HD 120fps: 160ms

Interpretation of latencies

One thing to note is that the CPU of my laptop was smoking quite a bit while decoding H265 or the high frame rates. So in the case of 4K H265, I would definitely expect improvements if you used a proper decoding device (like a smartphone or even a second “decoding Jetson”).

Otherwise, I would say that the latencies are definitely usable for FPV scenarios. I think it would be worth investing more work into the Jetson Nano for Wifibroadcast.

Command lines

Receiver: gst-launch-1.0 -v tcpserversrc host=192.168.2.1 port=5000 ! h265parse ! avdec_h265 ! xvimagesink sync=false

Transmitter: gst-launch-1.0 nvarguscamerasrc ! 'video/x-raw(memory:NVMM),width=3280, height=2464, framerate=21/1, format=NV12' ! omxh265enc bitrate=8000000 iframeinterval=40 ! video/x-h265,stream-format=byte-stream ! tcpclientsink host=192.168.2.1 port=5000

Some remarks: To use H264, all occurrences of H265 need to be replaced in the command lines. To change the resolution, the resolution parameters of the TX command need to be adapted. Lowering the resolution results in a cropped image, meaning a smaller field of view.
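
As an example, this is what the FHD H264 variant might look like after those substitutions (an untested sketch, same hosts and ports as above):

Transmitter: gst-launch-1.0 nvarguscamerasrc ! 'video/x-raw(memory:NVMM),width=1920, height=1080, framerate=30/1, format=NV12' ! omxh264enc bitrate=8000000 iframeinterval=40 ! video/x-h264,stream-format=byte-stream ! tcpclientsink host=192.168.2.1 port=5000

Receiver: gst-launch-1.0 -v tcpserversrc host=192.168.2.1 port=5000 ! h264parse ! avdec_h264 ! xvimagesink sync=false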

Quality comparison

One important question is of course: What do you gain from 4K+H265? I extracted some stills out of the video streams to compare things:

The difference in quality gets quite clear if you switch back and forth between h264 and h265 (note: only 256 colors due to gif format):

The bad results from H264 are expected: Even at high bitrates, the format is not really intended for 4K resolution. And Wifibroadcast usually runs at even lower than usual bitrates (all recordings in this post use the Wifibroadcast-typical 8Mbit/s).

Sample recordings

Here you can find some raw sample recordings. They have been created with the following command line (resolution and codec changed accordingly):

gst-launch-1.0 nvarguscamerasrc ! 'video/x-raw(memory:NVMM),width=3280, height=2464, framerate=21/1, format=NV12' ! omxh265enc bitrate=8000000 iframeinterval=40 ! video/x-h265,stream-format=byte-stream ! filesink location=/tmp/4k.h265
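
To view such a raw recording locally, a simple playback pipeline can be used (a sketch, assuming the gstreamer libav plugins are installed on the playback machine):

gst-launch-1.0 filesrc location=/tmp/4k.h265 ! h265parse ! avdec_h265 ! autovideosink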

https://www.file-upload.net/download-13588131/4k_h264_static.mp4.html
https://www.file-upload.net/download-13588129/fhd_h264_static.mp4.html
https://www.file-upload.net/download-13588130/hd_h264_static.mp4.htm

https://www.file-upload.net/download-13588134/hd_h265_static.mp4.html
https://www.file-upload.net/download-13588133/fhd_h265_static.mp4.html
https://www.file-upload.net/download-13588135/4k_h265_static.mp4.html

https://www.file-upload.net/download-13588139/4k_h265_moving.mp4.html
https://www.file-upload.net/download-13588141/4k_h264_moving.mp4.html

(Please excuse the use of such a shady file hosting site…)

Summary

In summary I can say: Yes, the Jetson Nano is a useful SBC for a Wifibroadcast transmitter. It is also very likely a good candidate for a Wifibroadcast receiver: plenty of processor power plus a (compared to the Raspberry) capable GPU to draw even the fanciest OSD.

The weight and size might be a bit too high for certain applications. But there are options here as well. The heat sink (being the heaviest part of the system) could easily be replaced with something lighter (since you have lots of airflow on a drone). Also, the Nano is in fact a System-on-Module, meaning the whole system is contained on a small PCB the size of a Raspberry PI Nano. The mainboard of the Nano is mostly just a breakout board for connectors. A custom board or even a soldering artwork of wires might turn the Nano into a very compact, very powerful system.

Also, the 4K resolution + H265 seems to improve Wifibroadcast quality quite a bit. Together with a suitable display device (a high-resolution smartphone, for example) this has the potential to boost the video quality to a completely new level.

I really hope that the hint from this post will be picked up by somebody from the community. My gut feeling says that there is definitely potential in this platform and that the crude latency measurements from this post can be improved quite a bit (by using a different receiving unit and/or parameter tuning).

If someone has something they would like to share, please add a link in the comments. I will then happily integrate it into this post.


Teletype ED1000 signal generation with PC sound card

This post describes the generation of a teletype signal with nothing but a PC and a Python script.

Over the Christmas holidays I restored a SEL Lo2000 teletype from 1978. The device had been sitting in the attic for around 25 years and was in pretty bad shape. The mechanics of the printer were worn out, leading to colliding and bent types, the electronics were non-functional (broken inductor in the clock signal path), and the punch for the punched paper tape was not working (dried-out lubricants, misaligned mechanical parts, …).

After putting everything back together I wanted to use the device. But… the network for it was switched off in 2007. Luckily, there is an alternative to it called “i-telex”. The project provides hardware for interfacing the teletypes and software for creating a virtual teletype network over the internet. Very nice.

However, by the time the hardware would have arrived, my Christmas holidays would be long over. So I had to find a quicker and simpler way. And this is what this post is about.

Transmission line standards

The early teletypes used a current loop to signal the data. In this TW39 standard, the line provided 40mA for a ‘1’ and 0mA for a ‘0’. The teletype could use this current to actuate an electromagnet for reception, or interrupt the current in a certain pattern for transmission. So this standard was quite a good fit for the early mechanical teletypes. However, the standard was sub-optimal for the communication network. For one, it required two lines (current in & current out). In addition to that, the communication also required DC coupling. These two facts mean that you are always tied to a special teletype network that is completely separate from the telephone network; you cannot use this signaling over telephone lines.

To overcome the problems of the TW39 standard, Siemens introduced the ED1000 standard (which is quite similar to V.21). In this standard, a ‘1’ is represented by a 700Hz sine wave and a ‘0’ by a 500Hz sine wave (in the network->teletype direction). This signaling removes all the disadvantages of the TW39 standard and also allows full-duplex operation (since the teletype->network direction uses different frequencies).
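
Just to illustrate the idea of this FSK signaling (a minimal sketch, not the actual ed1000.py implementation; it assumes numpy, a 48kHz sample rate and the classic 50 baud teletype speed), generating the audio samples for a string of bits could look roughly like this:

import numpy as np

SAMPLE_RATE = 48000        # assumed sound card sample rate
BAUD = 50                  # classic teletype speed
F_ONE, F_ZERO = 700, 500   # ED1000 frequencies (network -> teletype)

def fsk_samples(bits):
    """Generate phase-continuous FSK audio for a string of '0'/'1' characters."""
    samples_per_bit = SAMPLE_RATE // BAUD
    phase = 0.0
    out = []
    for b in bits:
        f = F_ONE if b == '1' else F_ZERO
        for _ in range(samples_per_bit):
            out.append(np.sin(phase))
            phase += 2 * np.pi * f / SAMPLE_RATE
    return np.array(out)

# Example: a start bit, five CCITT2 data bits and a stop bit;
# feed the resulting samples to the sound card with a library of your choice.
audio = fsk_samples('0' + '10101' + '1')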

ED1000 generation using Python + sound card

So… ED1000 uses 500 and 700Hz tones? Well, that’s what sound cards are made for 🙂

I developed a Python script that generates these tones and modulates them based on the input. It supports all characters defined in the CCITT2 alphabet, with automatic switching between figures and letters. You can either pipe text files into the program or just start it standalone and directly type into it. To get it:

git clone https://befi@bitbucket.org/befi/ed1000.git

Then you can simply start it, type something and listen to the modulated output:

python3 ed1000.py

Piping a text file into the program works as well:

cat mytext.txt | python3 ed1000.py

The connection to the teletype can be made via a simple adapter cable from a 3.5mm jack to the connector of the teletype (in my case Ado8 pins 1 and 4; pins 5 and 6 need to be bridged as well). Remember to turn the volume all the way up to get the maximum signal swing at the output.

The refurbished teletype as well as the ED1000 script can be seen in the following video:

Outlook

So far the script can only transmit data. It would also be nice to be able to receive data from the teletype over the microphone input of a sound card. With that functionality, the final goal would be to connect the script to the i-telex network to realize a fully working teletype.

Smartpen with 128×64 RGB POV display

This post describes a Smartpen with a 128×64 RGB POV display embedded into the pen’s clip.

Sometimes while sitting in a meeting, I see people staring for 30s or more at their watch. To me this always looks a bit like a first grader learning how to read the clock. So you can guess by now that I am not a big fan of Smartwatches.
But still, the ability to check for messages without looking at your Smartphone seems to be desirable, especially while sitting in a meeting.
This motivated me to think of alternatives to the Smartwatch. Of course, it should be an everyday object that is improved by some “smartness”. Thinking again of the meeting situation, the only other typical object I could think of would be a pen.
Of course, the surface of a pen is a bit too small to add a high resolution display to it. And in addition, this would be quite boring. Therefore, I wanted to add a high resolution POV display by embedding a line of LEDs into the clip of the pen.
Sometimes, I just wiggle normal pens between my fingers. With the addition of the LED line in the clip I would then be able to display a high resolution image. And that is my idea of a Smartpen.

Here you can see a prototype of my Smartpen:

The pen contains 192 LEDs that form a line of 64 RGB dots in the clip of the pen. This creates a display with a density of 101 PPI, which is extremely high for a POV display. In fact, during my research I have never seen a POV display with a similar pixel density. The highest I found had 40 PPI monochrome. The following picture shows an image that is displayed by the pen (real world size: 20x20mm):

This display together with a Bluetooth module that connects to your Smartphone would then create the Smartpen that could show you new messages, emails, etc.

As you can imagine, embedding almost 200 individually addressable LEDs into a pen, as well as achieving a pixel density of 101 PPI, is quite challenging. This post will describe how I addressed these challenges.

Please note that my Smartpen is more a proof-of-concept than a usable device. Several issues, together with a lack of time, have stopped me from finishing the project. At the end of this post I will explain these issues in more detail.

LEDs

I explored many options for the type of LEDs in this project. It was quite clear from the beginning that 200 discrete LEDs are not an option due to limited space. Also, the standard “smart LED” WS2812 with its 5x5mm footprint is way too big for this project. In addition to that, the chip of the WS2812 uses a single-wire protocol that is more complicated to use than SPI and is also much slower to update. Update speed is quite crucial in this project. With an image width of 128px and an update rate of 15fps, each LED would need to be updated at a rate of 2kHz.

After a bit of searching I was quite happy to find the APA102-2020. This RGB LED comes in a tiny 2x2mm package and has a normal SPI bus, so it was perfect for my application. That’s why I bought a couple hundred of them.

But… when they arrived I noticed something odd. They worked as expected, but there was one tiny thing that was not according to the datasheet: the PWM frequency of the LEDs. Instead of the 20kHz mentioned in the datasheet, it was more in the lower kHz region. The following image shows this problem. In that picture, I set the green LED to half intensity (constantly on) and moved it quickly:

As you can see, the LED shows a dotted line instead of a solid one. And the dot pattern is exactly the PWM pattern. This is of course very annoying because it means that the display will have this dot pattern for all color values that are not 0xff.

After a bit of research I found that the manufacturer of the LEDs has at some point replaced the driver chip inside the LED. The old one has the fantastic 20kHz PWM, the new one has the shitty PWM. It is quite easy to find out which driver your LEDs have: the older driver chips are larger. You can see this in the following picture (left: new, right: old):

Since I already bought hundreds of LEDs I continued the project nonetheless: I designed a small PCB that daisy-chains 16 of these LEDs:

The PCBs can be daisy-chained as well:

One of these LED stripes already showed its suitability as a POV display:

In the “hearts” image you can see a typical problem of POV displays: the spacing between the individual dots due to the spacing of the individual LEDs. Because of that, POV displays typically have this “lineish” appearance – not very pretty. The next section describes how I fixed this.

The clip of the pen

So, how can you fit 64 LEDs of size 2mm into the clip of a pen that is only 30mm long? And how do you avoid this “lineish” appearance of POV displays? The answer is simple: Fiber optics!

Fiber optics take the light of the LEDs and route it through the pen to the clip. On the clip, each of the tiny 250um fiber optics is one dot of the display.

First, I 3D-printed a holder for four LED PCBs out of flexible filament:

The paper stripes on the left are templates that I glued onto the 3D print. After that I used a needle to punch a hole into the 3D printed part for each fiber. Then it was very easy to put a fiber through the hole and glue it onto the 3D part:

This assembly can be rolled together and the fibers then exit through the clip of the pen:

If you look closely at the picture above, you can see the fibers in the center of the clip glowing.

The following video shows the display unit in action:

If you look closely you can see that the LEDs and the fibers on the clip are not in the same order. I tried several times to align the fibers in the right order at the clip, but I always failed. Since you have to move the fibers very close together, they have a tendency to jump over other fibers and screw up the order. Therefore I just glued them in ignoring the order and sorted things out by reordering them in software.

Display unit + IMU + Raspi

The last bit needed to use the display unit as a POV display is a way to determine the position of the clip. For this purpose, I used an MPU6050 IMU. You can see the IMU on a breakout board glued onto the pen in the first picture of this post. Actually, the 6-axis IMU is a bit overkill: for the display application I only used one axis of the gyroscope. While wiggling the pen between your fingers, you can easily detect the turning points by looking for high angular accelerations.

All the software needs to do is start displaying the image column by column when it detects the left turning point. Upon detection of the right turning point, the image is displayed in reverse order, again column by column.

I implemented the software on a Raspi since this allows for much faster development compared to a microcontroller. All you need to do is connect the IMU to the I2C bus and the display unit to the SPI bus of the Raspi.
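
The core loop is quite simple. Here is a rough sketch of it (not the code from the repository; it assumes the smbus and spidev Python libraries, an MPU6050 at its default I2C address 0x68, and hypothetical image data that is already encoded as APA102 frames in fiber order):

import time
import smbus    # I2C access to the MPU6050
import spidev   # SPI access to the APA102 chain

MPU_ADDR = 0x68                          # default MPU6050 address (assumption)
bus = smbus.SMBus(1)
bus.write_byte_data(MPU_ADDR, 0x6B, 0)   # wake up the MPU6050 (PWR_MGMT_1)

spi = spidev.SpiDev()
spi.open(0, 0)
spi.max_speed_hz = 8000000

# Hypothetical image data: 128 columns, each already encoded as a complete
# APA102 frame (4-byte start frame, 64 LED frames in fiber order, end frame).
COLUMN = bytes(4) + bytes([0xE1, 0x00, 0x00, 0x10] * 64) + bytes([0xFF] * 8)
IMAGE = [COLUMN for _ in range(128)]

def gyro_z():
    """Read the signed 16-bit Z-axis angular rate from the gyroscope."""
    hi, lo = bus.read_i2c_block_data(MPU_ADDR, 0x47, 2)   # GYRO_ZOUT_H/L
    val = (hi << 8) | lo
    return val - 65536 if val & 0x8000 else val

prev_rate = gyro_z()
while True:
    rate = gyro_z()
    # At a turning point the angular rate changes sign: the pen reverses.
    if (prev_rate < 0) != (rate < 0):
        columns = IMAGE if rate >= 0 else IMAGE[::-1]
        for col in columns:
            spi.xfer2(list(col))   # push one display column to the LEDs
            time.sleep(0.0002)     # crude fixed column timing
    prev_rate = rate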

I created the images to display on the pen using Gimp and exported them as a C header (yes, Gimp can do this directly, a very handy feature!).

As usual, you can find the source code, the Kicad files and the SCAD files for the mechanics in a Bitbucket repository:

git clone https://bitbucket.org/befi/pen15

Please note that the repository contains a later version of the mechanics compared to the pictures in this post. In the new version, the LED PCB holders are split into two parts (one holding the PCB, one holding the fibers) that are glued together. The motivation for this was to be able to polish the fibers after gluing them in, for a more consistent brightness between individual fibers.

Open issues

LED PWM frequency

As mentioned above, the PWM frequency of the APA102-2020 is too low for this application. You can see this especially in the eggplant image above. The body of the original eggplant is filled with a solid color. Since this color is different from 0xff, we get this strange dotted pattern – not very nice.

(Side note to the image: There is a dead fiber at the top of the font. It broke during some reworks on the mechanics)

There are basically only two ways around this: either limit yourself to 8 colors (using either 0x00 or 0xff per channel) or use different LEDs. As mentioned before, there are (most likely earlier) variants of the APA102-2020 that do not have this PWM issue. However, I was unable to obtain them. I ordered my LEDs from Digikey and Adafruit and always received the flawed LEDs.

Crosstalk

Crosstalk happens when the light of an LED is picked up by one of the neighboring fibers. You can also see the effect in the eggplant image, especially at the font: the “shadow” effect seen at the bottom comes from crosstalk.

Working around this issue should not be too hard. In fact, the hardware files in the latest revision of the project's repository should already help a lot. Instead of white flexible filament, they use normal black PLA. This should remove much of the crosstalk created by the translucence and reflectivity of white PLA.

Inconsistent brightness

Keeping the brightness constant between the fibers is quite tricky. You can see the effect in the font of the eggplant image: some lines are brighter than others. To get the brightness right, the placement (in all 3 axes) relative to the LED as well as the orientation of two of the angles of the fiber needs to be quite good. Also, the end of the fiber needs to be polished so that you get equal light transmission. I think (especially with the latest hardware revision that allows polishing the fibers after they are glued in) this issue is solvable… but tedious.

Heat

Do not underestimate the heat that is generated by 200 LEDs! I learned this the hard way. Apparently, the LEDs do not have a proper reset circuit, so when they are powered on, the LEDs show random values. I once turned the system on without starting the software (which resets them) and left it running for a couple of minutes. The result was that my 3D printed parts were melted.

I think if you take care that the LEDs are not constantly on, the heat generation is manageable. But better add a watchdog for final usage 🙂

Future extensions

Bluetooth

Obviously, a Smartpen without any way of communication wouldn’t be that smart. It could maybe show the time, but that would be it. Therefore, my plan was to use a Bluetooth Low Energy SoC like the Nordic nRF51822. Using BLE could allow the pen to stay paired with a smartphone for years on a single battery.

Handwriting recognition from IMU data

A nice addition to the Smartpen would be a way to add interaction. Simple interaction might be realized by using the IMU to send basic commands. However, more complex interactions might need more powerful approaches.

One thing I would be curious to try out would be the addition of a pressure sensor (pressure of the pen onto the paper). I could imagine that by combining pressure readings with IMU readings one could realize a rudimentary handwriting recognition. It would indeed be a good task to throw some Tensorflow magic at 🙂

Interaction using handwriting recognition

Once you are able to recognize written characters, you could also use that power as a way of interaction. You could, for example, use arrows to navigate through menus.

Summary

The intention behind this post was to share my idea of a Smartpen and communicate my approach to it, including the difficulties I faced. Like I said before, there is still a lot of work to be done until it is a useful device. I very much hope that someone will pick up the project from here. In case you have any additional questions, please drop a comment here and I will try to answer them.

Wireless UART with nothing but a microcontroller

This post describes how to add a wireless UART transmitter to STM32 microcontrollers. This works with zero additional parts since the RF signal is generated directly by the STM32. A fitting receiver realized with an RTL software defined radio is shown as well.

Currently I am working on a project that is very constrained in terms of physical size and cost. In that project I need to send a temperature wirelessly over a short distance. My normal approach would be to use something like an nRF24L01 together with a microcontroller and a temperature sensor. However, this was not possible since there was neither the space nor the “money” for that solution. Microcontrollers with integrated radios were also not possible due to their price. Therefore I thought about alternatives and found this: http://mightydevices.com/?p=164 . In that post the author used an STM32 to transmit Morse code to an FM radio. Since in this approach the STM sends at 100MHz, the question of whether it is legal is easy to answer… But I liked the general idea: just route the STM's PLL to the MCO output, attach a wire to it and you are done with your transmitter.

I used the very same idea but tweaked the output frequency to 27MHz, which is an ISM band (so it is legal to transmit there). Instead of sending audio I simply send data like on a serial port using ASK modulation (meaning: a one bit turns the transmitter on, a zero bit turns it off), hence the name of the project: WUART (wireless UART).

This not only saves money and board space but can also be a very handy debug interface while developing software on the STM32. No need to attach any wires, UARTs, FTDIs and whatnot. Just receive your debug output wirelessly 🙂

Transmitter

I implemented a simple piece of software that reads the internal temperature sensor of the STM32 as well as VDD and prints these numbers in a human-readable format over the WUART twice a second.
The code is designed to run on an STM32F051R8 (like on the STM32F0 discovery board) and outputs a frequency that is 4 times the HSE frequency. So ideally, the HSE clock should be 6.75MHz to achieve a transmission at 27MHz.

To build the project:

sudo apt-get install gcc-arm-none-eabi gdb-arm-none-eabi binutils-arm-none-eabi libnewlib-arm-none-eabi
hg clone https://bitbucket.org/befi/wuart
cd wuart/stm32/libopencm3
make
cd ../projects/wuart
make

Please note that we are outputting a square wave onto the MCO pin. This means that we will create lots of harmonics. It is advised to only use this in a shielded environment.

Receiver

As a receiver, any RTL SDR USB dongle can be used that supports reception of the frequency transmitted by the STM32 (check https://www.rtl-sdr.com/about-rtl-sdr/ for models and their frequency ranges).

I’ve created the following GRC graph to receive the data (also included in the repository):

Explained from left to right:

  • RTL-SDR source: This module is the driver of the RTL stick. It delivers the raw IQ samples at a rate of 2M. You need to adapt the frequency here depending on the frequency of your transmitter
  • Frequency Xlating FIR filter: This module allows you to shift the frequency of the signal in software (instead of reconfiguring the RTL hardware). Use this to fine-tune the frequency
  • Low Pass Filter: This module performs low-pass filtering followed by decimation by a factor of 10. The decimation helps to remove unnecessary system load
  • Complex to Mag: This module converts the complex IQ samples to a real number that can be interpreted as signal strength
  • RMS, Divide and Threshold: These modules perform the ASK decoding with an adaptive threshold. It looks complicated but is quite simple:
    • The RMS module calculates an averaged RMS. This is used to determine the noise level of your environment (the noise that is present when the transmitter is not transmitting)
    • The divide module calculates the ratio of the currently received signal and the averaged background level determined by the RMS module
    • The threshold module looks at the ratio of current signal and averaged background level. If the current signal is 3x higher than the background noise, we interpret the signal as a ‘1’, otherwise as a ‘0’
  • The thresholded ‘bits’ are then output to STDOUT for further processing. Since we are generating these bits at a rate of 200kHz, we will get many of them per actual payload bit (since the transmitter sends the payload bits at a much lower rate)


The stream of bits that falls out of the GNU Radio flowgraph still needs to be “framed” into bytes. For this purpose, the repository of this project also contains framing software in the subdirectory “dec”. The dec software receives the oversampled raw bits from GNU Radio via STDIN (so GNU Radio and dec can be piped together) and transforms them into actual payload bits. This works simply by sampling the raw bit that is expected to be in the center of a payload bit. The payload bits are then grouped into bytes and output to STDOUT.
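
The principle can be sketched in a few lines of Python (not the actual dec implementation; it assumes one byte per raw sample on STDIN with the values 0 or 1, a raw sample rate of 200kHz, a payload rate of 600 baud and standard 8N1 UART framing with an idle-high line — adjust these to whatever the transmitter really uses):

import sys

SAMPLE_RATE = 200000             # raw bits per second coming from GNU Radio
BAUD = 600                       # assumed payload bit rate of the transmitter
SPB = SAMPLE_RATE // BAUD        # raw samples per payload bit

raw = sys.stdin.buffer.read()    # assumption: one byte (0 or 1) per raw sample

i = 0
while i + 10 * SPB < len(raw):
    # Look for a falling edge: idle ('1') followed by the start bit ('0').
    if raw[i] and not raw[i + 1]:
        # Sample all 10 UART bits (start, 8 data, stop) in their centers.
        bits = [raw[i + 1 + SPB // 2 + n * SPB] for n in range(10)]
        if bits[0] == 0 and bits[9] == 1:                         # frame valid?
            byte = sum(b << n for n, b in enumerate(bits[1:9]))   # LSB first
            sys.stdout.write(chr(byte))
        i += 10 * SPB
    else:
        i += 1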

To run everything, simply pipe the python file generated by GRC into the dec program.
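
With hypothetical file names (the actual names depend on what the GRC flowgraph was saved as and how dec is built), this boils down to something like:

python wuart_rx.py | ./dec/dec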

If everything works as expected, you should see the temperature and voltage measurement strings at the output of your terminal.

Please note that the GNU Radio program might need some adaptations regarding the frequency and thresholding. It therefore also includes debugging tools to tap into the intermediate results.


TS100 Oscilloscope hack

This post describes my modification of a TS100 soldering iron that transforms it into an oscilloscope. The changes are mostly made in software.

Readers of my blog will know that I have a weakness for modifying things to serve a purpose they were never designed for. And admittedly, this project is one of the most stupid/nicest projects I’ve done recently. Why stupid? Well, I transformed a soldering iron into an oscilloscope… see for yourself:

As you can see in the video, you can use the soldering tip as your measurement probe. Coincidentally, a soldering iron already has a pretty good form factor for an oscilloscope. Here is a still picture of a UART waveform:



Right now, the user interface is very minimalistic. The vertical axis can be scaled simply by pressing and holding one of the buttons and then turning the iron around its axis. In the same manner the horizontal axis can be scaled by tilting the iron up and down. Using this as the main interface makes the oscilloscope surprisingly usable.

Of course, the iron can be used as a normal iron or as an oscilloscope without any changes to the hardware. The oscilloscope is simply a new menu option.

What and how

Even without any changes the TS100 is a very nice portable soldering iron. I use it as my main soldering tool. But what makes it special is that it contains a powerful 32-bit ARM processor, a graphical OLED display, open source firmware and a publicly available schematic. And this makes it a very good target for this project.

The oscilloscope function requires a small change to the iron's hardware. A single wire needs to be connected from the earth connection of the tip to one of the buttons. This “forwards” the voltage of the tip into an analog input of the ARM. The modification can be seen in the following image:



Using this direct connection is rather dangerous because every voltage on the tip directly reaches the ADC input pin of the ARM. For real-world use I would recommend adding a resistor in series and a TVS protection diode before the ADC pin.

Limitations

Since I’ve added no analog front end circuit to the soldering iron, it is limited to what the ADC of the ARM is able to measure: 0 to +3.3V. This is of course very poor for an oscilloscope. A cheap modification would be to also add a resistor divider in front of the ADC to be able to measure higher voltages. A 10:1 divider should be a good compromise: it allows you to measure up to 33V, and since the ADC has a resolution of 12 bits you would still have a sufficient resolution of about 8mV/LSB.

Also, the software is still not in a usable shape. I’ve just hacked some stuff into the existing firmware. For reference, you can find the code here:

https://bitbucket.org/befi/ts100scilloscope

Do NOT use it on your production soldering iron. Currently, it does not let you use the soldering iron, and the oscilloscope code is horribly wrong. Why did I start with such a crappy implementation? Well, it was meant as a demonstrator to check if the idea is viable. Unfortunately, the original firmware would require rewriting large portions of it to support high-frequency ADC sampling, and I am not really in the mood to invest so much effort in a code base of questionable quality.

Instead, I will start a clean-room implementation of the firmware. This will then also use the new UI interactions that I have demoed for the oscilloscope. For example, all values like the target temperature will simply be settable by pressing and holding a button while turning the iron. Turning in one direction will increase the temperature, turning it in the other direction will decrease it.

Also, forwarding the oscilloscope data over USB is an option. This would allow the measurements to be displayed on your Android device, for example.

A simple multimeter will of course also be added.

In the end, what specs could you expect from it? Roughly:

  • 0 to 30V input range
  • 2MSPS
  • >30Hz display update rate

Summary

In this post I’ve shown you a demonstrator of a soldering iron turned into an oscilloscope. The obvious question for this project is of course: Why??

Well, it probably won’t replace your bench oscilloscope. But having even the simplest kind of oscilloscope at hand while you are away from the bench can be a real lifesaver. Some examples of questions to which the soldering iron oscilloscope can give you an answer:

  • Is there a signal at all?
  • Does the timing of the signal look sensible?
  • Does the voltage look sensible?

And I would say that (at least in the digital domain) 80% of problems can be eliminated by having the answer to these questions. Things like:

  • I configured the UART transmitter wrong; it outputs its data on a different pin
  • I see a valid UART signal so the problem must be at the receiving end
  • Normally, the 3.3V I2C bus shouldn’t idle at 1.2V!? Did I forget to enable the pullup resistors?
  • Why is there no ACK bit from the I2C device? Am I sending data to the correct address?


Once a usable alpha version of the rewritten firmware is available, I will publish a link to it on my blog. Please subscribe if you want to be notified about this.



Wifibroadcast RPI FPV image v0.4

I have a small timeslot until my New Year's Eve party begins, so I thought I could use it to upload the newest Wifibroadcast RPI FPV image: https://github.com/befinitiv/rpi_wifibroadcast_image_builder/releases/tag/v0.4.

The changes are:

  • Merged TX and RX images. You can write this image file onto the SD cards of both RX and TX Raspberries. The devices take on their roles depending on whether a camera is connected or not. In short: a Raspberry with a camera behaves like a TX device, a Raspberry without a camera behaves like an RX device.
  • Removed the message flood of TX to the systemd journal to avoid growth of log files. This allows for long-running TX devices.

 

The most important change is point 1: This means for you that you only have to download one image instead of two. Also, there is no need to mark the SD cards with TX and RX since they are all the same. This makes things much easier.

I wish you a happy new year! Hopefully with lots of HD FPV fun 🙂

 

Visually appealing time-lapse videos

This post presents a Python script that automatically selects “fitting” images for a time-lapse video.

Update 12/13/2015: Some people asked for example videos. Unfortunately, I only have a single set of data that I cannot publish here. If someone would like to see results of this script and provide me with data (of course with several images per day so that there is something to select from) I would process it and present it here.

Most people who have created an outdoor time-lapse video will have encountered the problem of flickering video due to sunny images followed by cloudy images or vice versa. One common way to get around this problem is to shoot more than one image per day and then select the best-fitting images. But this can be quite a lot of work: if you shoot a picture every 30 minutes, you’ll end up with close to 20,000 images per year, and it would take you a while to select the right images.

Therefore, I wrote a simple script that selects the one image per day that best fits the day before. The script then continues to the next day and finds the image that best matches the previous day's pick.

The obvious question now is: What is the “best” image? As said before, it should contain as little change in brightness as possible. Also, the change in color should not be too big.

I used quite a hacky approach that is far from optimal but works well enough for me. I compute the sum of absolute differences (SAD) over all pixels between the reference image and the candidate images. The SAD just subtracts all pixels of the reference picture from the candidate picture; the absolute value of this difference image is then summed up to end up with a single similarity score. The picture pair with the smallest score is considered the most similar. This SAD is computed for all three color channels separately to also get some simple kind of color comparison into the process.
One important step that I have not yet mentioned is the preprocessing of the images. Taking the SAD of the raw camera images is not the best idea: the pixel values at an exact position (x,y) of the two images usually have little in common. There are several reasons for this:

– Sensor noise
– Small camera/scene movements (think of a moving leaf)

Ideally, these effects should not have a big influence on the similarity scoring. Therefore, I low-pass filter the images (aka blur them) before comparing. This averaging removes noise as well as small movements, while the overall appearance like brightness and color is maintained.
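
A rough sketch of this scoring (not the actual sel.py code; just an illustration assuming Pillow and numpy are installed and both images have the same dimensions):

import numpy as np
from PIL import Image, ImageFilter

def similarity_score(path_a, path_b, blur_radius=10):
    """Lower score = more similar: SAD over all pixels and all three color
    channels, computed on blurred copies to suppress noise and small movements."""
    a = Image.open(path_a).convert("RGB").filter(ImageFilter.GaussianBlur(blur_radius))
    b = Image.open(path_b).convert("RGB").filter(ImageFilter.GaussianBlur(blur_radius))
    diff = np.abs(np.asarray(a, dtype=np.int32) - np.asarray(b, dtype=np.int32))
    return int(diff.sum())

# Pick today's candidate that best matches yesterday's chosen image:
# best = min(todays_candidates, key=lambda p: similarity_score(yesterday, p))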

You’ll find the code here.

An example of how to use it:

hg clone https://bitbucket.org/befi/timelapseselector/
cd timelapseselector
mkdir motion sel
cp /mnt/sdb1/*.jpg motion #change this accordingly
python sel.py

Now the script will run through the pictures in the folder “motion” and create symbolic links to fitting images in the folder “sel”. You might want to adapt the parameters inside the script like the number of images per day and the start image of the first day.
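
The selected frames in “sel” can then be turned into the actual video with any encoder of your choice, for example (a hypothetical ffmpeg one-liner, assuming the frames are JPEGs):

ffmpeg -framerate 24 -pattern_type glob -i 'sel/*.jpg' -c:v libx264 -pix_fmt yuv420p timelapse.mp4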