
Installing Linux into a 286 laptop from the year 1989

Ever wondered what useful things you could do with a 32-year-old laptop? Well, this is one option:

In this project I added a Raspberry PI Zero to the insides of the laptop. The two are connected via a serial link and exchange data over it. You could use this for several applications:

  • Using the 286 with a terminal emulator as an interface to the Linux of the Raspberry PI. This way you can do the typical Linux shell stuff on a retro machine. With this you are quite far up on the hipster level 🙂
  • Connecting the DOS on the 286 to the Internet
  • Transferring files to the DOS filesystem

Terminal emulator

On the 286 side you need to install MS-DOS Kermit: http://www.columbia.edu/kermit/mskermit.html

On the Raspberry side there is nothing to do, so this is really easy to set up.

Connecting DOS to the internet

The setup needed for this is quite nicely described in https://medium.com/swlh/connecting-a-286-dos-pc-to-the-internet-through-a-serial-connection-in-2019-b93a422ff094

Transferring files

Transferring files can easily be done using the internet tools described above. Alternatively, you could also use the Kermit file transfer for a more direct (and likely faster) transmission of files.

Summary

To summarize, using this machine is quite fun. It is also borderline useful. But the funniest thing is that you can install a complete system into the cracks of the original laptop that is ~100x more powerful than the laptop itself. Crazy times 🙂

Android 9 on a GSM phone from year 2000

This post shows a small nonsense project that “installs” Android onto a phone from the year 2000.


This post is meant as an addition to the video presenting the device:

The items used in this project are:

To remove the watch interface from the smartwatch, you need to install the following two apps:

  • Nova launcher
  • Fluid navigation gestures (to add touch “Back”, “Home” and “Recent apps” soft buttons)

Since I needed to mount the display rotated by 90°, I had to permanently rotate the display contents. This can easily be done by ADB commands as described here.

Is it usable?

Sort of. Basically, everything that the average Android user uses works. It is sometimes a bit uncomfortable due to the screen size, but together with speech-to-text it is even doable to write longer messages and such. If your main need is to have a phone and you only do browsing and other things occasionally, this is quite a nice option. It definitely is very good at one particular thing: reducing your smartphone usage 🙂

Eachine Mustang P-51D aileron servo repair

The aileron stopped working on my Eachine P-51D plane. This post describes how to solve this issue.


Update (08/06/2021): I found a much simpler fix for the issue. See the bottom of this post.

This post is a follow-up to my previous post about this plane. In the last post I described the damages and repairs resulting from a battery connected in reverse. After these fixes, the plane flew as good as new for a couple of months. Then, suddenly, it behaved very oddly. It showed the following symptoms:

  • The aileron control stick on the RC moved the rudder servo instead of the aileron servo
  • The rudder control stick on the RC was completely dead
  • Everything else worked as expected
  • The aileron servo moved left-right-center on power-up but otherwise was completely stationary

As odd as it looks, there is a simple explanation for this behavior. The receiver PCB I repaired in my first post is actually used in different types of planes. The P-51D is a 4 channel type, but there are also some 3 channel planes using this PCB. So for some reason, the receiver thinks it is mounted in a 3 channel plane. This post finds the cause of this and also shows an (extremely stupid) fix.

Analysis

As described above, the reason for the odd behavior is that the receiver thinks it is mounted in a 3 channel plane. So the obvious question is: Why?

The next obvious question is: How does the receiver actually know which type of plane it is? A jumper on the PCB or different firmware versions would be simple measures to differentiate between plane types. But the designers of the receiver took a more elegant way.

Rudder and elevator servos are integral parts of the receiver PCB. This makes sense since these are needed in both 3 and 4 channel planes. The aileron servo is connected via a plug to the receiver PCB. And this is how they differentiate: They simply detect whether an aileron servo is connected or not.

Alright, easy enough: this means that the receiver cannot detect the aileron servo anymore and so it switches to 3 channel mode. But… the servo actually works fine and even moves at power-up! So the receiver sends movements to the aileron servo and the servo moves accordingly. What?? Why does the receiver then think that there is no servo connected? Maybe the receiver is to blame?

It turns out that the servo is the culprit. And I am not the only one having this issue: https://www.youtube.com/watch?v=KNiZMd9VDoY Apparently, the servo was damaged as well from the reverse battery incident but lived on for a couple of months. After that it failed as described above.

Unfortunately, I do not have a suitable measurement tool at hand to understand how exactly the receiver determines the presence of the servo motor. I tried some simple things like adding a pull-up or pull-down to the data line, but that did not change anything. So it seems to be a bit more involved than that.

Solution

The obvious and easy solution to the issue is then to simply replace the aileron servo. But hey, you are on my blog so of course I solved the issue the stupid way… I built a Frankenstein servo.

I still had one of these cheap Turnigy servos laying around:

So what I did was I opened up the original aileron servo, removed the electronics and put in the electronics of the Turnigy servo. And this (to my own surprise) fixed the issue: The receiver recognizes the Frankenservo and it moves like the old one does.

Here you can see a picture of the monster:

If you are doing something similar and your servo spins endlessly after power-up you simply need to swap the polarity of the DC motor.

Summary

To summarize, my poor little Eachine P-51D flies again thanks to the Frankenservo. Just remember: Stupid causes (reverse battery) are best fixed with stupid solutions (Frankenservo) 🙂

Update (08/06/2021)

Even with the fixes described above, I still occasionally had the issue that the aileron servo did not work. Therefore, I researched a bit further how the actual detection of the presence of the aileron servo works. Remember that on power-up the aileron servo moves from left to right? This is part of the detection. Apparently, the firmware measures the battery voltage: since a moving servo draws more current, the battery voltage drops, and this drop reveals the presence of the servo.

For whatever reason, this detection did not work reliably anymore. The fix is really, really simple: just increase the current draw of the aileron servo. I simply wired an 8 Ohm resistor in parallel to the internal DC motor connections of the aileron servo. This way the servo consumes more current when it moves and the flight controller is again able to detect its presence. This fix will also work without the Frankenservo modification (using the standard aileron servo).
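As a quick sanity check, Ohm's law gives the extra current and heat the parallel resistor adds. The 3.7V supply voltage below is an assumption (1S LiPo nominal voltage); the plane's actual servo rail may differ:

```python
# Rough estimate of the extra load added by the 8 Ohm resistor wired in
# parallel to the servo's DC motor. V_SUPPLY is an ASSUMED 1S battery
# voltage, not a measured value from the plane.
V_SUPPLY = 3.7    # volts (assumed)
R_PARALLEL = 8.0  # ohms

extra_current = V_SUPPLY / R_PARALLEL     # Ohm's law: I = V / R
extra_power = V_SUPPLY ** 2 / R_PARALLEL  # P = V^2 / R, dissipated as heat

print(f"Extra current: {extra_current * 1000:.0f} mA")
print(f"Extra power:   {extra_power:.2f} W")
```

So the resistor draws roughly half an amp whenever the motor is powered, which is a sizable drop on a small 1S battery and plausibly enough for the detection to trigger.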

USB-C PD for TS100 soldering iron, DPS5005 lab power supply & power drill

This post describes some simple mods that add USB-C PD to typical workshop devices


Until recently the typical round DC jacks were the de facto standard for supplying DC power to devices. USB replaced some of these connectors for low power applications but for everything above 1A you typically had to use DC jacks.

While DC jacks are simple and easy to use, they are also quite cumbersome. There are several different jack diameters, even different diameters of the inner pins, different polarities and different voltages. This means that you typically have a bunch of power supplies and you have to check carefully before each use that you connect the right supply to your device. The chance of frying a device is not insignificant.

Luckily, some years ago USB PD was standardized. This seems to be the first time that you have access to an almost universal DC power standard that eliminates all the problems of the DC jacks. There is just one connector type, no risk of wrong polarity, the device requests the voltage it needs and almost all power sources are compatible with multiple voltages. So it offers you a true plug-and-play experience and allows you to reduce your stack of power supplies to just one.

Requesting a specific voltage from a USB PD supply

USB PD uses the USB control channel to allow a device to request a specific voltage. While this might sound complicated at first, it is really easy to use in practice. What makes it easy is that you can buy modules that do the communication for you:

https://i2.wp.com/www.alexwhittemore.com/wp-content/uploads/2020/05/BI07138A1.jpg?ssl=1
ZYPDS module

Above you see the ZYPDS module which I used. It is cheap and very small. You can use jumpers to set it to 9V, 12V, 15V or 20V.

TS100 soldering iron

In my TS100 soldering iron I replaced the DC jack with a 20V ZYPDS module (props to https://hohlerde.org/rauch/blog/2019-10-27-TS100-USB-C/ ). At 3A this gives you a theoretical maximum of 60W, which is quite powerful for such a small iron. It heats up insanely fast and can solder even very heavy gauge wire:

TS100 with USB PD

DPS5005 lab power supply

In case of the DPS5005 I also used the same 20V ZYPDS module, allowing me to realize a power supply with almost 60W of power:

Luckily, the standard housing already has a hole for micro USB that can also be used for USB-C. I designed a simple 3D printable part that can be screwed into the housing and holds the ZYPDS nicely at the back of the unit:

The OpenSCAD source code for the part can be found here:

$fn = 64;


SCREW_D = 3;
SCREW_Z = 10;
SCREW_SPACING_X = 13;
SCREW_POS_Y = 4; //distance to outer wall

BLOCK_X = 20;
BLOCK_Y = 10;
BLOCK_Z = 15;


USB_X = 9.4;
USB_Z = 3.6;
USB_DEPTH_Y = 6.1;
USB_POS_Z = 9.4; //from bottom of housing to bottom of usb plug

PCB_X = 10.8;
PCB_Z = 2.4;
PCB_POS_Z = USB_POS_Z-1; //from bottom of housing to bottom of pcb

difference() {
    translate([-BLOCK_X/2,0,0])cube([BLOCK_X, BLOCK_Y, BLOCK_Z]);

    //screws bottom
    translate([SCREW_SPACING_X/2,SCREW_POS_Y,-0.1])cylinder(d=SCREW_D, h=SCREW_Z+0.1);
    translate([-SCREW_SPACING_X/2,SCREW_POS_Y,-0.1])cylinder(d=SCREW_D, h=SCREW_Z+0.1);
    
    translate([-USB_X/2,-50+USB_DEPTH_Y,USB_POS_Z])cube([USB_X, 50, USB_Z]);
    
    translate([-PCB_X/2,-50/2,PCB_POS_Z])cube([PCB_X, 50, PCB_Z]);
    
    translate([0,0.5,0.5])rotate([45,0,0])translate([-50,-100,-50])cube([100,100,100]);
}

Power drill

Lastly, I converted my power drill to USB PD. It works great (see it in action in the video above) but the torque is of course not very high at 3A. But for my purposes it is enough.

Power source

Some final notes on the power source. You can of course use your typical USB C laptop chargers. As said above, it is nice to have just a single power supply for all your needs. But the real benefit of USB PD in my opinion is if you use it with a power bank. This way you can have your portable soldering iron, lab power supply, etc. all powered by the same power bank. No more messing around with thick mains cables. And no more frustration from separate batteries for each device that are either empty or damaged. You just take care of the single power bank and that’s it.

The power bank I am using is the RAVPower PD 60W (Amazon affiliate link):


Conclusion

Transforming my devices to USB PD makes my life easier. Fewer power supplies, fewer batteries, fewer cables I have to worry about. No doubt I will continue to upgrade other devices to USB PD in the future.

Eachine Mustang P-51D Electronics Repair

This post describes how to repair the electronics of an Eachine Mustang P-51D after a reverse battery connection


Recently, I bought a 400mm Eachine plane for flying around in the park. At a weight of 50g it is a perfect park flyer with only very little risk of injuring someone. This is quite a nice change compared to my Wifibroadcast quadcopter. Here is a picture of the plane:

Eachine P-51D Mustang

Being so small, the plane is still extremely stable in the air thanks to its active stabilization system. In fact, it flies like a much larger plane in a very compact form factor. I really like it.

But what I seriously dislike are the battery connectors. It is quite hard to determine in which direction you need to connect them. What is even worse is that they actually can be connected in reverse! This is really a no-go for a consumer product. I found this out the hard way, my plane released the magic smoke and was dead from that point on.

Since I only had the plane for a couple of days I thought about turning it in for warranty. But meh, how boring is that? You just write stupid emails with customer support, wait and learn nothing in the process. So I opened it up and repaired it by myself. This is what this post is about. It is just a hint to everyone who was as stupid as me (and connected the battery in reverse) or who just wants to know what is inside before opening it.

Disassembly

Disassembly is rather easy. All foam parts are glued together with some kind of very flexible glue that lets you lift off the main wing a bit. Between the wing and the fuselage the glue then gets stringy and can be cut with an X-Acto knife. You just have to be careful and patient, and after a minute or so the wings should be detached. Under the wings you find the main PCB with the elevator and rudder servo motors:

To remove the PCB you first need to unscrew the connectors of the servo motors. After the screws are removed, these can simply be pulled upwards. Next, the PCB has to be removed from the fuselage. Again, the same type of glue is used so you have to carefully bend the fuselage and cut the strings of the glue. Finally, you need to unsolder the motor wires (marked M+ and M-) and unplug the aileron servo connector.

Repair

The actual repair is quite simple. All that was blown was a 3.3V regulator. Luckily, it acted like a fuse and protected the microcontroller, sensors and radio that are supplied by it. You can see the part and its connections in the following image:

All you need to do is to replace the regulator with another one. The electronics consume roughly 40mA so every regulator above 100mA should work with sufficient safety margin.

I found a slightly bigger regulator in my box of scrap PCBs and mounted it to the PCB:

The input of the new regulator is connected via a thin wire that could act as a fuse. This way I have some extra protection in case I am stupid enough to connect the battery in reverse again (which is quite likely… ).

After I added the new regulator I was very happy to see that all the other components survived and the plane is able to fly again.

To glue everything back together I recommend “Uhu POR” which is specifically made for flexible joints. To my eye the original glue of the plane is very similar to the Uhu product.

Conclusion

I can highly recommend the Eachine plane. It is a really fun toy and combines the good properties of a large stable plane and a compact easy to carry plane. I cannot recommend connecting the battery in reverse, obviously. But it is not the end of the world.

Lora localization in Jupyter

This post shows a localization visualization in Jupyter that uses LoraWan receptions of TheThingsNetwork to localize


Introduction

In my previous post I presented a cheap and simple LoraWan node:

LoraWan node

There are of course many things you could do with such a device: Home automation, weather stations, etc. In this post we will however take a closer look at a tracking application. In this type of application you typically want to know the location of “something” and communicate this information. Communication is in this context quite clear (Lora), but what about localization?

The first thing that comes to mind is of course GPS. The receivers are cheap, easy to integrate and deliver quite good global localization with meter-range accuracy. But… these devices typically only work if you have a clear line of sight to the satellites. In most cases this means that you need to be outdoors to use them. What to do if your project requires indoor usage? Or underground usage, where you won't even have any Wifi signals which you could use for localization?

This page presents an alternative to GPS that uses the receive signal strength (RSSI) of Lora receptions to estimate the position of the transmitter. The accuracy that one can expect from such a solution is of course orders of magnitude lower than GPS. But in theory this should also work indoors and (relatively) deep underground.
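A common way to turn an RSSI reading into a (very rough) distance estimate is the log-distance path loss model. The sketch below is purely illustrative: the reference RSSI at 1m and the path loss exponent are assumed values that would need calibration for any real environment:

```python
import math

def rssi_to_distance(rssi_dbm, rssi_at_1m=-40.0, path_loss_exp=3.0):
    """Estimate distance in meters from an RSSI reading using the
    log-distance path loss model:
        RSSI(d) = RSSI(1m) - 10 * n * log10(d)
    Solving for d gives the formula below. rssi_at_1m and the
    exponent n are ASSUMPTIONS; they vary strongly between free
    space, urban and indoor environments.
    """
    return 10 ** ((rssi_at_1m - rssi_dbm) / (10 * path_loss_exp))

for rssi in (-103, -82):
    print(f"RSSI {rssi} dBm -> ~{rssi_to_distance(rssi):.0f} m")
```

Note how sensitive the result is: with these constants, a 21 dB difference in RSSI translates into a factor of five in estimated distance, which already hints at the accuracy limits discussed later.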

TheThingsNetwork data

Whenever a node sends a packet in TheThingsNetwork, you not only get the data of the node but also some very helpful metadata. Here you can see an example of the data you get:

OrderedDict([('app_id', 'befinitiv-testnode'),
             ('dev_id', 'testnode'),
             ('hardware_serial', '009B45E4337B4515'),
             ('port', 1),
             ('counter', 49),
             ('payload_raw', '9gEcAQ=='),
             ('metadata',
              OrderedDict([('time', '2020-04-01T07:06:54.901217967Z'),
                           ('frequency', 868.3),
                           ('modulation', 'LORA'),
                           ('data_rate', 'SF10BW125'),
                           ('airtime', 329728000),
                           ('coding_rate', '4/5'),
                           ('gateways',
                            [OrderedDict([('gtw_id', 'eui-fcc23dfffe0e316a'),
                                          ('timestamp', 721532820),
                                          ('time', ''),
                                          ('channel', 1),
                                          ('rssi', -103),
                                          ('snr', 2.5),
                                          ('rf_chain', 0)]),
                             OrderedDict([('gtw_id', 'eui-60c5a8fffe76636c'),
                                          ('timestamp', 2549255116),
                                          ('time', ''),
                                          ('channel', 6),
                                          ('rssi', -82),
                                          ('snr', 5.5),
                                          ('rf_chain', 0)])])]))])

As you can see above, TTN tells us that our node was seen by two gateways with the IDs “eui-fcc23dfffe0e316a” and “eui-60c5a8fffe76636c”. It also tells us the receive signal strengths (RSSI) of -103 and -82 dBm respectively. The only other ingredient you need to estimate the location of your node is the location of the gateways. Luckily, TTN lets you query this information:

https://www.thethingsnetwork.org/gateway-data/gateway/eui-fcc23dfffe0e316a
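Pulling the per-gateway RSSI values out of such an uplink message is a simple dictionary walk. The sketch below uses a plain dict with the same structure and field names as the example message above (these match what TTN delivered at the time; other stack versions may name fields differently):

```python
# Minimal example of extracting gateway IDs and RSSI values from a TTN
# uplink message. `uplink` mimics the example message above, reduced to
# the fields we need here.
uplink = {
    "dev_id": "testnode",
    "metadata": {
        "gateways": [
            {"gtw_id": "eui-fcc23dfffe0e316a", "rssi": -103, "snr": 2.5},
            {"gtw_id": "eui-60c5a8fffe76636c", "rssi": -82, "snr": 5.5},
        ]
    },
}

# Map each gateway ID to its RSSI reading
receptions = {gw["gtw_id"]: gw["rssi"] for gw in uplink["metadata"]["gateways"]}

# Print receptions sorted from weakest to strongest
for gtw_id, rssi in sorted(receptions.items(), key=lambda kv: kv[1]):
    print(f"{gtw_id}: {rssi} dBm")
```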

With all this info at hand, you should be able to guesstimate the location of your node. But there are still many unanswered questions: How many gateways typically receive data from your node, how stable and meaningful are the RSSI values, …? To answer these questions I developed a visualization of the data that will be presented in the next section.

Jupyter map visualization of TTN data

As said above, even though the theory of RSSI localization is simple, there are still many questions to answer. And the best way in this case to find answers is to visualize the data that we receive.

Therefore, I developed a Jupyter notebook that visualizes the data as follows:

Jupyter TTN data visualization

The image above shows gateway positions as green markers, receptions from the node as red circles which are scaled by the RSSI value, and the position of the node as a blue marker. The position of the node has been determined by GPS to have a reference against which to judge the TTN measurements. The GPS track is also shown as a blue line on the map.
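The notebook only visualizes the data, but a simple way to turn gateway positions and RSSI readings into an actual position estimate is an RSSI-weighted centroid: gateways with stronger receptions pull the estimate towards them. This is just an illustrative sketch with made-up coordinates, not the notebook's code:

```python
def weighted_centroid(receptions):
    """Estimate a position as the centroid of gateway positions,
    weighted by linearized RSSI (10^(RSSI/10)), so that stronger
    gateways dominate. `receptions` is a list of
    (lat, lon, rssi_dbm) tuples."""
    weighted = [(lat, lon, 10 ** (rssi / 10)) for lat, lon, rssi in receptions]
    total = sum(w for _, _, w in weighted)
    lat = sum(la * w for la, _, w in weighted) / total
    lon = sum(lo * w for _, lo, w in weighted) / total
    return lat, lon

# Made-up gateway coordinates around Berlin, using the two RSSI values
# from the TTN metadata example:
est = weighted_centroid([
    (52.520, 13.405, -103),  # weak reception
    (52.530, 13.380, -82),   # strong reception
])
print(f"Estimated position: {est[0]:.4f}, {est[1]:.4f}")
```

With linear power weights, 21 dB of RSSI difference means a weight ratio of more than 100:1, so the estimate lands almost on top of the stronger gateway; a real implementation would likely dampen this, e.g. by weighting with distance estimates instead.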

The notebook works in two modes:

  • Live
  • Replay

In live mode, the notebook connects to TTN and immediately displays receptions of your node on the map. These receptions can also be stored in a file for later analysis. If you load this file, you can use the replay mode to scroll through all the receptions of your node over time. To have some kind of reference measurement, you can also load a GPX file into the map. GPX positions and TTN data are then associated via their timestamps. The following image shows the GUI elements of the notebook:

GUI elements of notebook
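The association between GPX points and TTN receptions mentioned above works by timestamp: each reception is matched to the GPX point closest in time. A minimal sketch of that matching (timestamps simplified to plain seconds; the real data uses ISO 8601 strings, and this is an illustration rather than the notebook's actual code):

```python
def closest_gpx_point(reception_time, gpx_track):
    """Return the GPX point whose timestamp is closest to the given
    reception timestamp. `gpx_track` is a list of
    (timestamp_s, lat, lon) tuples, assumed non-empty."""
    return min(gpx_track, key=lambda p: abs(p[0] - reception_time))

# Example track with a point every 20s (the node's send interval):
track = [
    (0, 52.5200, 13.4050),
    (20, 52.5210, 13.4080),
    (40, 52.5221, 13.4110),
]

# A reception at t=33s gets associated with the point at t=40s:
t, lat, lon = closest_gpx_point(33, track)
print(f"Matched GPX point at t={t}s: {lat}, {lon}")
```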

Usage of the notebook

Thanks to the MyBinder service, you can use the notebook simply by clicking on the following link: https://mybinder.org/v2/gh/befinitiv/ttn_map_localization/master?filepath=ttn_map_localization.ipynb

(Note that MyBinder does not allow network connections, so live connection to TTN is not possible)

To run the notebook on your machine, all you need to do is:

git clone https://github.com/befinitiv/ttn_map_localization.git
pip3 install jupyter-repo2docker
jupyter-repo2docker -E ttn_map_localization

The repository contains an example TTN recording of my node of a drive through Berlin as well as the GPX file of that drive.

Analysis of the data

As said before, the purpose of the notebook is to get a feeling of the data. I will now share my impressions of the data and how suitable I find it for the purpose of localization. Please note that the data I am talking about is also included in the repository as the example files (.tickle and .gpx) so you as well can get first hand experience of the data.

Stability

The easiest way to assess stability is to look at a time where the node is not moving:

The picture above looks quite reasonable. All but one gateway received data and also the RSSI readings seem quite reasonable. However, without the node moving, just 20s later the data looks like this:

In the image above, only one gateway received a signal, and this time also a much stronger signal. Mmmh, stability-wise not great. And you find many other such examples in the data.

Completeness

The node was sending packets every 20s. The question of completeness is: How many of these packets were received by TTN? And the answer to this is unfortunately: Not that many. In fact, there are quite significant gaps in the data, in the order of several kilometers without any reception.

For example, from here:

to here:

we received nothing at all. The trip was made by train, running above ground. Granted, the tracks on this section are in a trench, but still, no reception at all is quite disappointing.

Sensibility

The following image shows an example that makes me question whether I or TTN have a bug somewhere:

Questionable reception pattern

The problem with the image above: Why don’t the closest gateways to our node receive the data but instead gateways very far away? Especially the “Steglitz” gateway is _really_ far off. The distance between the node and gateway is 13km, crossing through the inner city of Berlin. This seems to me quite unlikely, especially in combination with the close-by gateways receiving nothing.

Another odd example is shown in the following image:

Suspiciously high RSSI

Above you see a gateway at 8km distance showing an RSSI comparable to the node being right next to the same gateway:

Conclusion

In summary, the data that I received from TTN seems a bit odd to me. Of course I could imagine explanations for all the odd cases I encountered but to my eye the data just does not look 100% right. This might be due to a bug hidden in my code or TTN or (and more likely) due to my lack of experience with Lora signal propagation in urban environments.

I never had the expectation of getting close to the 1m accuracy of GPS, not even 10m. My hopes were more in the range of a couple of 100m. But after looking at the data, I am more inclined to say that such a localization could just tell me “Berlin” or “not Berlin”, which is of course in many cases not really helpful.

I would highly appreciate it if someone experienced with Lora signal propagation could shed some light on the oddities I found; the comment section is open.

LoraWAN node with Bluepill, RFM95 & Platform.io

This post describes a quick, cheap and easy build of a LoraWAN node that can be used for example in the TheThingsNetwork


Introduction

This post will show hardware and code to realize a simple TheThingsNetwork LoraWAN node. This has been done already several times by other people, however not with the ingredients I used (Bluepill, RFM95, Platform.io). I am posting this here because it might be of help to anyone trying to cook the same soup as I did.

This post is the first in a series of posts in which I will use the node introduced here. Please be patient for the follow-ups 🙂

Hardware

The hardware is very simple: an RFM95 LoRa module connected to a Bluepill microcontroller board. To do this, you just need to route a couple of wires:

Bluepill   RFM95
--------   -----
3.3V       VCC
GND        GND
PA5        SCK
PA6        MISO
PA7        MOSI
PA4        NSS
PC14       NRESET
PA1        DIO0

Connections from Bluepill to RFM module

I also removed the resistor to the power LED (R1) on the BluePill to save ~2mA:

Removed power LED resistor R1

On one of my BluePill boards I also removed the 3.3V regulator, but I could not notice any significant reduction in power consumption (when backfeeding 3.3V directly into the BluePill). So I recommend leaving the LDO as is.

The final hardware is shown in the following picture:

LoraWAN node hardware

Software

The software I used on my node is based on an ATTiny node ( https://gitlab.com/iot-lab-org/ATtiny84_low_power_LoRa_node_OOP ). I removed all the sensor stuff so that it just sends a fixed dummy message. But the largest part was porting it to the STM32 based BluePill. For this, I removed all AVR specific parts and added some low power STM32 routines. The code now sends a message every 20s and sleeps in between (consuming ~0.3mA). The consumption is still a bit on the high side but for now it is ok for my upcoming projects.

You can find the software here: https://github.com/befinitiv/lorawan_bluepill_node

Building it with Platform.io should be self-explanatory. You might however need to adapt your programming hardware in the platformio.ini file.

Before building you also need to copy the file secconfig_example.h to secconfig.h and fill it with the keys you get for example from https://thethingsnetwork.org/ for your node. I did not include my keys in the repository for obvious reasons 🙂

After you have programmed your BluePill, you should see a new message in your thethingsnetwork.org console every 20s (if you have a gateway in range).

Range test

I tested my node with a Lora spreading factor of 10, which is AFAIK the maximum supported by thethingsnetwork.org. With this I achieved some quite impressive results.

The first test I did was in the basement of my house. From there I could reach a gateway with quite a good signal strength of -91 dBm at a distance of 500m. What is impressive is that the direct path crossed several blocks of 5 story buildings that are typical for the inner city of Berlin.
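To put the -91 dBm reading into perspective, one can compare it against ideal free-space path loss. This is a back-of-the-envelope sketch; the 14 dBm transmit power is an assumption (the usual EU868 limit), not a measured value:

```python
import math

def fspl_db(distance_m, freq_mhz=868.0):
    """Free-space path loss in dB using the standard formula
    FSPL = 20*log10(d_km) + 20*log10(f_MHz) + 32.44"""
    return (20 * math.log10(distance_m / 1000)
            + 20 * math.log10(freq_mhz) + 32.44)

TX_POWER_DBM = 14  # ASSUMED transmit power (typical EU868 limit)
distance_m = 500
measured_rssi = -91

expected_rssi = TX_POWER_DBM - fspl_db(distance_m)
print(f"Ideal free-space RSSI at {distance_m} m: {expected_rssi:.1f} dBm")
print(f"Extra loss through buildings: {expected_rssi - measured_rssi:.1f} dB")
```

Under these assumptions, the signal lost roughly 20 dB more than it would in free space, which is plausibly the cost of punching through several rows of buildings plus the basement wall.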

The next test was even more impressive. I took the node out on the street and monitored which gateways receive its data. And I was quite surprised to see that it reached close to 2km from ground level in the inner city of Berlin! One such example is shown in the following image:

In case you have never been to Berlin: for the most part, the city is extremely flat. This means that in the image above there was absolutely no clear line of sight. The signal passed through dozens of buildings and blocks. To be honest, I really have trouble imagining how the signal got through these obstacles. Really impressive.

Some final thoughts

Platform.io

This project was the first time that I used Platform.io to build my code. Previously I went more bare-metal with Makefiles et al. My first impression was: Wow, this makes embedded development really simple. Defining the hardware platform in just a handful of lines in the platformio.ini file is enough; everything else happens under the hood: download of the compiler, debug tools, libraries, etc. This was a nice surprise.

Still, when I let the graybeard in me speak I am troubled by all the dependencies Platform.io creates to a number of diverse parties. This is all good if everything is working but my fear is that in 5 years from now many of these dependencies will be broken and the code will not be buildable anymore. It is a bit like an illness of modern times, comparable to for example NPM. These systems let you start smoothly but as soon as something breaks you notice how little control you actually have.

TheThingsNetwork

This project is also the first time I am using TheThingsNetwork. And I must say that I am quite pleased with the service they are offering. So far, the RF technologies I have used are Wifi & Bluetooth LE. Both of these work well in cases where you spawn your own infrastructure (an access point, for example) but are pretty much useless where this is not possible. The only alternative I knew so far was cellular networks like 4G. But their complexity and subscription model always kept me from using them. TheThingsNetwork adds a new tool to my RF toolbox that fills a gap: It works over a large area without me building up my own infrastructure, and it uses “low” frequency bands, so the penetration capability and range, especially in urban environments, is fantastic. On top of that, the service is free, the hardware is cheap and the whole setup is relatively simple. What more could you ask for? 🙂

Move to GitHub

This is just a short info post to inform you that I moved all my repositories to GitHub:

https://github.com/befinitiv/

I was forced to do so thanks to BitBucket because they will delete Mercurial repositories this year. And I must say: Great job BitBucket! They just announced the deletion and said “you’ll have to migrate to git by yourself”, without offering any assistance. So I had to manually migrate all my repositories. And well, they showed me the finger and so did I by moving completely over to GitHub and abandoning BitBucket. Please don’t understand this post as an advertisement for GitHub. It is just an “un-advertisement” for BitBucket.

This change means of course that after deletion of the repositories the links to the repos in my posts will be broken. If you stumble upon such a situation please use the new GitHub repositories instead.

36C3 Wifibroadcast Talk

I held a talk about Wifibroadcast at the 36C3 Chaos Communication Congress in Leipzig this year. If you are interested you can watch the recording of my talk here: https://media.ccc.de/v/36c3-10630-wifibroadcast

FPV in 4K

This post describes how to achieve 4K resolution with a wifibroadcast-based FPV system using H265 encoding. The improved encoding can also be used to get better image quality or reduce bitrate at lower resolutions.


Note: This writeup will not give you a complete image that you can burn on your SD card. It shows a way that makes 4K FPV possible. And due to the image sensor used, it is not traditional 16:9 but instead 4:3 format.

Most of the readers of my blog will know the wifibroadcast system that I developed four years ago. Unfortunately, due to lack of time I was not able to continue working on wifibroadcast anymore. But the community came to my rescue: Many many others have taken over the idea and created incredible images based on the original wifibroadcast. They extended it with so much functionality, broader hardware support, etc. I was really happy to see what they made out of the project.

So as a small disclaimer: this post does not mean that I will pick development of the wifibroadcast project back up. It is just a small hint to the community about new potential for the project. This potential follows, of course, the wifibroadcast philosophy: use cheap, readily available parts and make something cool out of them 🙂

The choice of single board computer for wifibroadcast

Why did I choose the Raspberry PI for wifibroadcast? Obviously due to its low price and popularity? Wrong. That was just an extra bonus in the Wifibroadcast soup. The main motivation was that it has a well supported video encoder on board. Video encoding is one of the key ingredients required for wifibroadcast. It is not important whether you use a MIPS or ARM architecture, a USB or Raspberry camera, board A or board B… the only things you really need are a video encoder and a USB 2.0 interface.

And in terms of video encoding, the Raspberry PI sets the bar rather high. The encoder is so well integrated that it just works out of the box. It has gstreamer support if you want to build your own video pipeline, and if not, the camera app directly delivers compressed video data. It cannot get any easier.

Since the original wifibroadcast I have always been on the lookout for another SBC with a well supported video encoder, always hoping to lower the latency. Unfortunately, my latest find does not seem to deliver improvements in that area. Instead, it improves image resolution and coding efficiency by using H265 instead of H264 (the Raspberry PI does not support H265 and also cannot go beyond 1920×1080 resolution).

NVIDIA Jetson Nano

Some weeks ago, NVIDIA announced the Jetson Nano, a board targeted towards makers with a rather low price tag of $99. The following features caught my attention:

  • Raspberry PI camera connector
  • Well supported H264 and H265 encoding (through gstreamer)

I could not resist and bought one of these boards. Spoiler: It does not seem to improve in terms of latency but definitely in terms of resolution and bit-rate.

Other nice features of this board are the improved processor (quad-core A57 compared to the RPI’s A53), improved RAM (4GB DDR4 vs 1GB DDR2) and improved GPU (128 cores vs 4?).

Camera modes

The Jetson Nano supports only the Raspberry camera V2. With this camera, it provides the following camera modes (the ones most useful for Wifibroadcast):

  • 4K (3280 x 2464) at 21fps
  • FHD (1920 x 1080) at 30fps
  • HD (1280 x 720) at 120fps

In theory it should also support other frame rates, but I could not manage to change this setting. I did not try very hard though, so it is probably my fault.
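To get a feeling for how demanding these modes are for the encoder, here is a quick back-of-the-envelope comparison of their raw pixel rates. This is my own arithmetic on the numbers listed above, nothing Jetson-specific:

```python
# Raw pixel throughput of the three camera modes listed above.
modes = {
    "4K  (3280x2464 @ 21fps)": (3280, 2464, 21),
    "FHD (1920x1080 @ 30fps)": (1920, 1080, 30),
    "HD  (1280x720 @ 120fps)": (1280, 720, 120),
}

for name, (w, h, fps) in modes.items():
    mpix_per_s = w * h * fps / 1e6
    print(f"{name}: {mpix_per_s:.0f} Mpixel/s")
```

Interestingly, the 4K mode pushes roughly 170 Mpixel/s through the encoder, noticeably more than the HD 120fps mode (~111 Mpixel/s) and almost three times the FHD mode (~62 Mpixel/s).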

Latency measurements

I did some very quick latency measurements. The setup that I used was my Laptop with gstreamer and a wired Ethernet connection between Laptop and Jetson. The reason for a wired connection instead of Wifibroadcast is that my flat is heavily polluted on all wifi bands. If I had used Wifibroadcast transmission, the latency from the encoding and the transmission in the polluted environment would have mixed together. Using an Ethernet connection was a simple way of separating the encoding latency since I was only interested in that.

The measurement setup was rather crude: I used the TestUFO page and took a picture of the setup to measure the time offset, as shown here (right: “live view”, left: “transmitted view”):

This setup introduced of course a couple of variables. Screen update latency and decoding latency of my laptop, update speed of my smartphone displaying the TestUFO page. Not quite ideal… but good enough for a first impression.
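For readers who want to reproduce this kind of measurement: since the UFO on the TestUFO page moves at a known speed, the latency follows directly from the pixel offset between the live and the transmitted UFO. The numbers below are made up for illustration, not taken from my actual measurements:

```python
# Hypothetical worked example: turning a photo of the two screens into a
# latency number. Both values below are invented for illustration.
ufo_speed_px_per_s = 960   # horizontal UFO speed configured on the TestUFO page
offset_px = 202            # measured offset between "live" and "transmitted" UFO

latency_ms = offset_px / ufo_speed_px_per_s * 1000
print(f"latency: {latency_ms:.0f} ms")
```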

With this setup, I determined the following latency (only one measurement point per setting):

H265:

  • 4K 21fps: 210ms
  • FHD 30fps: 150ms
  • HD 120fps: 140ms

H264:

  • 4K 21fps: 170ms
  • FHD 30fps: 160ms
  • HD 120fps: 160ms
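One way to read these numbers is in frame periods instead of milliseconds. A quick conversion of the measured values (my own arithmetic):

```python
# Convert the measured latencies above into frame periods.
measurements = {
    "4K 21fps H265":  (210, 21),
    "FHD 30fps H265": (150, 30),
    "HD 120fps H265": (140, 120),
    "4K 21fps H264":  (170, 21),
    "FHD 30fps H264": (160, 30),
    "HD 120fps H264": (160, 120),
}

for name, (latency_ms, fps) in measurements.items():
    frames = latency_ms / (1000 / fps)
    print(f"{name}: {frames:.1f} frame periods")
```

Seen this way, the 4K and FHD latencies correspond to roughly 4-5 frame periods, while the 120fps modes burn through well over 15 frame periods, which hints that the latency is not dominated by the frame rate alone.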

Interpretation of latencies

One thing to note is that the CPU of my laptop was smoking quite a bit while decoding H265 or the high frame rates. So in the case of 4K H265 I would definitely expect improvements from using a proper decoding device (like a smartphone or even a second “decoding Jetson”).

Otherwise, I would say that the latencies are definitely usable for FPV scenarios. I think it would be worth investing more work into the Jetson Nano for Wifibroadcast.

Command lines

Receiver: gst-launch-1.0 -v tcpserversrc host=192.168.2.1 port=5000 ! h265parse ! avdec_h265 ! xvimagesink sync=false

Transmitter: gst-launch-1.0 nvarguscamerasrc ! 'video/x-raw(memory:NVMM),width=3280, height=2464, framerate=21/1, format=NV12' ! omxh265enc bitrate=8000000 iframeinterval=40 ! video/x-h265,stream-format=byte-stream ! tcpclientsink host=192.168.2.1 port=5000

Some remarks: to use H264, all occurrences of H265 in the command lines need to be replaced with H264. To change the resolution, the width and height parameters of the transmitter command need to be adapted. Note that lowering the resolution results in a cropped image, meaning a smaller field of view.
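For convenience, here are the H264 variants spelled out. They are a straight H265→H264 substitution of the commands above (note that the transmitter command requires the Jetson hardware and camera, so treat this as a pipeline sketch rather than something verified beyond the measurements in this post):

```shell
# Receiver (runs on the laptop):
gst-launch-1.0 -v tcpserversrc host=192.168.2.1 port=5000 ! h264parse ! avdec_h264 ! xvimagesink sync=false

# Transmitter (runs on the Jetson Nano):
gst-launch-1.0 nvarguscamerasrc ! 'video/x-raw(memory:NVMM),width=3280, height=2464, framerate=21/1, format=NV12' ! omxh264enc bitrate=8000000 iframeinterval=40 ! video/x-h264,stream-format=byte-stream ! tcpclientsink host=192.168.2.1 port=5000
```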

Quality comparison

One important question is of course: What do you gain from 4K+H265? I extracted some stills out of the video streams to compare things:

The difference in quality becomes quite clear if you switch back and forth between H264 and H265 (note: only 256 colors due to the GIF format):

The bad results from H264 are expected: even at high bitrates, the format is not really intended for 4K resolution. And Wifibroadcast usually runs at even lower bitrates than usual (all recordings in this post use the Wifibroadcast-typical 8mbit/s).
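To put that 8mbit/s into perspective, here is a rough estimate of the compression ratio the encoder has to achieve in the 4K mode. This is my own back-of-the-envelope arithmetic, assuming the 12 bits per pixel of the NV12 format used in the pipelines:

```python
# Rough compression-ratio estimate for the 4K 21fps mode at 8 Mbit/s.
raw_bits_per_s = 3280 * 2464 * 21 * 12   # NV12: 12 bits per pixel
compressed_bits_per_s = 8_000_000

ratio = raw_bits_per_s / compressed_bits_per_s
print(f"raw: {raw_bits_per_s / 1e6:.0f} Mbit/s, ratio: {ratio:.0f}:1")
```

In other words, the encoder has to squeeze roughly 2 Gbit/s of raw pixels into 8 Mbit/s, a compression ratio in the order of 255:1. It is no surprise that the more efficient H265 visibly outperforms H264 here.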

Sample recordings

Here you can find some raw sample recordings. They have been created with the following command line (resolution and codec changed accordingly):

gst-launch-1.0 nvarguscamerasrc ! 'video/x-raw(memory:NVMM),width=3280, height=2464, framerate=21/1, format=NV12' ! omxh265enc bitrate=8000000 iframeinterval=40 ! video/x-h265,stream-format=byte-stream ! filesink location=/tmp/4k.h265

https://www.file-upload.net/download-13588131/4k_h264_static.mp4.html
https://www.file-upload.net/download-13588129/fhd_h264_static.mp4.html
https://www.file-upload.net/download-13588130/hd_h264_static.mp4.htm

https://www.file-upload.net/download-13588134/hd_h265_static.mp4.html
https://www.file-upload.net/download-13588133/fhd_h265_static.mp4.html
https://www.file-upload.net/download-13588135/4k_h265_static.mp4.html

https://www.file-upload.net/download-13588139/4k_h265_moving.mp4.html
https://www.file-upload.net/download-13588141/4k_h264_moving.mp4.html

(Please excuse the use of such shady file hosting site… )

Summary

In summary I can say: yes, the Jetson Nano is a useful SBC for a Wifibroadcast transmitter. It is also very likely a good candidate for a Wifibroadcast receiver: plenty of processor power plus a GPU that (compared to the Raspberry) is capable enough to draw even the fanciest OSD.

The weight and size might be a bit too high for certain applications. But there are options here as well. The heat sink (the heaviest part of the system) could easily be replaced with something lighter (since you have lots of airflow on a drone). Also, the Nano is in fact a System-on-Module, meaning the whole system is contained on a small PCB roughly the size of a Raspberry PI Zero. The mainboard of the Nano is mostly just a breakout board for connectors. A custom board or even a soldering artwork of wires might turn the Nano into a very compact, very powerful system.

Also, the 4K resolution + H265 seems to improve Wifibroadcast quality quite a bit. Together with a suitable display device (high resolution smart phone for example) this has the potential to boost the video quality to a completely new level.

I really hope that the hint from this post will be picked up by somebody from the community. My gut feeling says that there is definitely potential in this platform and that the crude latency measurements of this post can be improved quite a bit (by using a different receiving unit and/or parameter tuning).

If someone has something they would like to share, please add a link in the comments. I will then happily integrate it into this post.