
Wifibroadcast RPI FPV images v0.2

Just a quick note: I released new images for Wifibroadcast RPI FPV. You’ll find them here:

The changes are:

  • Init-scripts have been replaced by systemd services. For example, the TX service can now be stopped like this:
    sudo systemctl stop wbctxd
  • New wifibroadcast version: This one supports rx status information
  • New Frsky-OSD: The OSD is now enabled by default, showing the signal strength of the receiving cards. If you prefer a plain camera image you can disable the OSD:
    sudo systemctl disable osd
  • Improved Raspberry 2 support for RX

Prebuilt Wifibroadcast Raspberry PI FPV images

This post gives an update to Wifibroadcast: Prebuilt images.

Since the beginning of Wifibroadcast, the only way to try it was to manually install and compile the software. To make it easier for people to try out the system, I have now created prebuilt images. They can be found here:

To use them, you just have to write the images onto SD cards, prepare two Raspberry PIs (camera + TP-LINK TL-WN722N as TX, display + TP-LINK TL-WN722N as RX) and you are done.

I moved the manual installation procedure from the main Wifibroadcast page to here in case you want to install Wifibroadcast onto an existing Raspberry PI image. Doing things yourself is also a good way to get to know the system better.

The images contain the basic features you would expect: video capture at TX and video display at RX. Automatic video recording onto USB sticks and support for a shutdown button are also included. The FrSky-OSD software is installed as well but disabled by default (since it depends a lot on the available hardware).

Automatic image creation

Since creating these images is quite time consuming (and I am lazy…) I automated the whole process. This also helps me and others to understand afterwards exactly what an image contains. And of course, others can put their own tweaks into the build system and benefit from all the points above as well.

The following commands are all you need to do to create the TX and RX images:

hg clone
cd rpi_wifibroadcast_image_builder

The build process automatically downloads the needed bits such as the basic Raspbian images, build tools and kernel. The kernel is patched, compiled and installed onto the base image. This is followed by chrooting into the image with the help of qemu (because the Raspberry PI is an ARM architecture) and (natively) installing Wifibroadcast and co. Oh, and all the configuration such as network card settings, enabling the camera and the HDMI mode is also set automatically.

Future plans

Currently, the image only supports 2.4GHz operation. I would like to extend the images to also support 5GHz Wifi sticks and choose the frequency automatically, depending on which sticks are connected. Unfortunately, I do not have compatible 5GHz Wifi sticks available so it is still unclear if and when this will happen.

Latency analysis of the raspberry camera

This post presents the results of an analysis looking for the causes of latency in a Raspberry wifibroadcast FPV setup.

The last thing a wifibroadcast FPV system would need to kill analog FPV is better latency. The robustness of the transmission using wifibroadcast is already better, the image quality certainly is; only the latency is still a bit higher.
This post will not present a solution and will also not measure total system delay. It concentrates on the latencies in the TX PI.

Test setup

To measure latency easily, you need an event to measure that is driven by the same clock as the measurement. I decided to go with a simple LED driven by a GPIO of the Raspberry. The LED is observed by the camera. This setup can be seen in the following figure:


I wrote a program that toggles the LED and outputs a timestamp delivered by gettimeofday() for each action. This allows me to know the time of the LED's actions relative to the PI's clock.
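A minimal sketch of what that toggler could look like (the GPIO number, the symmetric 2Hz timing and the sysfs interface are my assumptions here; the pin must already be exported and configured as an output):

/* led_toggle.c - sketch of the LED toggling program.
 * Prints state, seconds and microseconds per action, matching the log below. */
#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/time.h>

int main(void) {
    int fd = open("/sys/class/gpio/gpio17/value", O_WRONLY); /* assumed pin */
    if (fd < 0) { perror("gpio"); return 1; }
    int on = 0;
    for (;;) {
        struct timeval tv;
        gettimeofday(&tv, NULL);
        printf("%s\t%ld\t%ld\n", on ? "ON" : "OFF",
               (long)tv.tv_sec, (long)tv.tv_usec);
        fflush(stdout);
        pwrite(fd, on ? "1" : "0", 1, 0); /* switch the LED right after stamping */
        on = !on;
        usleep(500 * 1000); /* 2Hz toggle rate */
    }
}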

Capture-to-h264 latency

The latency from image capture to an h264 image comprises the capture and the compression latency. Since both happen hidden inside raspivid, they cannot easily be separated. And this is the number we more or less “have to live with” until Broadcom opens up the GPU drivers.

I measured the latency using a modified version of raspivid. The compression engine packs the h264 data into NAL units. Think of them as h264 images that are prefixed by a header (0x00000001) and written one after another to form the video stream. The modification I added to raspivid was that each NAL unit received a timestamp upon arrival. This timestamp was written right before the NAL header into the h264 stream.
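The principle can be illustrated with a small stand-alone filter (a sketch of the idea, not the actual raspivid patch; the textual “TS seconds microseconds” marker is a stand-in for the binary timestamp that the modification actually wrote):

/* nalu_stamp.c - prefix every NAL start code (0x00000001) on stdin with an
 * arrival timestamp on its way to stdout. A 4-byte delay line lets us emit
 * the timestamp before the start code itself leaves the program. */
#include <stdio.h>
#include <stdint.h>
#include <sys/time.h>

int main(void) {
    uint8_t q[4]; /* bytes read but not yet written */
    int n = 0, c;
    while ((c = getchar()) != EOF) {
        if (n == 4) { /* delay line full: release the oldest byte */
            putchar(q[0]);
            q[0] = q[1]; q[1] = q[2]; q[2] = q[3];
            n = 3;
        }
        q[n++] = (uint8_t)c;
        if (n == 4 && q[0] == 0 && q[1] == 0 && q[2] == 0 && q[3] == 1) {
            struct timeval tv; /* a new NAL unit starts here: stamp it */
            gettimeofday(&tv, NULL);
            printf("TS %ld %ld\n", (long)tv.tv_sec, (long)tv.tv_usec);
        }
    }
    for (int i = 0; i < n; i++) putchar(q[i]); /* flush the remainder */
    return 0;
}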

I also wrote a program that reads my “timestamped” stream and converts it into single images annotated with their corresponding timestamps.

For example, the LED toggling program gave me an output like this:

OFF	1441817299	404717
ON	1441817299	908483
OFF	1441817300	9102
ON	1441817300	509716
OFF	1441817300	610361
ON	1441817301	111039
OFF	1441817301	211695
ON	1441817301	717073
OFF	1441817301	817717
ON	1441817302	318342
OFF	1441817302	419034
ON	1441817302	919647
OFF	1441817303	20302
ON	1441817303	520965
OFF	1441817303	621692
ON	1441817304	122382
OFF	1441817304	223078
ON	1441817311	718719
OFF	1441817311	819685
ON	1441817312	320652
OFF	1441817312	421654

First column is the status of the LED, second column the seconds, third column the microseconds.

My h264 decoder then gave me something like this:

1441741267	946995	CNT: 0	Found nalu of size 950 (still 130063 bytes in buf)
1441741267	965983	CNT: 1	Found nalu of size 907 (still 129148 bytes in buf)
1441741267	983183	CNT: 2	Found nalu of size 1124 (still 128016 bytes in buf)
1441741268	3971	CNT: 3	Found nalu of size 1409 (still 126599 bytes in buf)
1441741268	27980	CNT: 4	Found nalu of size 3028 (still 123563 bytes in buf)
1441741268	51698	CNT: 5	Found nalu of size 7005 (still 116550 bytes in buf)
1441741268	68547	CNT: 6	Found nalu of size 9667 (still 106875 bytes in buf)
1441741268	89147	CNT: 7	Found nalu of size 10312 (still 96555 bytes in buf)
1441741268	109650	CNT: 8	Found nalu of size 19244 (still 77303 bytes in buf)
1441741268	138233	CNT: 9	Found nalu of size 19338 (still 57957 bytes in buf)
1441741268	160402	CNT: 10	Found nalu of size 31165 (still 26784 bytes in buf)
1441741268	172178	CNT: 11	Found nalu of size 19899 (still 6877 bytes in buf)
1441741268	195332	CNT: 12	Found nalu of size 25129 (still 105935 bytes in buf)
1441741268	213109	CNT: 13	Found nalu of size 24777 (still 81150 bytes in buf)
1441741268	236775	CNT: 14	Found nalu of size 24657 (still 56485 bytes in buf)
1441741268	259814	CNT: 15	Found nalu of size 24738 (still 31739 bytes in buf)
1441741268	274674	CNT: 16	Found nalu of size 24783 (still 6948 bytes in buf)
1441741268	300793	CNT: 17	Found nalu of size 24855 (still 106209 bytes in buf)
1441741268	314963	CNT: 18	Found nalu of size 18368 (still 87833 bytes in buf)
1441741268	339084	CNT: 19	Found nalu of size 17959 (still 69866 bytes in buf)
1441741268	365756	CNT: 20	Found nalu of size 17958 (still 51900 bytes in buf)

Where the first column represents the seconds and the second column represents the microseconds.

Since the LED was running at 2Hz and the camera at 48Hz, I could directly relate each LED event to a specific video frame just by looking at the images (-> is the LED on or off?). This gave me two timestamps: the first of the LED event and the second of its capture.

The delay I got out of these was always in the range between 55ms and 75ms. The variation of 20ms makes sense since this is roughly our frame time. Depending on whether I captured the LED at the beginning (longer delay) or at the end of the exposure (shorter delay) the times vary.

Camera settings were: 48FPS, -g 24, -b 6000000, -profile high

Capture latency

I was wondering: Does the 55ms minimum latency come mostly from compression or from capturing? I looked for ways to capture directly and found a nice hack here:
I did the same trick with the LED and was even lucky enough to capture this shot:


Notice that the LED is half-on, half-off? This is due to the rolling shutter effect of the sensor. By accident I captured the LED right “in the middle” of turning off. So the capture time of this shot is known to be exactly the moment the LED switched state (here we do not have the 55-75ms jitter as in the case above). The delay between event and capture in this case was 38ms. Unfortunately, the camera runs at 5Mpixel in this hacky mode. So this is not exactly the 720p FPV scenario.

Compression latency

To validate my findings from above I also timestamped the hello_encode program included in Raspbian. This gave me a compression latency of 10ms for a single 720p frame.


Although my different measurements are not completely comparable to each other, I now have a rather clear view of the latencies:

  • Capture: 40ms
  • Compression: 10ms
  • FEC-Encoding+Transmission+Reception+FEC-Decoding+Display: remaining ~50-100ms (to be confirmed)

One thing I did notice in my experiments with raspivid: it uses fwrite to output the h264 stream. Since this is usually buffered, I sometimes noticed about 6KiB being stuck in the buffering. That size is far away from a whole frame size, so it won't cause a complete frame to be stuck inside the pipeline. But nonetheless, it probably causes a small delay.
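For illustration, this is how the stuck bytes could be avoided in a raspivid-like writer (a sketch, not the raspivid source):

/* Either make the stream unbuffered so every fwrite reaches the fd at once,
 * or keep buffering and flush explicitly after each NAL unit. */
#include <stdio.h>

int main(void) {
    setvbuf(stdout, NULL, _IONBF, 0); /* option 1: no stdio buffering at all */

    const unsigned char nalu[] = { 0x00, 0x00, 0x00, 0x01, 0x65 /* ... */ };
    fwrite(nalu, 1, sizeof(nalu), stdout);

    fflush(stdout); /* option 2: flush after each NAL unit instead */
    return 0;
}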

Another thing I noticed is that the NALU header (0x00000001) is written by raspivid at the beginning of each new h264 frame. Since the decoder needs to wait until the next header to know where a frame ends, a latency of one frame is created (unnecessarily).
Maybe a variant of wifibroadcast that is directly integrated into raspivid would make sense. It could treat each whole h264 frame as an atomic block, divide the frame into packets and transmit them. Right now the grouping into blocks is done at fixed sizes, which leads to the problem that additional data gets stuck in the pipeline due to a partly filled block. I still need some time to think it over but there could be some potential here.

Enable 2.4GHz RC systems to work with 2.4GHz video systems

This post shows how to modify most 2.4GHz RC systems so that they won’t interfere anymore with 2.4GHz video transmission systems.


Ever since the 90s I have used my good old Graupner MC15 35MHz system. This always worked well, also with my first quadcopter, a “Mikrokopter”. But when I changed to my Naze32 system the RC started to become unreliable. This led to at least three crashes, which was quite annoying. Even with lots of ferrites and other filtering elements I could not get the system to work reliably anymore.

So it was time to look for alternatives. But… my video transmission based on wifibroadcast used 2.4GHz technology. It is well known that 2.4GHz RC systems interfere with WIFI and analog transmissions.

One idea to overcome this issue was to trim the frequency of the TL-WN722N card below 2.4GHz. The required kernel changes are described here. However, this approach has several disadvantages. First, the WIFI cards most likely do not have any calibration data if you leave the official channels. I tested my cards in my microwave oven and noticed a much higher packet loss rate. In fact it was so high that a wifibroadcast video transmission would not be possible. The second disadvantage is that this is illegal to use outside of a shielded cage. In summary: not an option.

The second possibility would be to buy a 2.4GHz RC system and switch to 5GHz wifi equipment for wifibroadcast. The problem here is that I did not want to buy new wifi sticks. I developed everything for the TL-WN722N and I really like them. In the past I had to test a bunch of other cards until I found the TL-WN722N… Plus, since they use 2.4GHz, they offer a more robust transmission in the presence of obstacles. So again, not an option for me.

Mh, and now? There are DIY FrSky modules available. Refer to this nice project. This could have been used with an altered frequency scheme so that not the whole 2.4GHz band is used. But… RC is the most critical part of a quadcopter. And I am not really in the mood to do experiments on this part. It’s quite some work to get the RX and TX modules to run and then I don’t know anything about the reliability of the code or the range and quality of the TX modules… The answer is the same: Not an option.

But I liked the idea of using a 2.4GHz RC system that does not use the full spectrum. I looked around and found no system that lets the user configure the spectrum. But what if we could modify an off-the-shelf system in that respect? This is what this post is about.

How most 2.4GHz RC systems work

Most RC systems (Hott, FrSky, Hitec, …) use the TI CC2500 chip for the RF communication. The chip is used in a hopping scheme: a narrow-band channel changes its frequency over the entire 2.4GHz spectrum at a high rate (100 times per second in the case of Hott). This is the trick that makes these RC systems so robust. Even if there are disturbances on some channels, chances are high that packets on other channels still come through.

The CC2500 is connected to a microcontroller via SPI and transmits or receives data under the control of the microcontroller. This is the case for the transmitter as well as for the receiver.
If you take a look at the CC2500's data sheet you'll find that this chip has a channel register. This is what is used to implement the frequency hopping: the microcontrollers on TX and RX write the same channel sequence into this register so they hop “together” through the spectrum.
So all there is to do is to change the sequence they use and limit its values.
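In principle, the firmware on both ends does something like the following (a sketch of the hopping idea, not vendor code; CHANNR at address 0x0A is the channel register per my reading of the CC2500 data sheet, while spi_write_reg() and the sequence contents are placeholders):

/* Both TX and RX step through the same pre-agreed channel sequence in
 * lock-step and write each value into the CC2500 channel register. */
#include <stdint.h>

#define CC2500_CHANNR 0x0A /* channel number register */
#define HOP_SEQ_LEN   47

extern void spi_write_reg(uint8_t addr, uint8_t value); /* placeholder */

static const uint8_t hop_seq[HOP_SEQ_LEN] = {
    12, 187, 44, 230, 95, /* ... pre-agreed pseudo-random channels ... */
};

void hop_to_next_channel(void) {
    static uint8_t idx = 0;
    spi_write_reg(CC2500_CHANNR, hop_seq[idx]);
    idx = (idx + 1) % HOP_SEQ_LEN;
}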

Ways to modify the channel register data

The most obvious way would be to change the software on the microcontrollers in the RX and TX so that only a portion of the channels is used. Unfortunately, firmware updates (in the case of Hott) are encrypted. So one would need to find the encryption algorithm, possibly the key, and reverse engineer the firmware. Things get even worse because you would have to do this twice, for both RX and TX. And again for every new firmware version. And again for other models. And again for other manufacturers. What a nightmare.

Another approach would be to alter the hardware. One would need to build a device that monitors the SPI communication between the microcontroller and the CC2500 and modifies the writes to the channel register. While this is a bit less elegant than a software-only solution, it provides one major advantage: it works for all CC2500-based systems. Regardless of the firmware version, model, manufacturer… it always works. That's why I've chosen to go down that road.

SPI injector

As said above, the SPI injector should watch for SPI writes to the channel register and overwrite its value. I've chosen to always set the MSB of the channel value to zero. This way the spectrum is always in the lower half of the 2.4GHz band.

I’ve chosen a CPLD to do the work. The reason is that it would be quite tough to get the timing right with just a microcontroller. The SPI bus runs at 4MHz so a microcontroller that just runs two or three times faster might not be able to change just a single bit at the right time.

This schematic explains nicely how the CPLD is integrated into the system:


Here you can see that the CPLD listens on the SPI lines. However, the original connection of the MOSI signal has been cut and is fed through the CPLD. Whenever the channel register is written, the CPLD zeros the MOSI signal at the right time.
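A simplified software model of the injector logic (the real implementation is synchronous CPLD logic; the header-byte layout with R/W in bit 7 and the register address in bits 5:0, and CHANNR = 0x0A, follow my reading of the CC2500 data sheet):

/* Shadows MOSI bit by bit. When a write to the channel register is seen,
 * the first data bit (the MSB of the channel value) is forced to zero,
 * which maps every hop into the lower half of the band. */
#include <stdint.h>

static int     bit_cnt = 0; /* bit index within the current SPI transfer */
static uint8_t header  = 0; /* header byte assembled from MOSI */

/* called on every SCK edge while chip select is asserted;
 * returns the (possibly modified) MOSI bit forwarded to the CC2500 */
int mosi_filter(int mosi_in) {
    int mosi_out = mosi_in;
    if (bit_cnt < 8)
        header = (uint8_t)((header << 1) | mosi_in); /* collect header byte */
    else if (bit_cnt == 8 &&
             (header & 0x80) == 0 &&  /* it is a write access... */
             (header & 0x3F) == 0x0A) /* ...to the channel register */
        mosi_out = 0;                 /* zero the MSB of the channel value */
    bit_cnt++;
    return mosi_out;
}

void cs_released(void) { bit_cnt = 0; header = 0; } /* reset per transfer */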

This is also visible in the waveforms:

The red arrow shows the position where the CPLD has overwritten one bit (DIN is original data, DOUT is modified data).

I modified a Graupner GR-12 receiver. You can see the connections on the PIC microcontroller here:


Here is a picture of the modified receiver:


If you want to apply this hack to a different receiver, you just need to follow the traces from the CC2500. Of course, you need to patch both the receiver and the transmitter (so that both agree on the channels they choose).

Checking if things work well

I needed a way to check that my changes worked and affected the spectrum. Luckily my TX (Graupner MZ-12) provides a view of the channels on which it received data.
Here you see the default (unmodified system) output where all channels are more or less available:


When I integrated the SPI injector only on the receiver side, half of the channels (the upper ones) should be deactivated. The reason for this is that TX and RX now disagree on the upper half of the channels: TX still uses the original hopping scheme while RX uses the modified scheme. The TX spectrum view agrees :)


After both devices have been modified the spectrum looks ok again (because now both devices agree on the hopping scheme):


Notice the symmetry? This is because the upper half of the channels is now remapped by the CPLD to the lower half. So it is to be expected that the spectrum diagram shows more or less the same for both halves.

How to rebuild

First, you need to clone the repository:

hg clone

In the repository you find the sources as well as a testbench for simulation, a UCF file defining the pinout, and prebuilt images.

Second, you need to buy two Xilinx XC9536XL CPLDs. These cost only around 2€ apiece.

As a next step you need to flash the XC9536XL CPLDs with the prebuilt files. There are many ways to do so: you could use the official Xilinx JTAG cable, a parallel port adapter, a Bus Pirate and many more. You find lots of information in this Hackaday howto.

The last step is to find out the traces on your TX and RX, cut the MOSI trace and wire up the CPLD. The pinout is described in the spi_mod.ucf file.


In this post I presented a cheap way to modify the spectrum of an RC system. One advantage is that only the channel scheme is modified. Since all other aspects of the RC system are untouched, you still have the good range and high reliability of the original system.
Most importantly, this modification works on all RC systems that use the CC2500 chip, regardless of brand, model or firmware version.

Forward error correction for wifibroadcast

This post introduces a new feature to wifibroadcast: forward error correction. It helps lower the data rate and/or increase the reliability of the transmission.


Early after I published wifibroadcast there were suggestions on improving the forward error correction (FEC) mechanism. While I agreed with these suggestions, my primary goal was to first create something usable to see if the idea behind wifibroadcast was worth following. After some months of testing, and with dozens of other people also having great experiences with wifibroadcast, I was confident enough to start “improving” things.

Initially I implemented a simple retransmission. Each packet got sent several times, increasing the chance that at least one packet got through. This worked sufficiently well but had some issues:

  • Bandwidth: The only parameter to increase the reliability was the retransmission rate. And this rate basically multiplied the net bandwidth. This is quite obvious: if you transmit each packet three times, you need three times the bandwidth.
  • Reliability: Although it seems unlikely that all N retransmissions of the same packet are lost, this phenomenon did occur. The explanation is quite simple: if you roll a die often enough, the likelihood of some strange combinations happening needs to be considered. For example, if you roll a million times, you can expect to see the sequence “1”-“2”-“3”-“4” nearly a thousand times (10^6 × (1/6)^4 ≈ 772). The same is true for wifibroadcast: we are sending a lot of packets, so unlikely things can easily happen.

FEC method

Some commenters suggested taking a look at udpcast, a tool that is able to transfer data over a lossy unidirectional link. And well, that is exactly what wifibroadcast does! The FEC code there was well written and easy to adopt for my use case. In addition, the code is fast enough to run on a Raspberry PI A+. In the code, Vandermonde matrices are used to perform the FEC.
The idea behind the code is very simple. Your data has to be divided into packets, and a fixed number of them are grouped together to form a block. Let's assume we have a block size of 8 packets. The FEC code now allows you to calculate an arbitrary number of FEC packets (that have the same size as the DATA packets). Let's assume we use 4 FEC packets. If a DATA packet gets lost, it can be replaced by any of the 4 FEC packets. So we can repair up to 4 lost DATA packets, regardless of which ones of the 8 are affected (a sketch of the encode side follows the list below). This leads to two important advantages:

  • Bandwidth: Since we used 4 FEC packets per 8 DATA packets in the example above, the bandwidth requirement is +50%. This could not have been achieved with retransmission, where the “smallest” bandwidth with redundancy would be +100%.
  • Reliability: Since any DATA packet can be repaired with any FEC packet, the chances of corruption drop significantly. It no longer matters which packets are corrupted, only how many.
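To make the mechanism concrete, here is a sketch of the encode side, assuming the Rizzo-style interface of the fec code that udpcast uses (the fec_new()/fec_encode() declarations are written from memory and should be checked against the actual fec.h; send_packet() is a placeholder):

/* 8 DATA packets in, 4 FEC packets out: any 8 of the 12 transmitted
 * packets are enough to reconstruct the block at the receiver. */
#define DATA_PACKETS 8
#define FEC_PACKETS  4
#define PACKET_SIZE  1024

typedef unsigned char gf;

void *fec_new(int k, int n); /* assumed signatures, see fec.h */
void fec_encode(void *code, gf *src[], gf *fec, int index, int sz);

void send_packet(const gf *pkt, int index); /* placeholder for the radio path */

void transmit_block(gf *data[DATA_PACKETS]) {
    static void *code = 0;
    if (!code)
        code = fec_new(DATA_PACKETS, DATA_PACKETS + FEC_PACKETS);

    for (int i = 0; i < DATA_PACKETS; i++)
        send_packet(data[i], i); /* DATA packets go out unmodified */

    gf fec_pkt[PACKET_SIZE];
    for (int i = 0; i < FEC_PACKETS; i++) {
        /* FEC packet i carries index k+i so the rx knows what it holds */
        fec_encode(code, data, fec_pkt, DATA_PACKETS + i, PACKET_SIZE);
        send_packet(fec_pkt, DATA_PACKETS + i);
    }
}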


The new FEC feature has been merged into the default branch of wifibroadcast (together with the new fifo interface introduced here).
To update (both on tx and rx!):

hg pull
hg update
make clean && make

The basic usage of wifibroadcast has not changed. Only some parameters have now a different meaning:

  • -b This parameter describes the number of DATA packets per block. It has more or less the same meaning as in the retransmission version.
  • -r This parameter has changed from the number of retransmissions to the number of FEC packets. Each block will be appended with -r FEC packets.
  • -f Packet size. This parameter is identical to the retransmission version in that it defines the size of a packet. The only difference is that packets now cannot be smaller than -f.
  • It is also important to note that tx and rx need to agree on -b, -r and -f. Otherwise the transmission will not work.

How to set the parameters

The values of the parameters depend on your application. There are two things to consider: the coding rate and the block length. The coding rate is defined by the ratio between DATA and FEC packets. The block length is simply given by the -b parameter.

  • Changing the coding rate while keeping the block size constant: Increasing the number of FEC packets increases the reliability of the link at the expense of higher bandwidth. A FEC ratio of 50% is roughly(!) comparable (in terms of reliability) to a retransmission rate of 2. If your block size is 8 DATA packets, this would result in 4 FEC packets.
  • Changing the block length while keeping the coding rate constant: An increased block length can increase the reliability of a link without increasing the bandwidth. However, it also increases the latency, since the transmitter has to wait until a full block of data has been gathered before the transmission can begin. Why does this increase reliability? Very often, blocks are only mildly corrupted so that not all FEC packets are needed. However, these unused FEC packets are useless for the following block. By increasing the block size you are also increasing the “range” over which the FEC packets can be applied.

Since the new FEC feature can lower the bandwidth requirements of wifibroadcast quite significantly, new WIFI modulations become usable. I've added a new patched firmware under patches/AR9271/firmware/htc_9271.fw.mcs1 which, if copied to /lib/firmware/htc_9271.fw (on tx), enables the MCS1 modulation. This gives you a 13mbit/s air data rate of which roughly 10mbit/s net are usable. So this modulation would be perfect if you have an h264 data rate of 6mbit/s with an 8/4 coding rate (resulting in a 9mbit/s bandwidth requirement).

The lower modulation rate should give you better range if you are flying at high distance or with massive occlusions.

Telemetry OSD for wifibroadcast

This post shows how to add an overlay with telemetry information onto a video stream.

Since the last flyaway of my quad, a telemetry link that transmits GPS coordinates has been high on my wish-list. Wifibroadcast was prepared a long time ago to transfer a second data stream in parallel to the video data by defining ports. But until now I did not make use of it. You can see a video of the OSD in action here:

This post describes how to set up the OSD display. Additionally, two changes required to wifibroadcast are described before the actual OSD stuff.


My quad uses a Naze32 flight controller that is able to provide FrSky telemetry data over a serial link. The serial port of the Naze32 is connected to a USB2SERIAL converter that is plugged into the tx Raspberry. The telemetry data is then transmitted using wifibroadcast over a second port in parallel to the video. On the receiving Raspberry the telemetry data is received, parsed and drawn onto the screen.

Wifibroadcast minimum packet length

Telemetry data is very different from video data in that it consists of very small packets. For example, most FrSky packets are only 5 bytes long. Using the default settings of tx, this would lead to a single packet being sent for each small telemetry data unit. This, of course, would create a high overhead since the ratio of payload to required header data is very low. To avoid this issue I added a command line parameter -m that can be used to define a minimum payload length. If given, tx waits until it has captured the minimum number of bytes and only then sends a packet.
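The behaviour boils down to a read loop like this (an illustrative sketch; the buffer size and send_wifibroadcast_packet() are placeholders, not the actual tx source):

/* Gather stdin data until at least min_packet_length bytes are available,
 * only then hand the buffer to the packet send path. */
#include <unistd.h>

#define MAX_PACKET 1450

void send_wifibroadcast_packet(const unsigned char *buf, int len); /* placeholder */

void read_and_send(int min_packet_length) {
    unsigned char buf[MAX_PACKET];
    int fill = 0;
    for (;;) {
        ssize_t n = read(STDIN_FILENO, buf + fill, sizeof(buf) - fill);
        if (n <= 0)
            break;
        fill += (int)n;
        if (fill >= min_packet_length) { /* enough payload to amortize headers */
            send_wifibroadcast_packet(buf, fill);
            fill = 0;
        }
    }
    if (fill > 0) /* flush whatever is left at EOF */
        send_wifibroadcast_packet(buf, fill);
}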

Several tx instances in parallel

As said above, wifibroadcast has had the “port” feature for quite some time. The idea was to start several instances of tx, each with a different -p parameter. I used this method for my first telemetry experiments but noticed something odd: whenever I started a second tx instance in parallel to the video tx process, the video tx was affected. This was even the case when the second tx did not send any data at all. Quite strange. The result was that the video transmission was less fluid and stuttered a bit. Quite a high price for just a bit of telemetry data…

I basically had two options: trace the cause of this issue in the kernel or adapt tx so that a single instance is able to send several data streams. I went for the latter because I think it required less effort.
The route I took was to give tx a new parameter -s that describes the number of parallel streams to transmit. If this parameter is omitted, tx behaves just like before, transmitting a single stream that is read from standard input. If however two or more streams are requested, tx changes the way it works. It then creates named FIFOs under /tmp. For example, if -s 2 is given, tx creates two named FIFOs called /tmp/fifo0 and /tmp/fifo1. Everything that gets written into fifo0 will be transported over the first port and everything written into fifo1 gets transported over the second port. Quite simple. The actual port number can be influenced by the -p parameter. It serves as an offset: assuming that -p 100 is given, fifo0 sends on port 100 and fifo1 on port 101.
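A sketch of that mechanism (illustrative only; error handling is omitted and send_on_port() is a placeholder for the actual packet path):

/* Create one named FIFO per stream under /tmp and multiplex them with
 * poll(); data arriving on /tmp/fifo<i> is sent on port base+i. */
#include <stdio.h>
#include <fcntl.h>
#include <poll.h>
#include <sys/stat.h>
#include <unistd.h>

#define MAX_STREAMS 8

void send_on_port(int port, const unsigned char *buf, int len); /* placeholder */

int run_streams(int num_streams, int port_base) {
    struct pollfd fds[MAX_STREAMS];
    char path[32];
    for (int i = 0; i < num_streams; i++) {
        snprintf(path, sizeof(path), "/tmp/fifo%d", i);
        mkfifo(path, 0666); /* ok if it already exists */
        fds[i].fd = open(path, O_RDONLY | O_NONBLOCK);
        fds[i].events = POLLIN;
    }
    for (;;) {
        if (poll(fds, num_streams, -1) < 0)
            return 1;
        for (int i = 0; i < num_streams; i++) {
            if (!(fds[i].revents & POLLIN))
                continue;
            unsigned char buf[1450];
            ssize_t n = read(fds[i].fd, buf, sizeof(buf));
            if (n > 0)
                send_on_port(port_base + i, buf, (int)n); /* -p is the offset */
        }
    }
}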

The next section will show a usage example of this new mode that should make things clearer.

This feature is not yet in the master branch since I have not yet had the time to test it well. But after my first successful flight I will merge it into the master.

Putting things together

These commands assume that you have already installed wifibroadcast (as described here).

Transmitting side

First, change to the “tx_fifo_input” branch in wifibroadcast:

cd wifibroadcast
hg pull
hg update tx_fifo_input

Next, start a tx instance in the background with a retransmission block size of 4, a minimum packet size of 64 bytes and two parallel streams:

sudo ./tx -b 4 -m 64 -s 2&

All that is needed is to connect the data sources to the named FIFOs:

sudo su
stty -F /dev/ttyUSB0 -imaxbel -opost -isig -icanon -echo -echoe -ixoff -ixon 9600 #set up serial port

raspivid -ih -t 0 -w 1280 -h 720 -fps 30 -b 4000000 -n -g 60 -pf high -o - > /tmp/fifo0 & #video
cat /dev/ttyUSB0 > /tmp/fifo1 & #telemetry

Now video and telemetry data is transmitted in parallel.

Receiving side

The video reception can be left unchanged (refer to here) and should work out of the box.

To get the OSD working, we first need to check out and build my OSD project:

hg clone
cd frsky_omx_osd

And finally you need to pipe the telemetry data into the OSD viewer (note the -p 1 for the telemetry port):

cd wifibroadcast
sudo ./rx -p 1 -b 4 wlan0 | /home/pi/frsky_omx_osd/frsky_omx_osd /home/pi/frsky_omx_osd

Of course, you can also save the telemetry data using the “tee” command.

In case you just want to test the OSD without the whole wifibroadcast stuff there is a FrSky telemetry log included in the repository:

./frsky_omx_osd . < testlog.frsky


The OSD works well for me. But be warned: Things are still a bit hacky around here. I wouldn’t guarantee that my FrSky protocol interpretations are always right. No wonder seeing how that protocol is designed. It is a perfect example of what happens if companies outsource their coding tasks to mental institutions ;)
It should be easy though to replace the frsky.c parser with a different one (also for a different protocol). Volunteers are welcome :)

Diversity for wifibroadcast

This post describes a new feature of wifibroadcast: Software diversity.

Update 06/02/2015: The new diversity code from 06/01 worked fine on my desktop PC but showed some rare problems on a Raspberry A+. The TX_RESTART was sometimes triggered by out-of-order packets (with a block distance of 10 blocks!). Due to that I increased the threshold for the TX_RESTART trigger. The downside is that the empirical tuning of the -d parameter does not work anymore. So I'll have to write a tool that determines the -d value automatically…

Update 06/01/2015: I’ve added a new parameter to wifibroadcast for diversity. With -d you can define the number of retransmission blocks that will be kept as a window buffer. The parameter defaults to one (which minimizes latency). However, if you use diversity with wifi adapters of different types, they might deliver their data at different times. The -d parameter should be set so that the window buffer spans over the time difference between the two adapters.
The empirical approach of finding the right setting is simple: If -d is too low, you should see often the message “TX RESTART”. Increase the -d parameter until this message disappears and you have found the correct setting.

An active commenter on this blog, malkauns, has started development on software support for diversity. The general idea is: You use more than one WIFI receiver and combine the received data. Due to multipath and other effects the probability that you receive a single packet correctly increases with each added receiver. Together with malkauns I integrated this feature into the default branch of wifibroadcast.
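The core of the idea can be sketched as a duplicate filter (illustrative only; the real code works on wifibroadcast's retransmission blocks rather than a flat sequence number):

/* The first correctly received copy of a packet wins; later copies of the
 * same sequence number arriving via the other adapter are dropped. The
 * window must cover the worst-case skew between the adapters. */
#include <stdint.h>

#define WINDOW 256

static uint32_t seen[WINDOW];  /* last sequence number stored in each slot */
static int      valid[WINDOW];

/* returns 1 if the packet is new and should be processed, 0 if duplicate */
int accept_packet(uint32_t seq) {
    int slot = (int)(seq % WINDOW);
    if (valid[slot] && seen[slot] == seq)
        return 0; /* already delivered by another adapter */
    seen[slot] = seq;
    valid[slot] = 1;
    return 1;
}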


The usage is extremely simple: Just call wifibroadcast with more WIFI adapters:

./rx wlan0 wlan1

Please note that currently you need two identical adapters. Different types can introduce a delay between the received data streams that currently cannot be processed by wifibroadcast. In this case the system works but you gain little advantage from diversity.

First measurements

My first experiments showed that the reception quality increases drastically. I tested the feature in a noisy environment with lots of surrounding networks. In every case the packet drop rate was at least one order of magnitude lower when using two adapters. In 90% of the cases it was even better by two orders of magnitude.

Conclusion and outlook

The results are very promising. I think in the future I'll use diversity on all of my flights. Since the TL-WN722N is very cheap, this feature should be useful to many adopters. There are still some open issues with diversity that need to be solved. The support of different adapter types in parallel is one thing that is planned. Also, there is possibly some potential for smarter FEC in this case. Currently the diversity works only on a per-packet basis. It may be a good idea to dive into the packets' content. We'll see :)

