
Diversity for wifibroadcast

May 24, 2015

This post describes a new feature of wifibroadcast: Software diversity.


Update 06/02/2015: The new diversity code from 06/01 worked fine on my desktop PC but showed some rare problems on a Raspberry Pi A+. The TX_RESTART was sometimes triggered by out-of-order packets (with a block distance of 10 blocks!). Because of that I increased the threshold for the TX_RESTART trigger. The downside is that the empirical tuning of the -d parameter no longer works. So I’ll have to write a tool that determines the -d value automatically…

Update 06/01/2015: I’ve added a new parameter to wifibroadcast for diversity. With -d you can define the number of retransmission blocks that will be kept as a window buffer. The parameter defaults to one (which minimizes latency). However, if you use diversity with wifi adapters of different types, they might deliver their data at different times. The -d parameter should be set so that the window buffer spans over the time difference between the two adapters.
The empirical approach to finding the right setting is simple: if -d is too low, you will often see the message “TX RESTART”. Increase the -d parameter until this message disappears and you have found the correct setting.

An active commenter on this blog, malkauns, has started development on software support for diversity. The general idea is: You use more than one WIFI receiver and combine the received data. Due to multipath and other effects the probability that you receive a single packet correctly increases with each added receiver. Together with malkauns I integrated this feature into the default branch of wifibroadcast.

Usage

The usage is extremely simple: Just call wifibroadcast with more WIFI adapters:

./rx wlan0 wlan1

Please note that currently you need two identical adapters. Different types can introduce a delay between the received data streams that wifibroadcast currently cannot compensate for. In that case the system works, but you gain little advantage from diversity.

First measurements

My first experiments showed that the reception quality increases drastically. I tested the feature in a noisy environment with lots of surrounding networks. In every case the packet drop rate was at least one order of magnitude lower when using two adapters. In 90% of the cases it was even better by two orders of magnitude.

Conclusion and outlook

The results are very promising. I think in the future I’ll use diversity on all of my flights. Since the TL-WN722N is very cheap, this feature should be useful to many adopters. There are still some open issues with diversity that need to be solved. Support for different adapter types in parallel is one thing that is planned. There is also some potential for smarter FEC here: currently, diversity works only on a per-packet basis, and it may be a good idea to dive into the packets’ content. We’ll see 🙂


37 Comments
  1. malkauns permalink

    I had a few issues with the latest code so I made some refinements. The first issue is of course if 2 adapters capture at different speeds. To solve this problem I have introduced a cache that will always write out the earliest block (by which time all adapters should have had a chance to provide data). This *may* introduce lag depending on how large the cache is. I’ve set it to 5 blocks and do not notice any additional lag. The other issue is that when the faster adapter loses packets or doesn’t receive data at all, the image becomes garbled even though the slower adapter is receiving perfect data. I tried to handle this in the single-threaded structure provided in the code, but somehow doing this in a single thread still gave bad results. For some bizarre reason packet loss on 1 adapter caused packet loss on the other. Moving to 1 thread per adapter solved this problem and we now have *true* diversity where one adapter fills in for the other.

    code here: http://pastebin.com/nSHbsiXh

    Let me know what you think.

    • Great work! I haven’t had the chance to look at the code in depth but I’ll do so soon! Before that, a few words to explain the behaviour you were seeing: there seems to be a bug in libpcap when using it with select (refer to https://github.com/the-tcpdump-group/libpcap/issues/380 ). The commented-out code in my main loop runs exactly into this issue. To overcome it, I’ve disabled select and put the devices into blocking mode. This way a device that does not receive data blocks the other device, and that seems to be what you were experiencing. One way around this is to reduce blocking by lowering the timeout. This works well but produces some unnecessary CPU load. Another (and more elegant) solution is your multi-thread approach. When I have the time I would still like to test the workaround suggested in the link above. I’m a big fan of select 🙂 I’ll post back when I’ve tried out that fix and of course also your code!

      • malkauns permalink

        Some refinements: http://pastebin.com/BDBUKJKf

        Tested this with 4 adapters simultaneously and it’s working very well. I get a very clean feed even on channel 6, which is noisy where I am. Turning on my Taranis, however, is a different story, but there’s nothing I can do about that until I start using my DTF UHF gear, which will be happening soon. 🙂

      • Thanks for sharing! Question concerning line 235: Does this corruption also occur if you checked that checksum_correct is true? This might indicate that there is a bigger bug hiding somewhere.

  2. malkauns permalink

    I’ll have to do more digging to answer that question. One thing I changed though is that I do not write anything to packet_buffer_list unless the checksum is correct. This made a major improvement to stream quality when using diversity.

    • Nicolas permalink

      That’s an interesting observation. I would have suspected that letting defective packets through gives better results, because my thinking was that a bit error here and there doesn’t make a big difference.

      Now the question is, do the defective packets have so many bit errors that they’re basically ‘just garbage’ or is it that the h264 implementation in the pi can cope better with no data than with wrong data? Or a combination of both 🙂

      • My experience was that defective packets do help improve the video (although that was not quantified). However, it is very important to ensure that correctly received packets are not overwritten by a following defective packet. If this check is missing, then defective packets do more harm than good. Malkauns, did you have such a check in your code?

  3. malkauns permalink

    Finally got my DTF UHF gear on my quad to avoid 2.4GHz interference, and there is a major improvement in the video quality and range, as expected. Here’s a short video:

    I’m using 2 Alfa AWUS051NH and 1 TP-LINK TL-WN722N for receiving the video feed. Diversity appears to be working well. The only problem I have now is that the camera on the Pi appears to cut off randomly when flying. I have to do some debugging to find out why this is. For the future I plan to use better antennas for longer range (using only the stock antennas that came with the cards at the moment). Any suggestions? I’m not planning to go out miles. I just want a clear, uninterrupted video feed within, say, a 100-200 meter radius.

    • Jozsef Voros permalink

      The linked video (actual FPV with diversity) is a bit jumpy, maybe missing frames?
      Is it just because of the YouTube conversion, or is the original the same too?
      If it’s really jumpy, what is the reason? The h.264 encoder? Or the low bitrate? The transmission seems lossless, since I can’t see those kinds of artifacts. However, the picture quality is below 720p: not only is smoothness a problem, but there is also loss of detail in low-contrast areas. Not sure if it’s a poor compression algorithm, low bitrate, or the camera module itself?

      • malkauns permalink

        Yes, the jumpiness is because of the skipped frames. I guess those are moments where not enough data was captured over the air to make up a frame. In reality there is a slight half-second pause when it skips like that. YouTube does do terrible damage to the video when it is converted into whatever format they use. The framerate of the original video is smooth like butter, minus the skipped frames. FYI, here’s my raspivid command line:

        raspivid -ih -t 0 -w 1280 -h 720 -fps 40 -b 3500000 -n -g 5 -o -

        I’m not sure what I can do about the skipped frames. Let me know if you have any ideas. I’ll post the original h264 files next time so you can see what I’m talking about.

  4. Nicolas permalink

    Not sure if circular polarized antennas also work better with digital Wifi transmission. Did some googling, but couldn’t find any comparison. Assuming that it also works better with wifi and you want it cheap, I’d do the following:

    Here are the dimensions of the Circular wireless CPatch12 Antenna. A few people have used those dimensions and scaled them down to 2.4Ghz which seemed to work for them. You just need some copper-plated PCBs, RP-SMA pigtails, some spacers/screws, a soldering iron and an exacto knife for removing the copper.

    lag.lt/cpatch12-dimensions-and-details/

    This antenna has a 90-degree beam and 8.6 dBi gain. Build five of those antennas and arrange them like a cube so that you have every direction covered. This should give a high-gain “virtual” omni antenna without the annoying top null, so that you can fly around yourself in every direction and even directly above you.

    If there’s enough space (around 6x6x6cm) inside the cube, you could put the pi, wifi sticks and battery in the middle to have a neat and compact groundstation.

    For the transmit antenna, I’d use a cloverleaf.

  5. Jozsef Voros permalink

    Considering repeated packets and/or diversity, there are usually concurrent packets carrying the same information. What happens if there are several received versions of a given packet, but all of them have a wrong checksum? How do you pick the “better” one? Or how can all of them be used in order to recover the most correct bits possible?
    Probably higher-redundancy FEC is a better use of the bandwidth than repeating, and short packets are better in the case of diversity and a moving TX.

    • Nicolas permalink

      I think it’s impossible to really know which packet has fewer bit errors when more than one has a wrong checksum. One could implement some kind of more sophisticated checksumming in software, but I guess that’s not worth the effort. It would also consume bandwidth and processing power, since it would be done in the payload and in software.

      But maybe one could look at the 802.11 header and choose the packet with highest RSSI or SNR, that should in most cases be the one with the least bit errors.

      I also think going with smaller packets is better in the case of diversity. This way more good data will get through, because a single error will ‘affect’ less data. I.e. if you have an error in the first 500 bytes of a 1500-byte packet, the whole packet will be considered bad. With a 500-byte max packet size that would have been two good packets and only one bad one. Of course there is more overhead then, but since the overhead is very low anyway (no IP/UDP is used), I think that is okay.

  6. malkauns permalink

    Can you explain what you mean by “higher redundancy FEC”. It would be good if there is a way of prioritizing what is re-transmitted (eg. critical parts of h264 frames) in order to avoid total video loss.

    • Nicolas permalink

      I wrote about that on the main wifibroadcast article already. Right now, as far as I understood, wifibroadcast sends packets twice and in blocks. This is quite a waste of bandwidth and far from ideal. Much better would be a ‘real’ FEC/interleaving implementation like udpcast uses in async mode. This should give a much more stable picture and still use less bandwidth than the current ‘send everything twice’ implementation. The saved bandwidth could be used for a higher-bitrate stream and better quality, or the same bitrate at a lower wifi bitrate for even better stability/range.

      To avoid confusion: I am talking about FEC/interleaving on the higher OSI layers here, not the FEC that is already implemented in layer 1; that alone is not sufficient.

      I did some initial tests with udpcast now (over a wired interface; the wifi monitor interface of course doesn’t work). I can pipe raspivid into udpcast and display the video with the modified hello_video tool on the other Pi. Next I will try to simulate some packet loss and packet corruption with Linux traffic control (tc) and see how it copes with that.

      https://www.udpcast.linux.lu/satellite.html

      • Martin permalink

        Nicolas,
        Could you share the pipeline you used with udpcast and the modified hello_video?
        I have used iptables to simulate packet loss. Using cyclic intra refresh also helps to cope with packet loss, because the affected area is smaller.

  7. malkauns permalink

    I made some changes to the code to allow corrupted blocks through if there were no correct packets that passed the CRC check. I have to do more testing, but I think it has improved the smoothness of the stream a bit with respect to missing h264 data. Now, instead of the video freezing for half a second or so while it is missing data, it glitches instead (I’d rather see moving garbage than nothing at all). I know this was the intent of the original code, but it was not working with diversity for me. Here is a video:

    raw h264: https://www.dropbox.com/s/ct6eqxdz9qe10ap/stream01.h264?dl=0

    The glitches in the above video appear worse than they were in real life: instead of half the screen being damaged, it was actually a lot less.

    Latest refined code: http://pastebin.com/ykKtWPVM

    • Hi malkauns

      If you want, you could try the current master of wifibroadcast. In there I’ve added a -d flag which allows you to set the window buffer size in units of “retransmission blocks”. wifibroadcast now also has a printout (TX RESTART) that helps you find the right value for -d (to minimize latency). You can take a look at the updated diversity blog post and try it if you like (I only have two identical adapters 😉)

      • malkauns permalink

        Thanks, I’ll take a look when I get a chance. 🙂

  8. Sorry for crossposting (ECC, error correction codes, https://befinitiv.wordpress.com/2015/02/26/improved-retransmission-scheme-and-multiplexing/#comment-407 ). DVB COFDM uses ECC. I tested the DJI LightBridge video and telemetry transceiver, and with the help of COFDM it sends HD video for 2km (in a city) with just a 100 mW 2.4GHz-band transmitter powering 2 cheap omnidirectional antennas. Btw, LightBridge sends packets in parallel, using all locally available Wi-Fi channels 🙂

  9. Roberto Sale permalink

    Hi! I was trying to track down what could be causing my huge delay problem.
    My interface rate using cat /dev/urandom | sudo ./tx -f 1024 mon0 is about 2000.000

    My interface rate using gst-launch-1.0 -v v4l2src ! ‘video/x-raw, width=640, height=480, framerate=30/1’ ! omxh264enc target-bitrate=500000 control-rate=variable periodicty-idr=10 ! h264parse ! fdsink | sudo ./tx -r 2 mon0 is about 54.000

    Why is it so slow? Is it because of the encoding?

    Thanks!

    • malkauns permalink

      install pv and then run:

      gst-launch-1.0 -v v4l2src ! ‘video/x-raw, width=640, height=480, framerate=30/1′ ! omxh264enc target-bitrate=500000 control-rate=variable periodicty-idr=10 ! h264parse ! fdsink | pv > /dev/null

      This will show you what speed you’re getting data from gstreamer.

      • Roberto Sale permalink

        Hi malkauns, thanks for the reply. It gives me an error:

        pi@raspberrypi ~/fpv/wifibroadcast gst-launch-1.0 -v v4l2src ! ‘video/x-raw, width=640, height=480, framerate=30/1′ ! omxh264enc target-bitrate=500000 control-rate=variable periodicty-idr=10 ! h264parse ! fdsink | pv > /dev/null
        WARNING: erroneous pipeline: no element “video”
        0B 0:00:00 [ 0B/s] [

  10. Roberto Sale permalink

    It was the stray quote character after “framerate=30/1′ ”. The speed is around 20kB/s, which is really slow. Is that expected?
    Thanks!
    Thanks!

    • malkauns permalink

      Oh, your bitrate is low and your resolution is only 640×480, so 20kB/s may be about right. I use 4500000 for my bitrate now and set my resolution to 1280×720. That gives me between 400-600kB/s.

  11. Roberto Sale permalink

    Hi. I was testing the encoding and recorded a video. I’m using this command:

    gst-launch-1.0 -v v4l2src ! ‘video/x-raw, width=640, height=480, framerate=30/1′ ! omxh264enc target-bitrate=500000 control-rate=variable periodicty-idr=10 ! h264parse ! fdsink | /opt/vc/src/hello_pi/hello_video/hello_video.bin

    That was to try only the encoding and decoding, and as you can see in this video:

    it’s really slow!
    Later, I streamed MJPEG data from my camera (which supports it) using gstreamer with a udpsink pipe, and the delay is <100ms for sure.

    So I tried to stream data with your ./tx program using fdsink, and on the rx side with:

    sudo ./rx mon0 | gst-launch-1.0 fdsrc ! jpegdec ! autovideosink sync=false

    And it worked… kind of. The injection rate is really slow, 100.000. The default firmware of my wn722n has the injection rate fixed, and the one provided by you isn't working for me.
    The other problem I have is that eventually this error comes up:

    ERROR: from element /GstPipeline:pipeline0/GstFdSrc:fdsrc0: Internal data flow error.
    Additional debug info:
    gstbasesrc.c(2865): gst_base_src_loop (): /GstPipeline:pipeline0/GstFdSrc:fdsrc0:
    streaming task paused, reason error (-5)
    Execution ended after 0:00:14.815057627

    and the player stops. But at least I have a clue: MJPEG is the only way to do streaming with my webcam. If I can't reach the data rate needed for MJPEG streaming, I will have to buy a Raspberry Pi camera, which here costs around 80 dollars :/

    Do you have any ideas to make this work with my webcam?

    Thanks a lot!

  12. Hi

    Would the following also work to replace a Pi and Pi camera?
    http://www.banggood.com/BPI-D1-Open-source-HD-Mini-IP-Camera-Wifi-Module-For-Banana-Pi-p-965977.html

    It would make the setup needed on the quad/plane very small.

    Very interesting work, Awesome!!
    Gerrit

    • Martin permalink

      That’s a very interesting module. The camera is not as good as the RPi’s though.
      It seems the documentation is a bit sketchy, but it should get better as more people start using it.

    • Does it run Linux on the MCU? Is there a chance to break into a console, or see dmesg output?

  13. Tonight I’m talking to myself, as the module is way too interesting 🙂 But it is not cheap. For an additional $20 there are functional IP-camera modules with two PCBs, a sensor module with a CS lens, embedded Linux, networking, GPIOs, etc.

    Here is a good description of the BPI-D1 board: http://www.cnx-software.com/2014/11/13/linux-based-bpi-d1-hd-camera-module-features-anyka-ak3918-arm9-processor/

    Btw, they write that it boots Linux from an SD card. It may be useless without one loaded with a proper Linux image.

    Specs for AK3918 soldered in BPI-D1: http://blog.chrobi.me/wp-content/uploads/2015/01/AK3918-HD-IP-Camera-SoC-Specification.pdf

    It seems possible (but VERY time consuming) to port the Ardupilot code to this MCU and fly it. All the needed sensors (IMU, compass, GPS, sonar) can be connected via SPI.

  14. Would you accept a patch for tx diversity as well? I want to use a 2.4GHz and a separate 5Ghz tx to reduce problems in crowded spectrum environments.

    • That doesn’t sound too complicated. Are you on Bitbucket? Then you could send me a pull request and I can take a look into it.

      • I’m not on bitbucket, but I will have a look at the weekend.

  15. Has anyone tried diversity on the transmit side to reduce latency?
    For example: use two or more cards on different Wifi channels to send data to a receiver with the same number of receiving wifi cards.

    By splitting the wifibroadcast stream it would be possible to reduce transmission latency, which depends on the number of channels.

    Assume we have a stream at 3Mbps and 30 fps; the latency over a single slow stream would equal 33 ms. But if we broadcast over two channels (both at 3Mbps), the broadcast latency itself should drop to 16.6 ms.

    Thus if someone gets 80 ms at the moment, using two cards it may be possible to reach 80 ms - 16.6 ms = 63.4 ms.

    • malkauns permalink

      In my experience the transmission only adds about 20-30ms of lag. So this would only be a 15ms reduction in latency at most. Feel free to perform your own experiments though. 🙂

  16. Great work, guys. Could you tell me how to implement this in the V0.4 img? I am a little lost on how to add this feature but would like to use it.

Trackbacks & Pingbacks

  1. Wifibroadcast Makes WiFi FPV Video More Like Analog | Hackaday
