
Forward error correction for wifibroadcast

July 19, 2015

This post introduces a new feature of wifibroadcast: forward error correction. This helps lower the data rate and/or increase the reliability of the transmission.

Introduction

Soon after I published wifibroadcast there were suggestions on improving the forward error correction (FEC) mechanism. While I agreed with these suggestions, my primary goal was to first create something usable to see if the idea behind wifibroadcast was worth following. After some months of testing, and dozens of other people also having great experiences with wifibroadcast, I was confident enough to start to “improve” things.

Initially I implemented a simple retransmission scheme. Each packet got sent several times, increasing the chance that at least one copy got through. This worked sufficiently well but had some issues:

  • Bandwidth: The only parameter to increase the reliability was the retransmission rate. And this rate basically multiplies the net bandwidth. This is quite obvious: If you transmit each packet three times, you need three times the bandwidth.
  • Reliability: Although it seems unlikely that all N retransmissions of the same packet are lost, that phenomenon did occur. The explanation is quite simple: If you roll a die often enough, the likelihood of some strange combinations happening needs to be considered. For example, if you roll a million times, you can expect to see the sequence “1”-“2”-“3”-“4” nearly a thousand times. The same is true for wifibroadcast. We are sending a lot of packets, so unlikely things can easily happen.
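
To put rough illustrative numbers on this (assuming independent packet losses): with a per-packet loss rate of 5% and each packet sent three times, the probability that all three copies are lost is 0.05³ ≈ 1.25·10⁻⁴. At a rate of 1000 packets per second that still amounts to one unrecoverable packet roughly every 8 seconds.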

FEC method

Some commentators suggested taking a look at udpcast, a tool that is able to transfer data over a lossy unidirectional link. And well, that is exactly what wifibroadcast does! The FEC code there was well written and easy to adapt to my use case. In addition, the code is fast enough to run on a Raspberry Pi A+. The code uses Vandermonde matrices to perform the FEC.
The idea behind the code is very simple. Your data has to be divided into packets, and a fixed number of them are grouped together to form a block. Let’s assume we have a block size of 8 packets. The FEC code now allows you to calculate an arbitrary number of FEC packets (that have the same size as the DATA packets). Let’s assume we use 4 FEC packets. If a DATA packet now gets lost, it can be replaced by any of the 4 FEC packets. So we are able to repair up to 4 lost DATA packets, regardless of which of the 8 are affected. This leads to two important advantages:

  • Bandwidth: Since the example above uses 4 FEC packets per 8 DATA packets, the bandwidth requirement is +50%. This could not have been achieved with retransmission, where the “smallest” bandwidth with redundancy would be +100%.
  • Reliability: Since any DATA packet can be repaired with any FEC packet, the chance of corruption drops significantly. It no longer matters which packets are corrupted, only how many.
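
A small C program can quantify both points (a back-of-the-envelope sketch, assuming independent packet losses; this is an illustration, not code from wifibroadcast):

#include <stdio.h>
#include <math.h>

/* Probability that a block of b DATA plus r FEC packets cannot be
 * fully reconstructed: more than r of the n = b + r packets lost,
 * with independent per-packet loss probability p. */
static double block_failure(int b, int r, double p)
{
    int n = b + r;
    double fail = 0.0;
    for (int k = r + 1; k <= n; k++) {
        double c = 1.0; /* binomial coefficient C(n, k) */
        for (int i = 0; i < k; i++)
            c = c * (n - i) / (i + 1);
        fail += c * pow(p, k) * pow(1.0 - p, n - k);
    }
    return fail;
}

int main(void)
{
    /* 8/4 coding (+50% bandwidth) vs. sending every packet twice (+100%) */
    printf("8/4 FEC, 10%% loss: block failure %g\n", block_failure(8, 4, 0.1));
    printf("2x retransmission, 10%% loss: packet failure %g\n", 0.1 * 0.1);
    return 0;
}

With 10% packet loss, the 8/4 block fails with a probability of roughly 0.4%, while plain double transmission still loses about 1% of all packets despite using twice the redundancy.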

Usage

The new FEC feature has been merged into the default branch of wifibroadcast (together with the new fifo interface introduced here).
To update (both on tx and rx!):

hg pull
hg update
make clean && make

The basic usage of wifibroadcast has not changed. Only some parameters now have a different meaning:

  • -b This parameter describes the number of DATA packets per block. It has more or less the same meaning as in the retransmission version.
  • -r This parameter has changed from the number of retransmissions to the number of FEC packets. Each block will be appended with -r FEC packets.
  • -f Packet size. This parameter is identical to the retransmission version in that it defines the size of a packet. The only difference is that packets now cannot be smaller than -f.
  • It is also important to note that both tx and rx need to agree on -b, -r and -f. Otherwise the transmission will not work. An example invocation is shown below.
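
For example, an 8/4 coding with 1024-byte packets could look like this (illustrative command lines; adjust the interface name and the video pipeline to your setup, as in the comments further down):

raspivid -t 0 -w 1280 -h 720 -fps 48 -b 4000000 -o - | sudo ./tx -b 8 -r 4 -f 1024 wlan0
sudo ./rx -b 8 -r 4 -f 1024 wlan0 | /opt/vc/src/hello_pi/hello_video/hello_video.bin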

How to set the parameters

The values of the parameters depend on your application. There are two things to consider: the coding rate and the block length. The coding rate is defined by the ratio between DATA and FEC packets. The block length is simply given by the -b parameter.

  • Changing the coding rate while keeping the block size constant: Increasing the number of FEC packets increases the reliability of the link at the expense of higher bandwidth. A FEC ratio of 50% is roughly(!) comparable (in terms of reliability) to a retransmission rate of 2. If your block size is 8 DATA packets, then this would result in 4 FEC packets.
  • Changing the block length while keeping the coding rate constant: An increased block length can increase the reliability of a link without increasing the bandwidth. However, this increases the latency, since a full block of data has to be gathered before the transmission can begin. Why does this increase reliability? Very often, blocks are only mildly corrupted, so that not all FEC packets will be needed. However, these unused FEC packets are useless for the following block. By increasing the block size you are also increasing the “range” over which the FEC packets can be applied (see the numbers below).
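
To illustrate with the block_failure() sketch from above (again assuming independent losses at 10%): an 8/4 coding fails on roughly 4 out of 1000 blocks, while a 16/8 coding, with exactly the same +50% bandwidth overhead, fails on only about 3 out of 10000 blocks. Doubling the block length buys roughly an order of magnitude in reliability, paid for in latency.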

Since with this new FEC feature the bandwidth requirements of wifibroadcast can be lowered quite significantly, new WIFI modulations are now usable. I’ve added a new patched firmware under patches/AR9271/firmware/htc_9271.fw.mcs1 that, if copied to /lib/firmware/htc_9271.fw (on tx), enables the MCS1 modulation. This gives you a 13mbit/s air data rate, of which roughly 10mbit/s net are usable. So this modulation would be perfect if you have an h264 data rate of 6mbit/s with an 8/4 coding rate (resulting in a 9mbit/s bandwidth requirement).
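
The general rule behind that calculation: required link bandwidth = video bitrate × (DATA + FEC) / DATA. Here: 6mbit/s × (8 + 4) / 8 = 9mbit/s, which fits into the roughly 10mbit/s net capacity of MCS1.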

The lower modulation rate should give you better range if you are flying at long distances or with massive occlusions.



51 Comments
  1. Nicolas permalink

    Hi, really cool that you released it so quickly 🙂

    I have a question regarding the FEC implementation and packets with wrong FCS:
    As far as I understand, wifibroadcast uses packets with a wrong FCS (if “fcsfail” is set via iwconfig) and forwards their payload to hello_video.bin, because a few bit errors won’t be noticeable and are far better than the whole ~1500 byte packet missing.

    Now I’m wondering about the new FEC implementation and what will happen if a packet with a wrong FCS (and thus corrupt payload) is being fed through the FEC receiver. I guess it will be treated as corrupt and be reconstructed by use of the FEC packets (?) So far no problem.

    But what happens if it can’t be reconstructed? Will the payload still be extracted and sent to hello_video.bin? Or will the whole packet be dropped? I think the usual working assumption in packet based networks and computers in general is that it’s better to deliver nothing instead of corrupted data. So I’d suspect the udpcast FEC implementation will drop the packet, which would be detrimental in this type of application (?)

    Maybe you could shed some light on this, befinitiv.

    Oh, and one more thing I’d like to add:
    I think especially with diversity, interleaving and FEC, small packet lengths are becoming more and more beneficial, because more “good” data will get through in case of lost or corrupted packets. Given the fact that wifibroadcast uses only very little overhead per packet because there are no IP/UDP/RTSP headers, I’d say an optimal packet length will probably be well below 500 bytes. DVB-T for example uses 288 bytes, so maybe around that number is a good starting point for further testing…

    cheers!

    • Hi Nicolas

      I agree with every point you made. Maybe a bit more detail will answer your questions.

      I’ve tried to write the receiver so that it gets the maximum out of the data. The repair strategy has 3 steps:

      1) Repair lost DATA packets with good (-> CRC correct) FEC packets
      2) If we still have good FECs left, we repair DATA packets with a wrong CRC
      3) Being desperate: If there are still lost DATA packets and we have some FECs with a broken CRC, we insert them into the lost DATA slots

      Step 1 should be self-explanatory: Lost packets are the worst case, so they have preference.
      Step 2 is also the best thing to do, since we try to repair the block the “right” way.
      The effects of step 3 are still not quite clear to me. But to look at the extremes: If we skip step 3, we lose a full packet of data at the output. On the other end of the spectrum, if the FEC produces complete nonsense, then we have one packet of nonsense in the data stream. My guess would be that both cases are more or less equally bad for the video quality. But the latter one leaves open the possibility that the FEC does a sensible job in those cases… I think a deeper knowledge of the FEC code is needed to answer this.
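
      In code, the priority order looks roughly like this (an illustrative sketch with hypothetical names, not the actual rx.c; the struct assignment merely stands in for the real FEC decode step):

      struct pkt {
          int received; /* did this packet arrive at all?        */
          int crc_ok;   /* did it arrive with a correct FCS/CRC? */
          int used;     /* FEC packet already consumed?          */
      };

      /* Find the next unused, received FEC packet; optionally require
       * a correct CRC. Returns -1 if none is left. */
      static int next_fec(struct pkt *fec, int nr_fec, int need_good_crc)
      {
          for (int f = 0; f < nr_fec; f++)
              if (fec[f].received && !fec[f].used &&
                  (!need_good_crc || fec[f].crc_ok))
                  return f;
          return -1;
      }

      static void repair_block(struct pkt *data, int nr_data,
                               struct pkt *fec, int nr_fec)
      {
          int f;
          /* Step 1: fill slots of lost DATA packets with good FEC packets. */
          for (int i = 0; i < nr_data; i++)
              if (!data[i].received && (f = next_fec(fec, nr_fec, 1)) >= 0) {
                  data[i] = fec[f];  /* placeholder for the FEC decode */
                  fec[f].used = 1;
              }
          /* Step 2: use remaining good FECs for DATA with a broken CRC. */
          for (int i = 0; i < nr_data; i++)
              if (data[i].received && !data[i].crc_ok &&
                  (f = next_fec(fec, nr_fec, 1)) >= 0) {
                  data[i] = fec[f];
                  fec[f].used = 1;
              }
          /* Step 3 (desperate): plug remaining holes with broken-CRC FECs.
           * Unfixable DATA packets with a broken CRC simply stay as-is. */
          for (int i = 0; i < nr_data; i++)
              if (!data[i].received && (f = next_fec(fec, nr_fec, 0)) >= 0) {
                  data[i] = fec[f];
                  fec[f].used = 1;
              }
      }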

      Ah, and the fcsfail packets still help a lot with the new FEC code. They possibly help in step 3, but definitely in the case where a fcsfail DATA packet could not be replaced by a good FEC packet. In this case the DATA packet with the broken CRC will be used on the output.

      You are also right that with the FEC code the optimal packet size is somewhere below 1500 bytes. There are two limiting factors for this:

      1) Ratio between payload and overhead (as usual)
      2) CPU performance at the TX

      My tests showed that point 2 is the problem that occurs first. Currently I am using packets of 1024 bytes with an 8/4 coding. But I would really prefer using just 512 byte packets with a 16/8 coding. However, my Raspberry A+ couldn’t do that. Maybe with a bit of overclocking it would be possible… I think all the adopters will experiment and report their optimal codings back to us 🙂

      • Nicolas permalink

        Interesting, thanks for your reply.

        My personal feeling (without being good at math or programming) about the FEC producing complete nonsense is that it will do that 🙂 At least that’s my experience with stuff where heavy math is involved. Change one bit in some kind of advanced compression format and the whole file is garbage. Change one bit in some encryption key or hash and you also get complete nonsense. But like you say, it probably won’t matter in the end; lost data or complete garbage data is probably no big difference.

        Regarding data packets that have a bad FCS: maybe it could be improved a little in cases where you have more than one copy with a bad FCS (received from different antennas/cards) by looking at the radiotap header and choosing the one with the highest noise margin, RSSI or signal strength. That one is likely to have the fewest bit errors.

        Regarding the CPU performance: hmm, that’s too bad. One could just use the Pi 2, but I’d like to use the Odroid-W (Pi 1 “clone”) because it’s much smaller and lighter. Is it normal that FEC uses so much processing power? I have seen some remarks about Pentiums in the source and in general it seems to be heavily optimized. Maybe it’s optimized too much for the x86 architecture, or uses features that the ARM doesn’t have and has to emulate?

        Maybe it would be possible to let the GPU of the Pi do the heavy math?

  2. Have you looked at trying to send packet loss information back to the sender so that it can drive down the bitrate if necessary? Not sure if you can do this with the RPi but you can with x264.

    • This is currently not possible since the bitrate is hardcoded inside the wifi card’s firmware. For the moment I think it is OK to switch statically between high quality and long range. This is also beneficial since you don’t need to violate the unidirectionality this way.

      • I mean if there’s heavy packet loss you can push down the video bitrate, not the wifi bitrate.
        This should improve the resilience of the stream but, as you say, requires bidirectionality. It’s what professional cellular links do.

        Does monitor mode let you transmit and receive using the same device? Anyway I’ve ordered my equipment and will see if this works with x264’s bitrate reconfiguration using a nuc.

  3. malkauns permalink

    If you’re using raspivid then the only way to “push down the video bitrate” is to restart it with a new commandline as far as I know. Hardly desirable when you’re flying at 40mph. 🙂

    • cbl permalink

      malkauns you are right about that, the camera module needs to stop the capturing process to change the bitrate. There might be a way by splitting the record, defining different custom encoder profiles and switching between them, but it would result in additional encoder cycles and increase the latency. Have a look at http://picamera.readthedocs.org/en/latest/api_encoders.html for details. It will not work with raspivid and I only skimmed this chapter, so it might be that I just talked bollocks. Also have a look at http://picamera.readthedocs.org/en/latest/fov.html for a brief explanation of how the camera firmware works.
      Kieran, yes, tx and rx are able to use the same interface in parallel, but I strongly recommend letting the upstream side talk only in safe conditions and reducing the data amount to a minimum.

  4. cbl permalink

    Some experiments later: @befinitiv, I do not want to distract you, but this should be a quick task and maybe you could be so kind. Would you add a simple filter, like a MAC address, as an optional argument for rx to ignore packets from other interfaces than the specified one? That would be awesome.

    • What would be the use case for this? Dual channel? In that case you could run each channel on a different port (-p) and reuse this filter.

      • cbl permalink

        It’s for bidirectional use. I want to transmit some RC code to change the profile, brightness, contrast, etc. for different light conditions. Indoor/outdoor, high noon/sunset etc. With quick feedback, without changing the command line. I’m using the picamera python interface and an RC-5 inspired, self-made protocol. As mentioned before, tx/rx in parallel. I’m receiving the code I’ve sent on the same interface, and the video feed is interfering with the RC side, and vice versa.

      • Ah, ok. Cool 🙂 If I’m not missing something then the -p option should work for you.

      • cbl permalink

        well, stupid me! I was aware of the -p option but I was like “I don’t need that, I’ll learn how to handle pipes in python.” – and didn’t think about it anymore. I’m on track now, thank you very much.

  5. malkauns permalink

    Are you running your video and your controller on 2.4GHz? If so then you should change one of them to a different band (perhaps your controller to 433MHz) to avoid interference.

  6. Walkeer permalink

    This is awesome! This is one professionally implemented FEC 🙂 However, how much does it increase the overall latency? I am afraid about the slow CPU on the Tx side.
    In regards to video stability in case of bad reception, I believe a very important aspect is the key frame interval: is it possible to have one every second or third frame? That would mean worse video quality or a bigger data stream, but probably much better behavior on a bad signal.

    Anyway, my opinion is that the biggest issue is the latency, especially for FPV quadcopter racing which I want to do 🙂 Does anyone have any ideas where the majority of the latency comes from? The camera capture? Is it the GPU h264 encoding? Is it the receiver side doing some unnecessary buffering? Is it a laggy camera?

    Thanks a lot for your work!!

    • The good thing about the FEC is that you can control everything. You can make the block shorter to decrease latency, you can reduce the FEC packets to reduce CPU load, etc… There are tons of options depending on your scenario.

      It seems as if a big part of the latency comes from image capture. If I remember correctly, I measured something around 70ms. You can take a look at the comment section of my diydrones post.
      It’s true, lowering the latency is the last bit that is needed to kill analog FPV in every aspect.

    • cbl permalink

      You can set the keyframe rate with the -g argument in raspivid. I had some success with saving one encoder cycle by using only the resolutions defined in the hardware modes of the camera firmware, where 1296×730 suits our needs best, I think. Also, switching the h264 profile to baseline gains some ms. I’m down to 80-100ms with those changes. This is what my cmdline looks like atm: raspivid -t 0 -n -fps 49 -w 1296 -h 730 -b 3000000 -g 49 -ex sports --profile baseline -o -. The low bitrate is due to a noisy environment.

      • Walkeer permalink

        You have an end-to-end latency of 80-100ms? Could you make some photos? That would be almost perfect, definitely usable for close proximity flying. For the regular sw x264 codec there is a “-tune zerolatency” switch for minimal latency, perhaps there is a similar switch for raspivid or for the hw coder as well? The baseline profile helps because I believe it disables forward (future) frame references, for which the codec has to buffer several frames. Once my camera arrives I will do some experimentation with the commands.

      • cbl permalink

        I did some tests, you will find my test setup and results over here http://fpv-community.de/showthread.php?64819-FPV-Wifi-Broadcasting-HD-Video-Thread-zum-Raspberry-HD-Videolink-fon-Befi&p=839307&viewfull=1#post839307 , including a picture. It’s in German but a translator will do. I’ve used netcat over tcp/ip but you will get similar results with wifibroadcast, maybe better. My goal was to find the best settings for the encoder part. I do not know if it’s possible with another Pi as receiver; my groundstation is a chromebook running Arch and I’ve played it with mplayer, not hello_video. When you are using a resolution unsupported by the firmware, it captures and encodes in one of the hardware modes, takes the result and sends it again to the encoder to rescale. Using one of the supported resolutions saves that cycle. Forward frames are not supported by the encoder, so none of the four available profiles uses them. The chosen profile ‘baseline’ is the fastest available. I haven’t read anything about a ‘zerolatency’ option in the camera docs, otherwise I would have tested it 😀 . Share your results after experimenting, I’m quite interested and I guess others are too.

      • Your note regarding the native resolutions of the HW h264 encoder is very important! So we have to do the rescaling on the RX side. If I got it right, using the HW supported resolutions will actually improve the picture quality, as we avoid another recompression from and to h264 multiplying all compression errors, but we still have to rescale the image to match the FPV LCD on the Rx. Any idea if there are some changes in the h264 encoder in Raspberry 2, such as more HW supported resolutions?

      • Unfortunately, the page referenced by cbl on the other forum explains a lot: http://picamera.readthedocs.org/en/latest/fov.html

        For example, it explains why all the 720p60Hz videos on youtube using wifibroadcast have such bad video quality, which is nowhere near true 720p. If the page is right, it is because the only HW mode which supports 60Hz is 640×480:
        “A resolution of 800×600 and a framerate of 60fps will select the 640×480 60fps mode, even though it requires upscaling because the algorithm considers the framerate to take precedence in this case.”

        So when using 720p60, it actually uses 640×480 and then upscales it to 720p, producing a picture comparable to the best analog PAL CMOS 1200TVL FPV cameras.

        So, in order to have true HD quality, we have to use the 1296×730@49Hz HW mode. I really hope that Raspberry 2 will allow us to use 60Hz.

        Befinitiv, can you somehow comment on this one? This seems like a major finding. Many thanks cbl for sharing this info with us.

      • cbl permalink

        afaik the hardware modes depend on the camera firmware; using the Pi 2 has no influence. I’m not into hello_video and its configuration; maybe there’s a way of cropping the modes 1296×972@42fps (4:3) or 1296×730@49fps (16:9) instead of rescaling. You would lose some degrees of FOV but it would be much faster. In my opinion there’s not much difference between 60fps and 49fps; also, decoding a bit faster, like at 60fps, keeps the cache empty.

      • That sounds interesting (native modes) but I don’t immediately see a big influence on latency there. Sure, resizing the image is one operation that introduces latency, but it shouldn’t be significant since the op is very simple. But I don’t know much about the internal pipelines. Maybe they screw up and it does take much time indeed 🙂 Has anyone yet made some comparative measurements?

      • I finally have the camera and I did some tests. First of all, I confirm that when I set fps > 49 the picture quality is terrible, meaning that the 640×480 HW mode has been used. However, I do not see any quality or latency change when using HW mode 1296×730 compared to 1280×800 (native resolution of Headplay HD goggles), all with 49fps. I tried many combinations and within limits (no more than circa 1300×800) it’s all the same. I am streaming through ethernet and the end-to-end latency including my LCD (roughly 20-30ms) is 120-140ms. Therefore, I believe that either there is a new HW mode which supports 1280×800@49, or it does not work with the resolution as explained.

        Here is a quite interesting article about low latency IP video: http://www.design-reuse.com/articles/33005/understanding-latency-in-video-compression-systems.html
        The result is that constant bitrate is supposed to add some latency compared to constant quality mode; I will try that soon.

  7. In case nobody noticed, there is a new, smaller and lighter version of the Raspberry A+: https://www.raspberrypi.org/blog/raspberry-pi-model-a-plus-on-sale/
    This seems to be the perfect Tx solution for our quads and airplanes. Any ideas how you get shell access?

    • cbl permalink

      For shell access I’m starting the Pi as an access point with ssh enabled and switch to monitor mode with wifibroadcast via a shell script. Others are using a bluetooth-to-serial adapter connected to the serial console. I’m using the A+ but compared to rc-stuff it’s still heavy and bulky.

      • Thanks for the good idea 🙂 Compared to analog SD stuff, sure, but compared to Connex HD or Lightbridge it’s actually pretty small and light 🙂

      • flow permalink

        You can use the Odroid-W.
        It’s compatible with the Raspberry and it’s smaller…

      • Walkeer permalink

        Unfortunately, I cannot, it is not being produced anymore 😦

    • chris permalink

      I used an Apple USB-to-ethernet adapter http://goo.gl/bbhhC7 to get shell access on my A+. I noticed the same thing you did with the camera HW modes: no significant latency difference between 1296×730 and the standard 720p raster 1280×720.

  8. Constantin permalink

    Hey,
    I wanted to say thank you for wifibroadcast, it definitely makes the link more reliable for long range fpv.
    I was using the rpi hardware for half a year before, with a normal network wifi hotspot link, and I did a lot of research and testing about latency that I want to share here, too.
    1) H.264 has 3 types of frames: I, P and B frames. The problem are the B-frames (bidirectionally predictive-coded frames), as they not only refer to “past” frames (like I and P) but to “future” frames, too. To encode/decode them you have to wait for more frames coming in, which increases latency. Fortunately, the baseline standard doesn’t include B-frames, so I compared the latency using pf=high and pf=baseline. But, surprisingly, it didn’t make any difference! The reason: although using the high profile, the rpi encoder doesn’t produce any B-frames (even though the standard would allow them), and the same goes for most mobile phone h.264 encoders. Result: on the rpi always use pf=high; even without B-frames it is more effective in reducing data than pf=baseline. With webcams it may be different.
    2) higher fps => lower latency. This seems to be true for all resolutions and is easy to explain: The encoder buffers a specific number of frames, and the decoder does, too. Overall latency is number of frame buffers * 1000ms/fps + latency from network buffers etc.
    But watch out: the rpi is only capable of handling 49fps at 720p; if you use 720p 60fps it will use the resolution which matches this fps (640*480 in this case) and upscale it.
    The best to use (in my opinion): 720p 49fps or 640*480 90fps.
    3) How to reduce the number of frames the encoder buffers: no way (it’s in hardware);
    the number of frames the decoder buffers: = number of threads the decoder uses (only on a desktop pc, where decoding is done in the cpu). This information is according to an article I found on Intel Developer; but I found no way to control the number of threads in the gstreamer decoder (it simply ignores the parameter)

    4) Latency testing using an rpi connected via ethernet to my 3 year old laptop:
    1.1) 720p, 49fps, pf high, variable bitrate
    1Mbit/s ~110ms
    2Mbit/s ~110ms
    4Mbit/s ~120ms
    5Mbit/s ~120ms
    7Mbit/s ~100ms up to 9 SECONDS

    1.2) 49fps, 3Mbit/s, variable resolution
    720p ~90-170ms
    640*480 ~90-140ms

    1.3) variable fps, 3Mbit/s with 640*480 and 720p
    640*480 25fps 160ms
    640*480 35fps 160ms
    640*480 60fps 100ms
    640*480 90fps 90ms FAIRLY CONSTANT
    720p 25fps 170ms
    720p 35fps 150ms
    720p 49fps 130ms

    1.4) variable intra refresh period, 1280*720, 3Mbit/s, pf=high
    -g=1 1.5 seconds
    -g=10 90ms-170ms
    -g=100 fluctuates strongly, 90ms-170ms

    1.5) 720p pf high and pf baseline
    720p 25fps baseline 170ms, high 170ms
    720p 49fps baseline 80-170ms, high 90-170ms
    The only problem: using the “new wifibroadcast” code I have ~500ms latency at the beginning, rising up to 2 sec or so.
    Seems like something in the system is too slow, probably the rpi in the air at 100% cpu.

    Greetings from Germany
    Consti

    • Iver permalink

      Great work, Constantin, and thanks for sharing. It matches what I’ve seen both using ethernet cable connections from RPI to a linux pc with mplayer, and using an RPI A+ to RPI B+ via wifibroadcast. Especially your items 4) and 1.4). Currently I’m using -pf main as it seems to deliver circa the quality of High but with circa the latency of Baseline. Did you try Main and/or measure it?
      Rgds, Iver

      • Constantin permalink

        Unfortunately I didn’t try main at all. Are you sure you are getting lower latencies by using it? And do you have assumptions why?

  9. Iver permalink

    No, not sure. It’s just an estimate. BTW quality is lower than high but better than baseline.

  10. schalonsus permalink

    Have a major problem with the new code. After reaching max distance and signal loss, the signal does not recover, not even when I am back at my groundstation. I have to restart the RX Pi to get the signal back. Can’t remember having this problem with the old code.
    Just for information.

    • schalonsus permalink

      Problem solved. It was my own fault. I deleted the variable packet_length because I thought it caused interference. Now with it back in the code the video link works as expected.

      • schalonsus permalink

        Ok, further testing revealed that this failure was not caused by the missing variable packet_length. It is caused by the FEC code. If I use fecs=0 and therefore -x 3 for example, the connection can be lost for 10min and the link comes back instantly. If I use fecs, then once the connection is lost for a few seconds it never comes back….

      • Thanks for the report! I’ll try to reproduce the behavior this evening to track down the cause.

      • One more question: Did you use a Raspi as receiver? And by connection lost do you mean “bad reception with lots of packet drops”? Under such circumstances I’ve noticed that sometimes the Raspi H264 decoder freezes. But that took around 1 minute of (unflyable) bad reception.

      • schalonsus permalink

        Yes, I use a Pi A+ as receiver.
        By connection lost I mean that the picture freezes. I walked 1.7km and then, after a bit of bad signal, the picture froze. And it won’t come back even when rx and tx are next to each other again. After restarting the Rx Pi the video link gets received again.
        I reproduced this failure at home: the transmitter on the 2nd floor and the receiver in the basement. There I have a range of 5m from good signal to signal loss. After a few seconds with no signal the picture freezes. With fecs off and retransmission on, this problem does not occur. There the picture also freezes when there is no signal, but plays as soon as some data is received.

        Could be the problem with the H264 decoder, as you said. But it really doesn’t take long to freeze.

        w 1280
        h 720
        fps 48
        bitrate 4000000
        keyframerate 8

        blocksize 8
        fecs 4
        packet_length 1024

        Tested with the default branch and the low_lat_raspivid_hook branch, in case this matters.
        Also tested with the mcs1 and mcs3 fw.
        Tx is an awus036nha and rx is a wn722n.

        Thanks for your effort

      • schalonsus permalink

        Forgot to mention I use an active hub on the rx Pi and two WN722N sticks.
        Strange problem today: I can’t get diversity to work anymore. Always the message “Could not fully reconstruct block….”. With just one stick it works fine. I did not change anything.

      • schalonsus permalink

        Ok, did some testing.
        Diversity works after reinstalling raspbian and using a higher rated power source. But the problem with the connection breaking with fecs on persists. Also I noticed I can’t record the video on the sd card anymore without getting a slow motion stream. Really annoying.

        I will revert to the code from mid June; there I had a stable downlink and recording was working. Really don’t see any benefit from the new code.

      • schalonsus permalink

        ….after further testing…
        I can’t get the rx Pi to work anymore, maybe its processor is damaged, I don’t know.
        But I’m sure this problem is specific to my equipment.

        Luckily I got an old laptop and installed ubuntu on it. With gstreamer I have a latency of 130ms and diversity is also working again. I’m happy so far but would be happier if I could understand why it does not work anymore with my rx Pi.

        Tomorrow I will test if I still have the connection loss after some time of bad reception, but I think this problem will be solved too with the laptop.

      • Schalonsus, can you please post the exact settings and setup which give you a constant 130ms latency and diversity? Many thanks, this looks very good indeed.

      • schalonsus permalink

        Here you go:
        TX:
        Raspberry A+, AWUS036NHA, Fiveleaf, MCS3 Firmware, regdb/crda 30dbm, 1280×720, Bitrate 4000000, Keyframerate 8, Packet_length 1024, Blocksize 8, FECS 4, -ex sports, -awb horizon
        RX:
        Laptop E8110 Ubuntu, 2xWN722N (5t Helix, Fiveleaf), recording on hard drive, (sudo ./rx -b 8 -r 4 -f 1024 wlan1 wlan2 | tee $FILE_NAME | gst-launch-1.0 -v fdsrc ! h264parse ! avdec_h264 ! xvimagesink sync=false)

        TX and RX are both running the latest code from default branch
        Will try the MCS1 firmware next.

        Here are three videos testing range


      • Wow, very brave the 2km video!

      • schalonsus permalink

        Thanks!

        Finally I have solved the problem with the raspberry as receiver; it has something to do with the HDMI. If I choose a specific mode, it works.

  11. chris permalink

    I am having trouble figuring out the relationship between the FEC parameters and the data rate presented to the wireless NIC by the tx program. I have found that the ralink chipset stuff I am using does not accept iw set commands for MCS data rates, but it will accept them for legacy rates. I am trying to choose the lowest legacy data rate possible that still gives acceptable picture quality; the lower the data rate, the more robust the OFDM modulation. If I choose a video bitrate that is too high, the TX side will start to buffer or drop packets and I get large latency issues. Let’s say I pick a video bitrate of 5Mb/s, an 8/4 code rate and a 1024 byte packet length: how do I compute the theoretical output rate of tx (let’s pretend the video rate is absolute)? The next question is what the actual throughput of the legacy bitrates is. Everything I’ve read online that computes “actual throughput” is based on overhead from TCP headers, which doesn’t apply here.

    I looked at the source for tx.c and ran across the formula for the “interface rate” that is echoed to the terminal while the program is running:
    1.0 * pcnt / param_data_packets_per_block * (param_data_packets_per_block + param_fec_packets_per_block) / (time(NULL) - start_time)
    The part I can’t figure out is why this doesn’t include the packet_length (-f) variable. Isn’t that necessary to figure out the data rate? I was poking around because I wanted to see if I could alter the source to display the interface rate in Mb/s (like the linux pv utility does), because that is how all the MCS or legacy bit-rates are expressed. That would help with troubleshooting, as I could see that with a particular FEC setup my overall bitrate occasionally peaks over my chosen channel throughput; I could then either back off the video data rate or choose a higher legacy bitrate.

  12. chris permalink

    I think I’ve answered my own question. Please correct me if I’ve missed something (likely).
    The interface rate is actually the packet rate; to get Mb/s I should do the following:
    (packet_rate * packet_length * 8) / 10^6

    If I choose a 5Mb/s video rate, an 8/4 code rate should add 50% overhead, making the stream 7.5Mb/s, fitting within legacy 9M. I am interested in what sort of internal overhead (packet headers and the like) there is, i.e. for every packet of length -f there are x bytes of header data.
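
    Concretely, that would amount to something like this in tx.c (a sketch based on the formula quoted above; param_packet_length is my hypothetical name for the -f value):

    double packet_rate = 1.0 * pcnt / param_data_packets_per_block
                             * (param_data_packets_per_block + param_fec_packets_per_block)
                             / (time(NULL) - start_time);
    /* multiply by the packet size in bits to convert packets/s to Mbit/s */
    double mbit_per_second = packet_rate * param_packet_length * 8.0 / 1e6;
    printf("interface rate: %.2f Mbit/s\n", mbit_per_second);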

    Thanks for this great program, once I get a rx stick with external antenna capabilities I should be in business.

    • Yes, that is correct.

      The total overhead is a bit hard to estimate. On top of the headers you also have CSMA, which will take some bandwidth, as well as other effects. I would just try what data rate you can get through.

  13. chris permalink

    Thank you, that reminds me that I meant to post this: https://github.com/vanhoefm/modwifi is a security research project where the author modifies the firmware of Atheros based commodity wifi hardware. He disables CSMA, along with other tricks, to build a broadband RF jammer, among other things. It seems like you could use this to increase throughput.
