
Improved retransmission scheme and multiplexing

This post presents some changes to my wifibroadcast project (https://befinitiv.wordpress.com/2015/01/25/true-unidirectional-wifi-broadcasting-of-video-data-for-fpv/ ) that improve reliability and functionality.

Improved retransmission

When playing around with my rx and tx tools I noticed something odd. I tried to find the right setting for the retransmission count. To recall, the retransmission count is the number of times an identical data packet is sent. The idea is simply to increase the probability that at least a single copy makes its way through. The strange thing I encountered was that it made no difference whether I was sending the data two times or six times. In both cases I had nearly the same packet loss. Using wireshark I was able to find the cause of the problem: beacon frames from hidden terminals.

The hidden terminal problem:

A ~~~~ B ~~~~ C

Assume A wants to talk to B. Before A sends a packet it checks whether the channel is free. If so, it sends its packet. Unfortunately, C is doing the same thing as A at the exact same time. Since A and C are too far apart to hear each other, they both assume the channel to be free. At station B the frames from A and C collide and get lost.

What does this have to do with retransmission? Well, beacon frames are sent at the lowest possible rate (1 Mbit/s). They usually carry between 150 and 250 bytes, so a single beacon can occupy the channel for up to two milliseconds (250 bytes = 2000 bits at 1 Mbit/s). In contrast, the wifi card I used was sending its data at 26 Mbit/s, so the duration of its frames was significantly shorter. Because of that, a single beacon frame from a hidden station was able to destroy a whole retransmission block. Let's visualize that. Assuming a retransmission rate of 3, the data packets a, b, and c would be sent like this:

aaabbbccc

Now let's assume a beacon B starts after the first transmission of a:

aBBBBBBcc

Whoops, you have just lost packet b because the beacon blocked every transmission of it.

The solution to this problem is simple: retransmission blocks. Packets are gathered into a block which is then sent several times. Let's assume a block size of 8 packets, a retransmission rate of 3 and the packets a, b, c, d, e, f, g and h to send. On the air the packets are now sent as follows:

abcdefgh,abcdefgh,abcdefgh

When our nasty beacon arrives it looks as follows:

abcBBBBBBbcdefgh,abcdefgh

The total number of transmitted packets is identical, but this time no packet was lost :)

This comes at a little price: latency. But if the blocks are small enough it is pretty low. I found block sizes of 8 or 16 packets quite useful; they really made the link quality much better. If you choose a block size of 1 then the program behaves as before (this is also the default setting). A little caution is needed: both rx and tx need to agree on the block size!
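To make the scheme concrete, here is a minimal sketch in C of what the block-wise sending could look like. The struct and the send_packet() callback are hypothetical stand-ins, not the actual tx code:

    #include <stddef.h>

    #define MAX_PACKET_LEN 1024

    struct packet {
        unsigned char data[MAX_PACKET_LEN];
        size_t len;
    };

    /* Send a gathered block retrans_count times. A beacon from a hidden
     * station now wipes out at most one round, so every packet still goes
     * out in the remaining rounds. send_packet() stands in for the actual
     * injection call. */
    void send_block(const struct packet *block, int block_size, int retrans_count,
                    void (*send_packet)(const unsigned char *, size_t))
    {
        for (int r = 0; r < retrans_count; ++r)      /* repeat the whole block */
            for (int i = 0; i < block_size; ++i)     /* each packet once per round */
                send_packet(block[i].data, block[i].len);
    }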

What retransmission rates are sensible? In my experience you need at least two. Then the video stream will be mostly error-free at good reception. If you have a bad link (long range or high noise), you should set this value as high as you can; this really improves the range you can get. The limit for this factor is the available bandwidth. Just multiply your video bitrate by the retransmission count and check that the product fits into the maximum bandwidth. For example, a 3 Mbit/s video stream with three retransmissions needs 9 Mbit/s, which still fits into the roughly 14 Mbit/s of net throughput a TL-WN722N achieves at an MCS3 air data rate of 26 Mbit/s.

Multiplexing

The first version of rx and tx was able to transfer only a single stream of data at a time. Since it would be nice to also transport other data over the same channel (think of GPS, etc.) I added a "port" feature to both programs. The last byte of the fake MAC address used by rx and tx is replaced by this port number. This allows you to define up to 256 channels.
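As a sketch of the idea (the helper name is hypothetical), the port byte simply overwrites the last byte of the fake MAC address:

    #include <stdint.h>
    #include <string.h>

    /* The base address is the fake MAC used by rx and tx; rx filters on
     * the last byte to select its stream. */
    static const uint8_t base_mac[6] = {0x13, 0x22, 0x33, 0x44, 0x55, 0x66};

    void mac_for_port(uint8_t mac_out[6], uint8_t port)
    {
        memcpy(mac_out, base_mac, 6);
        mac_out[5] = port;   /* one of 256 possible channels */
    }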

Testing video transmission using the Raspberry camera

On the receiver side:

sudo ./rx -p 0 -b 16 wlan0 | gst-launch-1.0 fdsrc ! h264parse ! avdec_h264 ! xvimagesink sync=false

On the Raspberry side:

raspivid -t 0 -w 1280 -h 720 -fps 30 -b 3000000 -n -pf baseline -o - | sudo ./tx -p 0 -r 3 -b 16 -f 1024 wlan1

This creates a transmission on port 0 (-p) with three retransmissions (-r), a retransmission block size of 16 (-b), and packets with a length of 1024 bytes (-f).

So far it looks pretty good. The latency is roughly (not measured) in the range of 100 ms and the range is satisfying. I tested it with two TL-WN722N with 3 dBi dipoles and had good video quality throughout my apartment. The longest distance covered was 20 m with 4 concrete walls in between. There I already saw lots of packets dropping, but a retransmission rate of 5 fixed that and gave a clear image.

If you want to try it yourself:

hg clone https://bitbucket.org/befi/wifibroadcast/

If you are using the same wifi cards you might also take a look at https://befinitiv.wordpress.com/2015/02/22/finding-the-right-wifi-dongle-and-patching-its-kernel-driver-and-firmware/ .

What’s next? I guess building a double biquad antenna for the receiver :)

Finding the right WIFI dongle (And patching its kernel driver and firmware)

This post is a follow-up on https://befinitiv.wordpress.com/2015/01/25/true-unidirectional-wifi-broadcasting-of-video-data-for-fpv/ . Here I describe my findings on several WIFI dongles that I have tested.

In the last post I presented my raw sender and raw receiver software. This post focuses on the right hardware to use with that software.

To that end, I tested the following WIFI adapters:

  • DLINK DWL-G122 (RALINK RT2573)
  • ALFA AWUS036H (REALTEK RTL8187)
  • ALFA AWUS036NHR (REALTEK RTL8192)
  • TP-LINK TL-WN722N (ATHEROS AR9271)
My tests focused on tx power and injection rate. I also briefly looked into the receiver sensitivity.

TX power

Looking into the TX power was an interesting journey. The first thing I did was to google how others change their TX power. I guess most of you know about the wardriving kids changing their regulatory domain to BO to be able to execute "iwconfig wlan0 txpower 30" and get 30 dBm. I tried that same command for lowering my tx output power and noticed no difference at all when looking at the receive strength (RSSI) on a second card. That is quite weird; there should be a noticeable difference between 1 dBm and 20 dBm. I started looking into the device drivers and noticed that the iwconfig command never reaches the power-set function of the cards (this was true for all of the tested cards). I haven't traced down the position where the call gets lost but I was quite stunned. Could all the 1000 "change your regdomain" websites be wrong about this? As an additional check I hardcoded different tx power values in the drivers. This did change the RSSI! To be honest, I only tested the vanilla drivers. I didn't check whether Kali uses modified drivers, but I found no hints of this.

Let's look in detail at the extraordinary claims of the ALFA cards (1 W, 2 W, …). For example, the AWUS036NHR (sold as 2 W) only contains a 1 W amplifier. I increased the tx power in the kernel driver to 200 mW and tested the card in a shielded cage (a perfect use for a microwave oven :) ). Already at that level I saw lots of packet drops. I suspect that the power distribution of the card is too weak to supply this amount of power. One indication is that the supply voltage of the tx amplifier dropped from 5 V to 4 V during the transmission of a packet. I wouldn't be surprised if this caused the drops. At a tx power of 1 W nearly all packets got dropped; just a few made it through the air. I noticed the same behaviour on the AWUS036H, although not as bad as on the NHR.

On paper, the TP-LINK TL-WN722N supports only 100 mW output power. My tests showed that this claim is more or less true. The card shows reliable transmission at 100 mW and the RSSI suggests that the output power really is in that order of magnitude (compared to the AWUS cards' RSSI). Nice!

I did not bother to hack the DLINK card's driver since this card does not have an external antenna connector.

Injection rate

I was surprised to see how much the injection rates differ from card to card. I had assumed that packets are injected at the maximum rate (as long as the channel is free), but that is not at all the case. I did not trace the cause of the low injection rates because I'm not motivated to optimize a driver stack in that respect. Below is a list of injection rates of the tested cards. I injected packets with 1024 bytes of payload and measured the rate. On the left you see the air data rate, on the right the net data throughput. Net means 100% user data.

    len: 1024 bytes
    
    REALTEK RTL8187
    ---------------
    54 mbit/s OFDM: 480p/s ~ 3.9mbit/s
    
    
    REALTEK RTL8192
    ---------------
    54 mbit/s OFDM: 80p/s ~ 0.6mbit/s
    
    
    RALINK RT2573
    -------------
    54 mbit/s OFDM: 2500p/s ~ 20.5mbit/s
    
    
    
    ATHEROS AR9271
    --------------
    11 mbit/s CCK:  830p/s ~ 6.8mbit/s
    24 mbit/s OFDM: 1750p/s ~ 14.3mbit/s
    54 mbit/s OFDM: 2700p/s ~ 22mbit/s
    
    6.5 mbit/s MCS0: 630p/s ~ 5.1mbit/s
    13 mbit/s MCS1:  1100p/s ~ 9mbit/s
    26 mbit/s MCS3: 1800p/s ~ 14.7mbit/s
    52 mbit/s MCS5: 2700p/s ~ 22.1mbit/s
    

Again, the ALFA cards are pretty much crap. Luckily, the TP-LINK card showed excellent results. Depending on your data rate and packet loss reliability requirements (the retransmission flag of the tx program) you can use these numbers to find the right modulation scheme. If you have, say, a 6 Mbit/s video stream (enough for h264 HD), you can easily fit it into the MCS3 modulation, even with one retransmission.
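For reference, a rough sketch of how such an injection-rate measurement could be done with libpcap; the handle and the prebuilt frame here are assumptions, and the actual measurement setup may have differed:

    #include <pcap.h>
    #include <stddef.h>
    #include <time.h>

    /* Inject a prebuilt frame in a tight loop for a few seconds and divide.
     * Assumes `handle` was opened on a monitor-mode interface and `frame`
     * holds a radiotap + 802.11 header plus 1024 bytes of payload. */
    double measure_injection_rate(pcap_t *handle, const unsigned char *frame,
                                  size_t frame_len, int seconds)
    {
        long sent = 0;
        time_t end = time(NULL) + seconds;

        while (time(NULL) < end) {
            if (pcap_inject(handle, frame, frame_len) > 0)
                ++sent;
        }
        return (double)sent / seconds;   /* packets per second */
    }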

RX sensitivity

The RX sensitivity of the TP-LINK was the best of all the tested cards. I have no numbers available since all the other cards were already disqualified by their injection rates. I measured it by sending packets at a known rate and looking at the reception rate. Again, the ALFA cards were the worst. The public opinion on these cards is really the opposite of my findings… quite interesting.

The winner

The winner of my tests is very clear: the TP-LINK TL-WN722N.
It has excellent injection rates and also an acceptable tx power level. But it is not only good for transmission: it can deliver frames with a wrong FCS (frame check sequence) to user space. These are, for example, frames in which only a single bit is flipped. Normal cards just drop these frames and there is no way to see them from user space. It is quite obvious that a few flipped bits have less effect on a video stream than the loss of a whole packet (8*1024 = 8192 bits). This is exactly what I described in the last post: the link should behave as much as possible like an analog link. This feature of the card can be enabled with the following command:

    sudo iw dev wlan1 set monitor fcsfail
    

Another advantage of this card is that its firmware is open: https://github.com/qca/open-ath9k-htc-firmware.git

Lastly, this card is really cheap: 10€. I'm quite happy that I've found it.

Dirty patch-work

As said before, changing the tx power of the card requires a change to the driver. I patched the driver of the TL-WN722N in the Raspberry Pi kernel. This patch should set the power to a constant 20 dBm (although I haven't checked the unit of the tx-power value; this is still a TODO). Use the following commands to apply it:

    cd /tmp
    hg clone https://bitbucket.org/befi/wifibroadcast/
    git clone https://github.com/raspberrypi/linux.git
    cd linux
    git checkout fe4a83540ec73dfc298f16f027277355470ea9a0
    git branch wifi_txpower
    git checkout wifi_txpower
    git apply ../wifibroadcast/patches/AR9271/kernel/fixed_channel_power_of_ath9k_to_20dbm.patch
    

From here on you can just compile the necessary modules the usual way (refer to http://www.raspberrypi.org/documentation/linux/kernel/building.md ).

Changing the tx rate was a bit more complicated. It seems as if the firmware of the card only takes rate suggestions from the kernel driver; the actual rate is decided by the firmware. Therefore, I needed to patch that as well. You find a pre-compiled firmware that uses MCS3 (26mbit/s) as the injection rate under patches/AR9271/firmware/htc_9271.fw. Copy this file to /lib/firmware and re-insert the card to use the modified firmware. If you want to compile the firmware with a different injection rate you can take a look at the patch that I supplied; the easiest way is to replace the first line in the list with a copy of the wanted injection rate. Instructions for compiling the firmware are given in the firmware repo: https://github.com/qca/open-ath9k-htc-firmware.git

Conclusion

The TL-WN722N is my choice for the receiving side. I'm still hesitating to buy a second one for the transmitter. Maybe I should give ALFA a second chance with the AWUS036NHA (https://wikidevi.com/wiki/ALFA_Network_AWUS036NHA ), which uses the same chipset as the TP-LINK. The advantage would be that it adds a 1 W power amplifier to the output. However, if it is as bad as the other ALFA cards, I would have thrown away 25€… If you want me to test that card, feel free to donate :)

(True) Unidirectional Wifi broadcasting of video data for FPV

This post shows how to broadcast data over 802.11 devices with a true unidirectional data flow. There is no need to be associated with a network (thus no risk of being disassociated) and no acknowledgements are sent from receiver to transmitter. Please note that this is ongoing work and just a proof of concept.

——

My plan for this spring: fly FPV! There are already a lot of devices on the market for transmitting analog video signals over 2.4 GHz or 5.8 GHz, but to me that always seemed a bit outdated. Of course, several people thought so too and tried to send their video data over Wifi. A good example: https://sparkyflight.wordpress.com/2014/02/22/raspberry-pi-camera-latency-testing-part-2/

"Sparky flight" took a RaspberryPi, encoded the video stream as h264 and sent it over Wifi to a PC. He was able to get the latency down to 85 ms glass-to-glass, which is quite nice! But all Wifi solutions share the same problem: when you lose your Wifi connection, you are immediately blind. That is of course not acceptable in an FPV setup. This is where the old analog link has a really big advantage: when you are getting out of range, the signal degrades slowly. You would still have time to react and turn your plane around.

So I thought: wouldn't it be possible to achieve the same slowly degrading link over Wifi? Not out of the box, but I'll show you how.

The basic approach is: the video transmitter sends its data into the air as a true broadcaster, regardless of who is listening. The receiver listens all the time and when it is in range of the transmitter it receives the data. When it starts getting out of range it will no longer receive every packet, but still some of them. This behaviour is comparable to that of an analog signal path.

The main problem is that the Wifi standard does not support such a mode. Devices always need to know to whom they are sending their data. This relationship is created by the "Wifi association": if your PC is associated with your router, both devices know to whom they are talking. One of the reasons for this association is to make the data transfer more reliable: the receiver of a packet always acknowledges the reception to the transmitter, and if no acknowledgement has been received, the transmitter has the chance to re-transmit the packet. Once they lose their association, they cannot exchange data anymore.

The Wifi ad-hoc mode comes pretty close to an unassociated "broadcast style" way of transmitting data. Unfortunately, it seems as if modern 802.11 standards no longer support ad-hoc mode properly. If you buy an 802.11ac card and put it into ad-hoc mode, it most likely falls back to 11 Mbit/s; the standard does not require ac rates to be supported in ad-hoc mode :(

To solve this issue I wrote two small programs that serve as a raw transmitter and a raw receiver. They are pretty much hacked together out of a program called "packetspammer": http://wireless.kernel.org/en/users/Documentation/packetspammer

You find my programs here: https://bitbucket.org/befi/wifibroadcast

After compiling the sources with "make", you'll have two programs called "tx" and "rx". Both take as a mandatory argument a wlan interface that has been put into monitor mode. The tx program reads data over stdin and sends it in raw 802.11 packets into the air. On the other side, the rx program listens on a device and outputs the received data to stdout. The packets of the transmitter are recognized by their fake MAC address (13:22:33:44:55:66). The packets contain only a valid 802.11 header (so that they are not rejected by the wifi card); the rest of the packet is filled with raw data.
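As an illustration of what goes on the air, here is a hedged sketch of assembling such a raw frame; the radiotap header and the 802.11 field values are illustrative assumptions, not a copy of the actual tx code:

    #include <stddef.h>
    #include <stdint.h>
    #include <string.h>

    /* Minimal radiotap header: version 0, 8 bytes long, no optional fields */
    static const uint8_t radiotap_hdr[8] = {
        0x00, 0x00,             /* version + pad */
        0x08, 0x00,             /* header length (little endian) */
        0x00, 0x00, 0x00, 0x00  /* present flags: none */
    };

    /* Assemble radiotap + 24-byte 802.11 data header + raw payload.
     * buf must hold sizeof radiotap_hdr + 24 + len bytes. */
    size_t build_frame(uint8_t *buf, const uint8_t *payload, size_t len)
    {
        static const uint8_t fake_mac[6] = {0x13, 0x22, 0x33, 0x44, 0x55, 0x66};
        size_t off = 0;

        memcpy(buf + off, radiotap_hdr, sizeof radiotap_hdr);
        off += sizeof radiotap_hdr;

        buf[off++] = 0x08; buf[off++] = 0x00;       /* frame control: data frame */
        buf[off++] = 0x00; buf[off++] = 0x00;       /* duration */
        memset(buf + off, 0xff, 6); off += 6;       /* addr1: broadcast */
        memcpy(buf + off, fake_mac, 6); off += 6;   /* addr2: fake sender MAC */
        memcpy(buf + off, fake_mac, 6); off += 6;   /* addr3 */
        buf[off++] = 0x00; buf[off++] = 0x00;       /* sequence control */

        memcpy(buf + off, payload, len);            /* raw data, no LLC/IP on top */
        return off + len;
    }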
Following is an example of how to use the programs:

Receiver:

    sudo ifconfig wlan0 down 
    sudo iwconfig wlan0 mode monitor 
    sudo iwconfig wlan0 channel 1 
    sudo ifconfig wlan0 up
    sudo ./rx wlan0

Transmitter:

    
    sudo ifconfig wlan0 down
    sudo iwconfig wlan0 mode monitor
    sudo iwconfig wlan0 channel 1
    sudo iwconfig wlan0 rate 54M
    sudo ifconfig wlan0 up
    sudo ./tx wlan0

Everything you type into the tx shell should now appear on the rx shell. The tx program also supports as parameters the maximum length of the packets and the number of retransmissions. A retransmission rate of 3, for example, will cause the transmitter to transmit each packet three times. This increases the chance that the receiver receives one of them correctly. To avoid the data being delivered three times, each packet contains a 32-bit sequence number; packets whose sequence number has already been received are ignored. I admit that this type of redundancy is rather stupid. The problem is that most Wifi cards completely discard packets with a wrong FCS (frame check sequence), so the more classical approaches to redundancy (Hamming codes, …) are not so easy to use. There is definitely still some work to do here! Feel free to participate :)
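A minimal sketch of what such a duplicate filter could look like (the function name is hypothetical, and wrap-around of the 32-bit counter is ignored for brevity):

    #include <stdint.h>

    /* One possible receiver-side duplicate filter: each packet carries a
     * 32-bit sequence number, and anything already seen is dropped. */
    static uint32_t highest_seq = 0;
    static int got_first = 0;

    int is_new_packet(uint32_t seq)
    {
        if (!got_first || seq > highest_seq) {
            got_first = 1;
            highest_seq = seq;
            return 1;   /* first copy: deliver to stdout */
        }
        return 0;       /* retransmitted copy: ignore */
    }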

Writing text over a true broadcast connection is nice, but how about video? Actually it is really simple. GStreamer is a nice tool for this purpose. My test setup looks as follows:

    (usb webcam) <—> (raspberry pi) <—> (wifi dongle)  ~~~~airgap~~~~> (wifi dongle)<—>(PC)

On the Raspberry Pi I execute:

    gst-launch-1.0 -v v4l2src ! 'video/x-raw, width=640, height=480, framerate=30/1' !  omxh264enc target-bitrate=500000 control-rate=variable periodicty-idr=10 ! h264parse ! fdsink | sudo ./tx -r 2 -f 1024 wlan0

In words: receive raw video from V4L2, transform it into 640×480 at 30 fps, encode it as h264 with 500 kbit/s and an increased keyframe rate (this helps if packets get dropped), and write the video data (without a container) directly to stdout. The video data is then piped into the tx program with two retransmissions and a packet length of 1024 bytes.

And to receive and display it on my PC:

    sudo ./rx wlan0 | gst-launch-1.0 fdsrc ! h264parse ! avdec_h264 ! xvimagesink sync=false

In words: receive data on interface wlan0 and pipe it into GStreamer, which parses the raw h264 data, decodes it and displays the video.

The video quality is ok, maybe a bit too low for actual flying. With the settings above the latency is quite ok, maybe between 100 and 200 ms. I noticed that increasing the encoder bitrate also increased the latency, but I still need to look into that. I think by using the original raspi-cam it should be possible to achieve the ~100 ms of the "Sparky Flight" guy.

Dropped packets turn out to behave as expected: the video image is partly disturbed but continues to run. A rough estimate: up to a loss of 5-10% of the packets the video should still be usable to rescue your plane. See below an example of a transmitted video with a packet loss rate of approximately 2.5%:

Unfortunately, I wasn't able to change the power of the transmitted packets. There is a field in the radiotap header which I have set, but it seems to be ignored. Otherwise my solution would be perfect for FPV. You could (as a Bolivian, of course ;) ) buy one of those cheap Wifi cards with 1 W output power and have an extremely cheap long-distance video link (and since this is a true unidirectional link, you would only need a single high-power card in your plane). You could also use the Raspi to gather and transfer GPS information or battery capacity live. Of course this would then be realized as a side channel and written into the image on the receiving device, in contrast to those (shitty) analog devices that write directly onto the transmitted image…

If you are interested in participating, please share your experiences in the comments, take my code, modify it, improve it :) My gut feeling is that only a few little things are left to do for a true digital (possibly HD) FPV system built from low-cost equipment.

My next step: make some range experiments…

Electrical GO-Kart

This post shows how I built the electronics for an electric Go-Kart.

—-

Imagine your dad rescued an old mobility scooter from the trash to build his grandsons a Go-Kart. Sounds a bit crazy, but that is what my dad is doing right now :) The scooter had dysfunctional electronics but the mechanics worked fine. He tossed away everything except for the chassis (seat gone, steering motor gone, handlebars gone). Afterwards he attached a steering wheel to the front wheels so that it felt more like a kart. You can see a picture of the finished mechanics here:

IMG_20141222_214610

But there was still a big problem left: how to control the DC motor of the kart? Connecting it via a switch directly to the 24 V battery was out of the question; too fast and too dangerous for the children. That's where I joined the project: to build a motor controller for the kart.

I looked at the original electronics of the scooter and found the H-bridge on a separate PCB. Nice!

IMG_20141222_214159

But I did not have schematics for the PCB, so the only way was to reverse engineer the unit. It turned out to be quite an interesting journey. By following the traces on the PCB I was able to draw the most interesting parts of it:

IMG_20141222_221538

Here you see the H-bridge with the MOSFET drivers. The bridge uses NMOS transistors for both the high side and the low side. So I was curious: how did they manage to switch the high side? The source of the transistor is connected to the motor, the drain to +24 V. To switch that transistor it would be necessary to have a gate voltage above 24 V, but there is only a single 24 V supply. Hmmm. So I continued to reverse engineer the PCB and found a cascade. That cascade is used to double the 24 V to around 40 V, which is enough to switch the high side. You can see the schematic of the cascade here:

IMG_20141222_221550

The rest was easy: I grabbed an AVR and used a timer to generate the needed square wave (20 kHz) for the cascade. Another timer is used to generate two PWM signals for the high sides of the H-bridge. The low sides are just switched on statically via GPIO lines.
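For illustration, here is a sketch of such a timer setup in AVR C. The post does not say which AVR or pins were used, so the registers below assume an ATmega328 at 16 MHz and are not the actual firmware (see the linked repository for that):

    #include <avr/io.h>

    void setup_timers(void)
    {
        /* Timer0, CTC mode, toggles OC0A to generate the square wave for
         * the voltage-doubler cascade: 16 MHz / (2 * 8 * (49 + 1)) = 20 kHz */
        DDRD  |= _BV(PD6);                  /* OC0A as output */
        TCCR0A = _BV(COM0A0) | _BV(WGM01);  /* toggle OC0A on compare match */
        TCCR0B = _BV(CS01);                 /* prescaler 8 */
        OCR0A  = 49;

        /* Timer1, 8-bit fast PWM, drives the two high-side gates; the duty
         * cycle would come from the joystick's potentiometer via the ADC */
        DDRB  |= _BV(PB1) | _BV(PB2);       /* OC1A, OC1B as outputs */
        TCCR1A = _BV(COM1A1) | _BV(COM1B1) | _BV(WGM10);
        TCCR1B = _BV(WGM12) | _BV(CS11);    /* fast PWM, prescaler 8 */
        OCR1A  = 0;                         /* forward PWM duty (0..255) */
        OCR1B  = 0;                         /* backward PWM duty (0..255) */
    }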

Additionally, I connected the old joystick of the scooter to the AVR. This was also very simple since it has a potentiometer that I could easily read out with an analog input. With it the kart is able to go forward and backward (half speed only), and I also added an electrical brake. Driving around with it is actually quite fun. Maybe I should also look for a tossed-away scooter :)

You can find a video of it driving here:

It's a bit shaky because the moving kart is shifting the throttle on the joystick. But if you sit on it, it's quite smooth.

The source code is available here: https://bitbucket.org/befi/gokart

How to make a stereogram video

First, let's start with a test. What do you see here (switch to 480p!):

Well, it's not a broken antenna on my TV. Probably half of you are able to see the hidden part of it; for everyone else it's just noise. What's the trick? Simple: do you remember those random-dot images which were quite popular in the 90ies, where you had to (more or less) squint to see the 3D content? That's exactly what you are seeing in the video! It's me in 3D, moving my hands and a sheet of paper around.

Last week I stumbled over these 3D images, called stereograms, and was curious whether this would also work as a 3D video instead of a still image. I wasn't sure if the eye could follow the 3D impression since the random pattern changes completely from frame to frame. But it works surprisingly well!

How did I create the video? Well, the first thing you need for this is 3D content. That could be artificial, but that would be a bit boring. Luckily, I developed a smart stereo camera for my PhD which directly generates 3D data :) You can see a photo of it here:

viskos

It's a neat little device that does the stereo processing right on board so that it directly delivers 3D data. Alternatively, you could also use something like a Kinect. This would even be the better choice as it delivers many more 3D points than a stereo camera, since it is an active sensing device.

Here you see a color-coded version of the camera output. (Note that the quality in this case is really bad. I hadn't had the chance to calibrate the camera and it was also too dark for achieving good-quality 3D data…)

If you have your 3D data ready, you can then transform it into a stereogram, and the code to do that is quite compact. A good reference is the paper "Displaying 3D images: Algorithms for single-image random-dot stereograms" by Thimbleby et al., 1994. There is even code in that paper!
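To give a taste of how compact it is, here is a simplified per-row sketch of the idea in C. The real algorithm in the paper additionally maintains explicit pixel links to resolve hidden surfaces; EYE_SEP and the depth scaling here are illustrative values:

    #include <stdlib.h>

    #define W 640        /* image width in pixels */
    #define EYE_SEP 80   /* separation (in pixels) of the farthest plane */

    /* depth: one row of an 8-bit depth map; out: one row of the stereogram */
    void stereogram_row(const unsigned char *depth, unsigned char *out)
    {
        for (int x = 0; x < W; ++x) {
            /* nearer points (larger depth value) get a smaller separation */
            int sep = EYE_SEP - depth[x] / 8;
            if (x >= sep)
                out[x] = out[x - sep];  /* constrained pixel: copy its partner */
            else
                out[x] = rand() & 1;    /* free pixel: random black/white dot */
        }
    }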

That's all there is to creating a stereogram video. I found it quite intriguing to be able to display 3D video on a 2D screen without any special tools like glasses, etc. Unfortunately I cannot publish the source code for this project since I've written it inside an obscure framework that is not publicly available. But you'll find everything you need in the paper I cited.

Simple JavaScript for displaying your webcam images

Yes, I know what you are thinking: webcams are soo nineties… But since nowadays every grandpa has his own RaspberryPi running at home and can connect a USB cam to it, they have regained popularity.

Recently I installed my first personal webcam and I needed a good way to look at the images in a web browser. The simple approach of just showing the image was quite ugly: due to the slow upload speed of the webcam, refreshing the image was a pain. The image flashed and then progressively reappeared on the screen. Quite horrible.

That's why I wrote a simple JavaScript that loads the image in the background and updates the screen when loading has finished. I also added a history of the previous images so that I don't lose something interesting. There is also a checkbox that enables auto-refresh of the images, and during refresh a loading icon is displayed.

The script is nothing special but maybe it saves someone a bit of time. You can grab it here:

https://bitbucket.org/befi/webstuff

Turn your amplifier on whenever something starts to play on your ALSA soundcard

One of my first posts showed how to control RC wall sockets with your computer. I use this system at home for switching my lights and my music amplifier (which is connected to the soundcard of the computer that controls the RC wall sockets). I've written an Android app for switching everything on and off, and there is also the whistle control for those devices. These things are great, but I kept forgetting to switch off the amplifier. So I wrote a script to automate the power switching of the amplifier. It is called turn_on_amplifier_if_audio_starts_playing. Now whenever I start a song via MPD, watch a film with VLC or simply use the computer as a remote sound card via pulse audio, the amplifier turns on immediately. After the music/whatever is finished, the script waits for 60 seconds and then turns the amplifier off.

A nice feature is that all the actions are 'edge' based: if the sound card goes from idle->active, the amplifier is turned on; if it goes from active->idle, it is turned off. The advantage of this is that when I turn on the amplifier manually for listening to LPs, it stays on. In contrast, 'state' based actions would let the computer think 'hey, I'm not playing anything, I'll turn off the amplifier'.

There is also a conf file for upstart which can be placed under /etc/init for turning this script into a service.

The script works by checking the content of the file /proc/asound/card0/pcm0p/sub0/status every second. If it contains 'RUNNING', something is playing on the sound card. I admit, it is really hacky, but maybe it is useful to someone. I would much prefer to have a hook that is called by ALSA whenever the state of the sound card changes. If someone happens to know about something like that, please leave a comment!
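For illustration, the same polling-and-edge-detection idea as a minimal C sketch; switch_amplifier() is a hypothetical stand-in for whatever toggles the RC wall socket, and the 60-second off-delay is omitted:

    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    /* Report whether the sound card status file contains "RUNNING" */
    static int is_playing(void)
    {
        char buf[256];
        FILE *f = fopen("/proc/asound/card0/pcm0p/sub0/status", "r");
        if (!f)
            return 0;
        size_t n = fread(buf, 1, sizeof buf - 1, f);
        fclose(f);
        buf[n] = '\0';
        return strstr(buf, "RUNNING") != NULL;
    }

    int main(void)
    {
        int last = 0;
        for (;;) {
            int now = is_playing();
            if (now != last)     /* edge-based: react only to state changes */
                printf("amplifier %s\n", now ? "on" : "off");  /* switch_amplifier(now); */
            last = now;
            sleep(1);
        }
    }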
