19.9. Measuring Throughput, Jitter, and Packet Loss with iperf

You want to measure throughput on your various network segments, and you want to collect jitter and datagram loss statistics. You might want these just as a routine part of periodically checking your network performance, or you may be running a VoIP server like Asterisk, Trixbox, or PBXtra, so you need your network to be in extra-good shape to maintain good call quality.

Use iperf, which is a nifty utility for measuring TCP and UDP performance between two endpoints. It must be installed at both ends of the connection you're measuring; in this example, that is Xena and Penguina. We'll call Xena the server and Penguina the client. First, start iperf on Xena in server mode, then fire it up on Penguina. (The easy way is to do all this on Xena in two X terminals via SSH.)

	carla@xena:~$ iperf -s
	------------------------------------------------------------
	Server listening on TCP port 5001
	TCP window size: 85.3 KByte (default)
	------------------------------------------------------------

	terry@penguina:~$ iperf -c xena
	------------------------------------------------------------
	Client connecting to xena, TCP port 5001
	TCP window size: 16.0 KByte (default)
	------------------------------------------------------------
	[  3] local 192.168.1.76 port 49215 connected with 192.168.1.10 port 5001
	[  3] 0.0-10.0 sec 111 MBytes 92.6 Mbits/sec

And it's done. That's a good clean run, and as fast as you're going to see over Fast Ethernet.
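
By default, iperf runs for 10 seconds and prints a single summary line. If you want to see how throughput varies over the life of the test, you can lengthen the run with -t and request periodic interval reports with -i. The 30-second run with 2-second reports shown here is just an example; pick values that suit your own testing:

	terry@penguina:~$ iperf -c xena -t 30 -i 2

Watching the interval reports is a quick way to spot throughput that sags or oscillates instead of holding steady.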

You can conduct a bidirectional test that runs both ways at once:

	terry@penguina:~$ iperf -c xena -d
	------------------------------------------------------------
	Server listening on TCP port 5001
	TCP window size: 85.3 KByte (default)
	------------------------------------------------------------
	------------------------------------------------------------
	Client connecting to xena, TCP port 5001
	TCP window size: 56.4 KByte (default)
	------------------------------------------------------------
	[  5] local 192.168.1.76 port 59823 connected with 192.168.1.10 port 5001
	[  4] local 192.168.1.76 port 5001 connected with 192.168.1.10 port 58665
	[  5] 0.0-10.0 sec 109 MBytes 91.1 Mbits/sec
	[  4] 0.0-10.0 sec 96.0 MBytes 80.5 Mbits/sec

Or, one way at a time:

	terry@uberpc:~$ iperf -c xena -r

Compare the two to get an idea of how efficient your Ethernet duplexing is.
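
TCP window size also affects your results, as you can see in the test output. If you suspect the default window is limiting throughput, you can override it with -w on both ends. The 128 KB value here is only an illustration; tune it for your own link:

	carla@xena:~$ iperf -s -w 128K
	terry@penguina:~$ iperf -c xena -w 128K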

Troubleshooting multicasting can drive a network administrator to drink, but fortunately, iperf can help. You'll run iperf in server mode on all of your multicast hosts, and then test all of them at once from a single client:

	admin@host1:~$ iperf -sB 239.0.0.1
	admin@host2:~$ iperf -sB 239.0.0.1
	admin@host3:~$ iperf -sB 239.0.0.1
	carla@xena:~$ iperf -c 239.0.0.1

If you're using multicasting for video or audio streaming, you'll want to test with UDP instead of the default TCP, like this:

	admin@host1:~$ iperf -sBu 239.0.0.1
	admin@host2:~$ iperf -sBu 239.0.0.1
	admin@host3:~$ iperf -sBu 239.0.0.1
	carla@xena:~$ iperf -c 239.0.0.1 -ub 512k
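
Multicast test traffic is sent with a time-to-live of 1 by default, so it will not cross a router. If your multicast hosts are on different subnets, raise the TTL on the client with the -T switch. The value of 3 here is just an illustration; use enough hops for your own topology:

	carla@xena:~$ iperf -c 239.0.0.1 -ub 512k -T 3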

Adjust the -b (bandwidth, in bits per second) value to suit your own network; append k for kilobits or m for megabits, as in 512k or 100m. Testing with UDP will generate a number of useful and interesting statistics. If the server is still running, stop it with Ctrl-C, then run this command:

	carla@xena:~$ iperf -su
	-----------------------------------------------------------
	Server listening on UDP port 5001
	Receiving 1470 byte datagrams
	UDP buffer size:   108 KByte (default)
	-----------------------------------------------------------

Then, start the client:

	terry@penguina:~$ iperf -c xena -ub 100m
	------------------------------------------------------------
	Client connecting to xena, UDP port 5001
	Sending 1470 byte datagrams
	UDP buffer size: 108 KByte (default)
	------------------------------------------------------------
	[  3] local 192.168.1.76 port 32774 connected with 192.168.1.10 port 5001
	[  3]  0.0-10.0 sec    114 MBytes  95.7 Mbits/sec
	[  3] Sent 81444 datagrams
	[  3] Server Report:
	[ ID] Interval    Transfer  Bandwidth          Jitter    Lost/Total     Datagrams
	[  3]  0.0-10.0 sec 113 MBytes  94.9 Mbits/sec  0.242 ms  713/81443     (0.88%)
	[  3]  0.0-10.0 sec  1 datagrams received out-of-order

Jitter and datagram loss are two important statistics for streaming media. Jitter over 200 ms is noticeable, like you're driving over a bumpy road, so the 0.242 ms in our test run is excellent. 0.88 percent datagram loss is also insignificant. Depending on the quality of your endpoints, VoIP can tolerate as much as 10 percent datagram loss, though ideally you don't want much over 3–4 percent.

The out-of-order value is also important to streaming media—obviously a bunch of UDP datagrams arriving randomly don't contribute to coherence.

You may adjust the size of the datagrams sent from the client to more closely reflect your real-world conditions. The default is 1,470 bytes, and voice traffic typically runs around 100–360 bytes per datagram (which you could find out for yourself with tcpdump). Set the size in iperf with the -l switch. It looks a bit odd because the available units are kilobytes or megabytes only, so we have to use a fractional value:

	terry@uberpc:~$ iperf -c xena -ub 100m -l .3K
	------------------------------------------------------------
	Client connecting to xena, UDP port 5001
	Sending 307 byte datagrams
	UDP buffer size: 108 KByte (default)
	------------------------------------------------------------
	[  3] local 192.168.1.76 port 32775 connected with 192.168.1.10 port 5001
	[  3]  0.0-10.0 sec  98.2 MBytes  82.3 Mbits/sec
	[  3] Sent 335247 datagrams
	[  3] Server Report:
	[ ID] Interval    Transfer  Bandwidth          Jitter    Lost/Total     Datagrams
	[  3]  0.0-10.0 sec   96.9 MBytes  81.2 Mbits/sec 0.006 ms 4430/335246 (1.3%)
	[  3]  0.0-10.0 sec   1 datagrams received out-of-order
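
If you would rather measure your real voice datagram sizes than guess, a quick tcpdump capture shows the length of each UDP packet. The eth0 interface here is an assumption; substitute your own:

	carla@xena:~$ sudo tcpdump -nn -i eth0 -c 20 udp

Each line of output ends with a length field, which is the UDP payload size in bytes.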

These same tests can be run over the Internet. By default, iperf uses TCP/UDP port 5001; you can specify a different port with the -p switch.
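
For example, to run the whole test on a different port, set -p on both ends; the port number 6001 here is an arbitrary choice:

	carla@xena:~$ iperf -s -p 6001
	terry@penguina:~$ iperf -c xena -p 6001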

Link quality is becoming more important as we run more streaming services over packet-switched networks, and service providers are trying to meet these new needs. Talk to your ISP to see what they can do about link quality for your streaming services.