Determining Optimal Transfer Speed with iPerf

To optimize your transfers, it helps to identify the fastest achievable transfer speed between two storage servers on your network. Use iPerf to measure your maximum bandwidth.

This test should be performed over port 49221, requiring that the Agent services be stopped on both the source and the target. If this is not possible, use a different port. This change may affect your results.

When downloading iPerf, make sure to choose the appropriate version for your operating system and save it to an easily accessible folder.

Source

The Client is the source server from which packets are sent.

iperf3 -c <target> -u -p <port> -t 60 -i 2 -b 1000M

  • -c - designates this as a client system
  • <target> - the IP address of the target machine
  • -u - specifies a UDP test (may not be required on versions other than iPerf3)
  • -p <port> - the port on which to test
  • -t - the length of the test, in seconds
  • -i - the interval in seconds between bandwidth reports
  • -b - the target bandwidth at which to send; the M suffix means megabits per second

Note: In this example, packets are sent at 1000 megabits per second for 60 seconds, which will flood a gigabit line. For a more realistic test, use a transfer time longer than 60 seconds.
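As a sanity check, the relationship between the -b rate, the test duration, and the total data sent is simple arithmetic. The sketch below (Python, for illustration only) uses the 708 Mbits/sec average from the client report later in this article; note that iPerf's M suffix means megabits per second, while its GBytes column is binary (1 GByte = 1024³ bytes).

```python
# Rough sanity check: total data moved = rate (bits/sec) / 8 * seconds.
# iPerf's "M" suffix means megabits per second, and its "GBytes" column
# is binary (1 GByte = 1024**3 bytes).

def expected_gbytes(mbits_per_sec: float, seconds: float) -> float:
    """Data volume in binary GBytes for a given send rate and duration."""
    total_bytes = mbits_per_sec * 1_000_000 / 8 * seconds
    return total_bytes / 1024**3

# At the full -b 1000M for 60 seconds, about 6.98 GBytes would be sent:
print(round(expected_gbytes(1000, 60), 2))   # 6.98
# At the ~708 Mbits/sec the line actually sustained, about 4.95 GBytes,
# matching the summary line of the client report:
print(round(expected_gbytes(708, 60), 2))    # 4.95
```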

Target

The Server is your target machine. Make sure it has an IP address that is reachable from the source server.

iperf3 -s -p <port>

  • -s - designates this machine as the server
  • -p <port> - the port on which the target will listen

Example: iperf3 -s -p 49221

Client Report Example

shasha2b:root [ ~ ] iperf3 -c 10.0.13.125 -p 49221 -t 60 -i 2 -b 1000M
------------------------------------------------------------
Client connecting to 10.0.13.125, UDP port 49221  
Sending 1470 byte datagrams  
UDP buffer size:  126 KByte (default)  
------------------------------------------------------------  
[  3] local 66.48.39.xxx port 37094 connected with 10.0.13.125 port 49221  
[ ID] Interval       Transfer     Bandwidth  
[  3]  0.0- 2.0 sec   167 MBytes   699 Mbits/sec
[  3]  2.0- 4.0 sec   168 MBytes   705 Mbits/sec  
[  3]  4.0- 6.0 sec   172 MBytes   720 Mbits/sec
[  3]  6.0- 8.0 sec   165 MBytes   692 Mbits/sec  
.  
.  
.  
[  3] 54.0-56.0 sec   170 MBytes   711 Mbits/sec  
[  3] 56.0-58.0 sec   173 MBytes   725 Mbits/sec  
[  3] 58.0-60.0 sec   164 MBytes   687 Mbits/sec  
[  3]  0.0-60.0 sec  4.95 GBytes   708 Mbits/sec  
[  3] Sent 3614199 datagrams  
[  3] Server Report:  
[  3]  0.0-60.0 sec  4.63 GBytes   662 Mbits/sec   0.042 ms 235163/3614198 (6.5%)
[  3]  0.0-60.0 sec  1 datagrams received out-of-order
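If you prefer to pull the loss figure out of a report programmatically rather than by eye, the server-report line can be parsed with a short script. This is only a sketch in Python, keyed to the line format shown above:

```python
import re

# The server report ends with "<lost>/<sent> (<pct>%)"; extract all three.
REPORT = "[  3]  0.0-60.0 sec  4.63 GBytes   662 Mbits/sec   0.042 ms 235163/3614198 (6.5%)"

match = re.search(r"(\d+)/(\d+)\s+\(([\d.]+)%\)", REPORT)
lost, sent = int(match.group(1)), int(match.group(2))
reported_pct = float(match.group(3))

# Recompute the loss percentage from the raw datagram counts:
computed_pct = round(lost / sent * 100, 1)
print(lost, sent, reported_pct, computed_pct)
```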

In this example, the bandwidth maxes out at around 700 Mbits/sec even though the client was set to send at 1000M. This is the maximum that this connection can handle.

Over the course of this transfer, the packet loss reaches 6.5%. Rerunning the test at a bandwidth of 700M yields 1.5% packet loss, and at 500M only 0.56%. These results suggest that a bandwidth at or just below 500M is ideal.

Lowering the bandwidth a little at a time until your packet loss is at or below 1% will help determine your optimal transfer speed.
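That step-down procedure can be sketched as a simple loop. This is an illustration only: `run_test` stands in for an actual iPerf run, and the loss figures other than the 1000M, 700M, and 500M readings discussed above are assumed values, not measurements.

```python
# Hypothetical loss readings per -b setting (Mbits/sec -> percent loss).
# The 1000, 700, and 500 entries match the tests in this article; the
# intermediate values are made up to illustrate the search.
HYPOTHETICAL_LOSS = {1000: 6.5, 900: 4.8, 800: 2.9, 700: 1.5, 600: 0.9, 500: 0.56}

def run_test(bandwidth_mbit: int) -> float:
    """Stand-in for running `iperf3 -c <target> -u -b <bandwidth>M ...`
    and reading the loss percentage from the server report."""
    return HYPOTHETICAL_LOSS[bandwidth_mbit]

def find_optimal_bandwidth(start: int = 1000, step: int = 100,
                           max_loss: float = 1.0) -> int:
    """Lower the bandwidth one step at a time until loss <= max_loss."""
    bandwidth = start
    while run_test(bandwidth) > max_loss:
        bandwidth -= step
    return bandwidth

# With the hypothetical readings above, the search stops at 600M (0.9% loss):
print(find_optimal_bandwidth())
```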

Note: Signiant transfers monitor packet loss and, when it falls within the 1-3% range, throttle the transfer speed to provide the best possible transfer. Consequently, you should run your tests until you find the bandwidth that brings packet loss down to that level.