5. November 2011

Throttling Bandwidth with Linux Tools

As a semi-professional admin I do backups of production systems all the time. This means various "tar", "scp", "rsync" and even "lftp" operations, depending on what needs to be backed up.

The problem: Backup interferes with normal operation

The backup runs at the same time as the normal operation of the production system, and the system is already quite busy. Copy and transfer tools take as many resources as they can get from CPU, DMA and network. This usually interferes with normal server operation and can even bring down the server (thank you, DMA).

The solution: Throttling

Many of these tools have a throttling option that limits the bandwidth of network transfers and file system operations.
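All examples below target the same 5 MByte/sec, but each tool expects the limit in a different unit. A small shell sketch of the conversions (using the decimal rounding the examples use; some tools actually interpret "K" as 1024):

```shell
#!/bin/sh
# Convert a 5 MByte/s target into each tool's unit.
mbytes=5
scp_kbit=$(( mbytes * 1000 * 8 ))    # scp -l: Kbit/s      -> 40000
rsync_kbyte=$(( mbytes * 1000 ))     # rsync --bwlimit: KByte/s -> 5000
bytes=$(( mbytes * 1000 * 1000 ))    # lftp net:limit-rate and cstream -t: bytes/s -> 5000000
echo "$scp_kbit $rsync_kbyte $bytes"
```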

scp:
  • Option -l limits the bandwidth used, specified in Kbit/s. 
  • Example for max 5 MByte/sec: scp -l 40000 myfile $user@$host:
rsync:
  • Option --bwlimit limits I/O bandwidth in KBytes per second. 
  • Example for max 5 MByte/sec: rsync --bwlimit 5000 myfile destfolder 
lftp:
  • Setting "net:limit-rate" limits transfer rate on data connection in bytes per second. 
  • Example for max. 5 MByte/sec: echo "set net:limit-rate 5000000; put myfile" | lftp -u $user,$password $host
For anything else, e.g. tar, use the cstream command:
  • cstream can limit throughput of pipes with option -t in bytes per second.
  • Example for max 5 MByte/sec: tar cf - myfolder | cstream -t -5000k > myfolder.tar
  • Use "-5000k" instead of "5000k" to never exceed the max. With a positive value, cstream banks throughput it could not use and may catch up later, which makes the transfer more bursty; the negative value enforces the limit at all times.
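The idea behind cstream's -t option can be sketched in plain shell: move one rate-sized block per second, so the average stays at the limit. This is only an illustration of the technique, not cstream's actual implementation; for real backups, use cstream itself.

```shell
#!/bin/sh
# throttle_cp SRC DST RATE: copy SRC to DST at roughly RATE bytes/s
# by transferring one RATE-sized block per second with dd.
# Sketch of the idea behind "cstream -t"; hypothetical helper, not part of cstream.
throttle_cp() {
  src=$1 dst=$2 rate=$3
  size=$(wc -c < "$src")
  blocks=$(( (size + rate - 1) / rate ))   # round up to whole blocks
  : > "$dst"
  i=0
  while [ "$i" -lt "$blocks" ]; do
    dd if="$src" bs="$rate" skip="$i" count=1 2>/dev/null >> "$dst"
    i=$(( i + 1 ))
    [ "$i" -lt "$blocks" ] && sleep 1      # pace: one block per second
  done
}

# Demo: copy 12 bytes at 4 bytes/s (takes about 2 seconds).
printf 'hello backup' > /tmp/throttle_src.$$
throttle_cp /tmp/throttle_src.$$ /tmp/throttle_dst.$$ 4
wc -c < /tmp/throttle_dst.$$
rm -f /tmp/throttle_src.$$ /tmp/throttle_dst.$$
```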
happy_throttling()
