PVE Backup Speed Optimization

bferrell

Well-Known Member
I'm looking to improve my backup speed in PVE (well, and overall VM speed, but that's a story for another day). I've looked around and haven't found anything on optimization, or even on what normal or 'good' performance would be.

I have 4 nodes, all on PVE 6.1-7, all 12th-gen Dell R620/R720 machines with dual 8-core processors and 192GB of RAM. Each has three active network connections: a 10G bridge on 192.168.100.0/24 for the guests, a 10G link on the SAN subnet 192.168.101.0/24 (where three FreeNAS hosts serve NFS shares for images, ISOs, and backups), and a 1G Corosync link on 192.168.102.0/24.
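
For reference, the per-node layout looks roughly like this (a sketch only; the NIC names are hypothetical, and the addresses shown are node 4's):
Code:
# /etc/network/interfaces (sketch for node 4; NIC names are illustrative)
auto vmbr0
iface vmbr0 inet static
    address 192.168.100.14/24
    bridge-ports enp3s0f0      # 10G, guest bridge network
    bridge-stp off
    bridge-fd 0

auto enp3s0f1
iface enp3s0f1 inet static
    address 192.168.101.14/24  # 10G, SAN/NFS network

auto eno1
iface eno1 inet static
    address 192.168.102.14/24  # 1G, Corosync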

When I copy a file from a guest VM to a Samba share on FreeNAS I get about 150 MB/s, but when I copy from my macOS physical machines I get nearly 400 MB/s, so I'd like to improve that as well. The VM image is stored on FreeNAS too, but I would have thought the transfer would pass through the host's physical memory before going back out to the NFS share (the file is only 5GB), so I wouldn't have expected it to be so bottlenecked; maybe that was unreasonable.

Below is some performance testing information. I'm wondering if my expectations are off, or where I should start looking to tune this setup. Thanks for any advice. I'll post testing results as replies, since the forum doesn't like overly long posts.

UPDATE: I included a bunch of network details in this other posting, but if you follow the details below, I no longer believe this is a network or storage issue. It really looks to me like a bottleneck in the vzdump process.
 
iperf3 between the new FreeNAS 11.3-U1 host (an R720XD with 192GB RAM and dual 8-core CPUs, a dozen 10TB drives in four 3-disk VDEVs, OS on a 120GB SSD) and node 4 gets 9Gbps, so the host (SAN) network is up to the task.

Code:
root@freenas[~] (192.168.101.104)# iperf3 -c 192.168.101.14
Connecting to host 192.168.101.14 (NODE 4), port 5201
[  5] local 192.168.101.104 port 45808 connected to 192.168.101.14 port 5201
[ ID] Interval           Transfer     Bitrate         Retr  Cwnd
[  5]   0.00-1.00   sec  1.08 GBytes  9.24 Gbits/sec    0   2.98 MBytes
[  5]   1.00-2.00   sec  1.10 GBytes  9.42 Gbits/sec    0   2.99 MBytes
[  5]   2.00-3.00   sec  1.10 GBytes  9.41 Gbits/sec    0   2.99 MBytes
[  5]   3.00-4.00   sec  1.10 GBytes  9.41 Gbits/sec    0   2.99 MBytes
[  5]   4.00-5.00   sec  1.10 GBytes  9.41 Gbits/sec    0   2.99 MBytes
[  5]   5.00-6.00   sec  1.10 GBytes  9.41 Gbits/sec    0   2.99 MBytes
[  5]   6.00-7.00   sec  1.10 GBytes  9.42 Gbits/sec    0   2.99 MBytes
[  5]   7.00-8.00   sec  1.10 GBytes  9.41 Gbits/sec    0   2.99 MBytes
[  5]   8.00-9.00   sec  1.10 GBytes  9.41 Gbits/sec    0   2.99 MBytes
[  5]   9.00-10.00  sec  1.10 GBytes  9.41 Gbits/sec    0   2.99 MBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-10.00  sec  10.9 GBytes  9.40 Gbits/sec    0             sender
[  5]   0.00-10.41  sec  10.9 GBytes  9.02 Gbits/sec                  receiver
 
iperf3 between the new FreeNAS host and node 4 on the bridge (192.168.100.0/24) network:
Code:
root@freenas[~]# iperf3 -c 192.168.100.14
Connecting to host 192.168.100.14, port 5201
[  5] local 192.168.101.104 port 58164 connected to 192.168.100.14 port 5201
[ ID] Interval           Transfer     Bitrate         Retr  Cwnd
[  5]   0.00-1.00   sec   934 MBytes  7.82 Gbits/sec  158    285 KBytes
[  5]   1.00-2.00   sec  1.08 GBytes  9.27 Gbits/sec   61    473 KBytes
[  5]   2.00-3.00   sec  1.07 GBytes  9.22 Gbits/sec   89    455 KBytes
[  5]   3.00-4.00   sec  1.07 GBytes  9.17 Gbits/sec   37    439 KBytes
[  5]   4.00-5.00   sec  1.05 GBytes  9.05 Gbits/sec  124    486 KBytes
[  5]   5.00-6.00   sec  1.08 GBytes  9.29 Gbits/sec   99    568 KBytes
[  5]   6.00-7.00   sec  1.03 GBytes  8.88 Gbits/sec   62    329 KBytes
[  5]   7.00-8.00   sec  1.07 GBytes  9.24 Gbits/sec  112    392 KBytes
[  5]   8.00-9.00   sec  1.08 GBytes  9.27 Gbits/sec  143    451 KBytes
[  5]   9.00-10.00  sec  1.08 GBytes  9.25 Gbits/sec  471    269 KBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-10.00  sec  10.5 GBytes  9.05 Gbits/sec  1356             sender
[  5]   0.00-10.41  sec  10.5 GBytes  8.68 Gbits/sec                  receiver
 
Backups from node 4 to the new FreeNAS peak and stay at about 100 MB/s. Note that the VM image is on FreeNAS host #2 and the backup is going to FreeNAS host #4, so same network but different boxes.
Code:
INFO: starting new backup job: vzdump 505 --storage FN4_Backup --remove 0 --mode snapshot --compress lzo --node svr-04
INFO: Starting Backup of VM 505 (qemu)
INFO: Backup started at 2020-03-06 13:25:04
INFO: status = running
INFO: update VM 505: -lock backup
INFO: VM Name: guacamole
INFO: include disk 'scsi0' 'FN2_IMAGES:505/vm-505-disk-0.qcow2' 100G
INFO: backup mode: snapshot
INFO: ionice priority: 7
INFO: creating archive '/mnt/pve/FN4_Backup/dump/vzdump-qemu-505-2020_03_06-13_25_03.vma.lzo'
INFO: issuing guest-agent 'fs-freeze' command
INFO: issuing guest-agent 'fs-thaw' command
INFO: started backup task '6a47c484-8856-4fd2-845c-57a23ded9ad4'
INFO: status: 0% (237502464/107374182400), sparse 0% (104325120), duration 3, read/write 79/44 MB/s
INFO: status: 1% (1083375616/107374182400), sparse 0% (277643264), duration 17, read/write 60/48 MB/s
INFO: status: 2% (2158559232/107374182400), sparse 0% (1020219392), duration 29, read/write 89/27 MB/s
INFO: status: 3% (3237281792/107374182400), sparse 1% (2058776576), duration 38, read/write 119/4 MB/s
INFO: status: 4% (4382195712/107374182400), sparse 2% (3188822016), duration 48, read/write 114/1 MB/s
INFO: status: 5% (5453447168/107374182400), sparse 3% (4259876864), duration 57, read/write 119/0 MB/s
INFO: status: 6% (6491209728/107374182400), sparse 4% (4918677504), duration 71, read/write 74/27 MB/s
INFO: status: 7% (7530020864/107374182400), sparse 4% (5089689600), duration 90, read/write 54/45 MB/s
INFO: status: 8% (8631418880/107374182400), sparse 4% (5222445056), duration 110, read/write 55/48 MB/s
INFO: status: 9% (9670033408/107374182400), sparse 4% (5348052992), duration 131, read/write 49/43 MB/s
INFO: status: 10% (10776150016/107374182400), sparse 5% (5569011712), duration 151, read/write 55/44 MB/s
INFO: status: 11% (11817648128/107374182400), sparse 5% (5692731392), duration 170, read/write 54/48 MB/s
INFO: status: 12% (12907446272/107374182400), sparse 5% (5995233280), duration 187, read/write 64/46 MB/s
INFO: status: 13% (14021492736/107374182400), sparse 6% (6751797248), duration 199, read/write 92/29 MB/s
INFO: status: 14% (15103557632/107374182400), sparse 7% (7825182720), duration 207, read/write 135/1 MB/s
INFO: status: 15% (16149118976/107374182400), sparse 8% (8834752512), duration 216, read/write 116/3 MB/s
...
INFO: status: 100% (107374182400/107374182400), sparse 90% (97393500160), duration 998, read/write 129/0 MB/s
INFO: transferred 107374 MB in 998 seconds (107 MB/s)
INFO: archive file size: 5.24GB
INFO: Finished Backup of VM 505 (00:16:42)
INFO: Backup finished at 2020-03-06 13:41:45
INFO: Backup job finished successfully
TASK OK
 
I get 10G between VMs on a host:
Code:
bferrell@plex:~$ iperf3 -s
-----------------------------------------------------------
Server listening on 5201
-----------------------------------------------------------
Accepted connection from 192.168.100.124, port 46588
[  5] local 192.168.100.51 port 5201 connected to 192.168.100.124 port 46590
[ ID] Interval           Transfer     Bandwidth
[  5]   0.00-1.00   sec   622 MBytes  5.21 Gbits/sec                  
[  5]   1.00-2.00   sec   875 MBytes  7.34 Gbits/sec                  
[  5]   2.00-3.00   sec   868 MBytes  7.29 Gbits/sec                  
[  5]   3.00-4.00   sec   972 MBytes  8.15 Gbits/sec                  
[  5]   4.00-5.00   sec  1018 MBytes  8.54 Gbits/sec                  
[  5]   5.00-6.00   sec  1.02 GBytes  8.72 Gbits/sec                  
[  5]   6.00-7.00   sec  1.18 GBytes  10.1 Gbits/sec                  
[  5]   7.00-8.00   sec  1.07 GBytes  9.19 Gbits/sec                  
[  5]   8.00-9.00   sec  1.66 GBytes  14.2 Gbits/sec                  
[  5]   9.00-10.00  sec  1.93 GBytes  16.6 Gbits/sec                  
[  5]  10.00-10.04  sec  61.8 MBytes  14.2 Gbits/sec                  
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bandwidth       Retr
[  5]   0.00-10.04  sec  11.2 GBytes  9.56 Gbits/sec    7             sender
[  5]   0.00-10.04  sec  11.2 GBytes  9.56 Gbits/sec                  receiver

But I only get 2Gbps between a VM and the FreeNAS box. I'm not sure why this is so slow, and although it's probably not a factor for backup performance, it will be for Plex and OS-level read/write. This seems really low for iperf (see the route check after the output below).
Code:
root@freenas[~]# iperf3 -c 192.168.100.51
Connecting to host 192.168.100.51, port 5201
[  5] local 192.168.101.104 port 36547 connected to 192.168.100.51 port 5201
[ ID] Interval           Transfer     Bitrate         Retr  Cwnd
[  5]   0.00-1.00   sec   202 MBytes  1.70 Gbits/sec    3   74.1 KBytes
[  5]   1.00-2.00   sec   208 MBytes  1.75 Gbits/sec    3    107 KBytes
[  5]   2.00-3.00   sec   205 MBytes  1.72 Gbits/sec    3   99.8 KBytes
[  5]   3.00-4.00   sec   223 MBytes  1.87 Gbits/sec    3    120 KBytes
[  5]   4.00-5.00   sec   236 MBytes  1.98 Gbits/sec    2    127 KBytes
[  5]   5.00-6.00   sec   194 MBytes  1.63 Gbits/sec    3   75.5 KBytes
[  5]   6.00-7.00   sec   239 MBytes  2.00 Gbits/sec    3    124 KBytes
[  5]   7.00-8.00   sec   196 MBytes  1.64 Gbits/sec    2    114 KBytes
[  5]   8.00-9.00   sec   251 MBytes  2.11 Gbits/sec    3    101 KBytes
[  5]   9.00-10.00  sec   214 MBytes  1.80 Gbits/sec    3    127 KBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-10.00  sec  2.12 GBytes  1.82 Gbits/sec   28             sender
[  5]   0.00-10.00  sec  2.12 GBytes  1.82 Gbits/sec                  receiver
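
Since that iperf run crosses from the SAN subnet (192.168.101.104) to the bridge subnet (192.168.100.51), it's worth confirming the path the traffic actually takes; if it's hopping through a 1G router, numbers like these would make sense. A quick sketch of the check, from the VM side:
Code:
# confirm which interface/next-hop the VM uses to reach the SAN subnet
ip route get 192.168.101.104
# any 1G hop in the path would cap throughput well below 10G
traceroute -n 192.168.101.104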

And I can saturate the Corosync network as well:
Code:
root@svr-03:~# iperf3 -c 192.168.102.14
Connecting to host 192.168.102.14, port 5201
[  5] local 192.168.102.13 port 36786 connected to 192.168.102.14 port 5201
[ ID] Interval           Transfer     Bitrate         Retr  Cwnd
[  5]   0.00-1.00   sec   114 MBytes   957 Mbits/sec   18    355 KBytes
[  5]   1.00-2.00   sec   112 MBytes   939 Mbits/sec    0    355 KBytes
[  5]   2.00-3.00   sec   112 MBytes   939 Mbits/sec   35    351 KBytes
[  5]   3.00-4.00   sec   113 MBytes   945 Mbits/sec   39    362 KBytes
[  5]   4.00-5.00   sec   112 MBytes   939 Mbits/sec   21    366 KBytes
[  5]   5.00-6.00   sec   112 MBytes   940 Mbits/sec  108    314 KBytes
[  5]   6.00-7.00   sec   112 MBytes   944 Mbits/sec   78    209 KBytes
[  5]   7.00-8.00   sec   112 MBytes   939 Mbits/sec    0    365 KBytes
[  5]   8.00-9.00   sec   112 MBytes   939 Mbits/sec    0    365 KBytes
[  5]   9.00-10.00  sec   113 MBytes   946 Mbits/sec    0    366 KBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-10.00  sec  1.10 GBytes   943 Mbits/sec  299             sender
[  5]   0.00-10.00  sec  1.10 GBytes   941 Mbits/sec                  receiver
 
UPDATE: Somebody on Reddit tipped me to try ionice in vzdump. I just set it to 2 with little effect. The previous backup took 16:42, this backup 16:03. I guess I could look at the local temp directory next. If I do that, does it need enough space for the entire VM, or will it start to spool it (I have one NextCloud VM which is quite large)?
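
For reference, these knobs live in /etc/vzdump.conf; the values below are just the ones I've been experimenting with, not recommendations:
Code:
# /etc/vzdump.conf (experiment values, not recommendations)
tmpdir: /tmp     # local temp dir for the backup job
ionice: 2        # I/O priority; only affects best-effort scheduling
bwlimit: 0       # KB/s; 0 = unlimited
pigz: 16         # use pigz with N threads in place of gzip (gzip compression only)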
 
UPDATE: I just found the migrate option in datacenter.cfg and set migration to use my 10G host network, and now migrations are awesome (2020-03-06 23:36:34 migration speed: 819.20 MB/s - downtime 594 ms). So I think I just need to add a default route somewhere for the SAN traffic flow. I'll look into that next.
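
For anyone following along, the setting looks like this in /etc/pve/datacenter.cfg (the subnet shown is illustrative; point it at whichever 10G subnet you want migration traffic on):
Code:
# /etc/pve/datacenter.cfg
migration: type=secure,network=192.168.100.0/24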

Well, maybe I spoke too soon. Subsequent tests are under 100 MB/s, so I'm not sure what's going on there. iperf3 is always maxed out. Here are tests of all 3 networks from FreeNAS server #4 to cluster node 4. It has a 10G connection at 192.168.101.104.

192.168.100.0/24 - main cluster bridge network
192.168.101.0/24 - cluster SAN network to the FreeNAS boxes
192.168.102.0/24 - cluster Corosync network

Code:
root@freenas[~]# iperf3 -c 192.168.100.14
Connecting to host 192.168.100.14, port 5201
[  5] local 192.168.101.104 port 31060 connected to 192.168.100.14 port 5201
[ ID] Interval           Transfer     Bitrate         Retr  Cwnd
[  5]   0.00-1.00   sec   853 MBytes  7.15 Gbits/sec   79    228 KBytes
[  5]   1.00-2.00   sec   975 MBytes  8.18 Gbits/sec   85    192 KBytes
[  5]   2.00-3.00   sec  1.03 GBytes  8.83 Gbits/sec  113    232 KBytes
[  5]   3.00-4.00   sec  1.06 GBytes  9.11 Gbits/sec   13    630 KBytes
[  5]   4.00-5.00   sec  1.05 GBytes  8.99 Gbits/sec   36    189 KBytes
[  5]   5.00-6.00   sec  1.03 GBytes  8.87 Gbits/sec  103    148 KBytes
[  5]   6.00-7.00   sec  1023 MBytes  8.58 Gbits/sec  137    192 KBytes
[  5]   7.00-8.00   sec  1002 MBytes  8.40 Gbits/sec   26    234 KBytes
[  5]   8.00-9.00   sec   998 MBytes  8.37 Gbits/sec  141    205 KBytes
[  5]   9.00-10.00  sec  1.05 GBytes  9.05 Gbits/sec    5    442 KBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-10.00  sec  9.96 GBytes  8.55 Gbits/sec  738             sender
[  5]   0.00-10.41  sec  9.96 GBytes  8.22 Gbits/sec                  receiver

iperf Done.
root@freenas[~]# iperf3 -c 192.168.101.14
Connecting to host 192.168.101.14, port 5201
[  5] local 192.168.101.104 port 25615 connected to 192.168.101.14 port 5201
[ ID] Interval           Transfer     Bitrate         Retr  Cwnd
[  5]   0.00-1.00   sec  1.09 GBytes  9.40 Gbits/sec    0   2.98 MBytes
[  5]   1.00-2.00   sec  1.08 GBytes  9.31 Gbits/sec    0   2.99 MBytes
[  5]   2.00-3.00   sec  1.10 GBytes  9.41 Gbits/sec    0   2.99 MBytes
[  5]   3.00-4.00   sec  1.10 GBytes  9.41 Gbits/sec    0   2.99 MBytes
[  5]   4.00-5.00   sec  1.10 GBytes  9.41 Gbits/sec    0   2.99 MBytes
[  5]   5.00-6.00   sec  1.10 GBytes  9.41 Gbits/sec    0   2.99 MBytes
[  5]   6.00-7.00   sec  1.10 GBytes  9.41 Gbits/sec    0   2.99 MBytes
[  5]   7.00-8.00   sec  1.10 GBytes  9.41 Gbits/sec    0   2.99 MBytes
[  5]   8.00-9.00   sec  1.10 GBytes  9.41 Gbits/sec    0   2.99 MBytes
[  5]   9.00-10.00  sec  1.10 GBytes  9.41 Gbits/sec    0   2.99 MBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-10.00  sec  10.9 GBytes  9.40 Gbits/sec    0             sender
[  5]   0.00-10.41  sec  10.9 GBytes  9.03 Gbits/sec                  receiver

iperf Done.
root@freenas[~]# iperf3 -c 192.168.102.14
Connecting to host 192.168.102.14, port 5201
[  5] local 192.168.101.104 port 40628 connected to 192.168.102.14 port 5201
[ ID] Interval           Transfer     Bitrate         Retr  Cwnd
[  5]   0.00-1.00   sec   113 MBytes   945 Mbits/sec  351    127 KBytes
[  5]   1.00-2.00   sec   112 MBytes   938 Mbits/sec  141    190 KBytes
[  5]   2.00-3.00   sec   112 MBytes   937 Mbits/sec  186    143 KBytes
[  5]   3.00-4.00   sec   112 MBytes   937 Mbits/sec  139    204 KBytes
[  5]   4.00-5.00   sec   112 MBytes   937 Mbits/sec  181    158 KBytes
[  5]   5.00-6.00   sec   112 MBytes   937 Mbits/sec  138    219 KBytes
[  5]   6.00-7.00   sec   111 MBytes   931 Mbits/sec  131    208 KBytes
[  5]   7.00-8.00   sec   112 MBytes   937 Mbits/sec  182    162 KBytes
[  5]   8.00-9.00   sec   112 MBytes   936 Mbits/sec  140    215 KBytes
[  5]   9.00-10.00  sec   111 MBytes   934 Mbits/sec  182    175 KBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-10.00  sec  1.09 GBytes   937 Mbits/sec  1771             sender
[  5]   0.00-10.41  sec  1.09 GBytes   899 Mbits/sec                  receiver

iperf Done.
 
Nope, my migrations are very fast now on the whole, so that's good. I also added all 3 networks to my Corosync config, with the cluster 10G network as ring0 in case that was slowing things down, and (after a brief scare where I boofed the config) that's working. But backup and restore are still slower than seems reasonable.
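
The resulting link layout in /etc/pve/corosync.conf looks roughly like this (an excerpt and a sketch only, one node shown):
Code:
# /etc/pve/corosync.conf (excerpt; sketch of the link layout, node 4 shown)
nodelist {
  node {
    name: svr-04
    nodeid: 4
    quorum_votes: 1
    ring0_addr: 192.168.100.14   # 10G cluster/bridge network as link 0
    ring1_addr: 192.168.102.14   # original 1G Corosync network
    ring2_addr: 192.168.101.14   # 10G SAN network
  }
}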
 
So, I've tried ionice, bwlimit, and a local temp disk, and I can't tell that any of them gets me faster backups. Is anyone actually doing better than this, and if so, what settings did you use?
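
For concreteness, these were exercised as one-off runs along these lines (an illustrative invocation; the flags map to the vzdump.conf keys above):
Code:
# one-off backup run overriding the config file settings
vzdump 505 --storage FN4_Backup --mode snapshot --compress lzo \
    --ionice 0 --bwlimit 0 --tmpdir /tmp --remove 0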
 
I saw in another thread that this might be helpful. Currently running a backup. This VM is doing nothing and is currently the only VM running on this node (R720, dual 8-core, 192GB RAM, dual 10G copper Broadcom Ethernet). I can't tell if these are good or bad, but the second set seems to show the box is mostly idle.

Code:
root@svr-04:~# vmstat 1
procs -----------memory---------- ---swap-- -----io---- -system-- ------cpu-----
 r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa st
 0  0  90380 989808  37052 191540208    0    0     9     1   30   97  3  2 96  0  0
 1  0  90380 945840  37052 191603584    0    0  2560     0 10500 17022  1  2 96  0  0
 2  0  90380 881072  37060 191668080    0    0     0    40 9036 15816  1  2 97  0  0
 0  0  90380 817756  37060 191732864    0    0     0     0 8897 15387  1  2 97  0  0
 0  1  90380 787420  37060 191761280    0    0     0     0 28122 34967  1  3 95  1  0
 0  0  90380 753564  37060 191805536    0    0     0     0 9156 16790  1  2 97  0  0
 1  0  90380 709088  37060 191849584    0    0     0     0 8654 15073  1  2 98  0  0
 3  0  90380 994628  37056 191562816    0  116     0   136 8965 15513  1  2 97  0  0
 0  0  90380 937648  37056 191617056    0    0     0     0 9179 15549  1  1 98  0  0
 4  0  90380 852984  37056 191674432    4    0     4     0 22453 28583  1  4 95  0  0
 0  0  90380 795736  37056 191737952    0    0     0     0 10840 20587  1  2 97  0  0
 2  0  90380 749376  37056 191801504    0    0  2560     0 10022 17327  1  2 97  0  0
 1  0  90380 692588  37064 191861472    0    0     0    32 10000 17302  1  2 97  0  0
 0  0  92684 954856  37052 191597760    0 2156     0  2156 9472 15342  1  2 97  0  0
17  1  92684 891892  37052 191648736    0    0     0     0 25317 30429  1  4 95  1  0
 1  0  92684 841144  37052 191712688    0    0     0     0 11462 20319  1  2 97  0  0
 0  0  92684 778532  37052 191775888    0    0     0     0 8593 14763  1  2 97  0  0
 0  0  92684 711148  37060 191839808    4    0     4    16 8807 15707  1  2 98  0  0
 1  0  92940 976892  37048 191575344    0  120     0   120 9301 16315  1  3 96  0  0
 2  1  92940 890520  37048 191629056    0    0  2560     0 20187 26844  1  4 95  0  0
 1  0  92940 840436  37048 191687584    0    0     0     0 18624 26428  1  3 96  0  0
 2  1  92940 802368  37048 191747152    0    0     0     0 10035 16780  1  2 97  0  0
 0  0  92940 739904  37056 191810224    0    0     0    32 9423 16348  1  2 98  0  0
 2  1  92940 994916  37044 191546752    0  112     0   992 17178 25056  1  4 96  0  0
 1  0  92940 963692  37044 191589856    0    0     0   120 14353 19515  1  2 96  1  0
 1  0  92940 898904  37044 191652720    0    0     0     0 9203 14601  1  1 98  0  0
 0  0  92940 839208  37044 191712704    0    0     0     0 8891 14100  1  2 98  0  0
 0  0  92940 775320  37052 191772896    0    0     0    40 8745 13900  1  2 97  0  0
 1  0  92940 730268  37052 191811984    0    0   640     0 8243 13533  1  2 97  0  0
 2  0  92940 756388  37048 191769424    0    8  1920     8 24544 33947  1  4 95  0  0
 2  0  92940 950320  37040 191572288    0  104     0   104 8784 14517  1  2 97  0  0
 1  0  92940 908808  37040 191637376    0    0     0     0 9577 16456  1  2 97  0  0
 0  0  92940 840812  37048 191704080    0    0     0    28 9612 15610  1  2 97  0  0
 0  0  92940 774208  37048 191769728    0    0     0     0 9669 14758  1  2 97  0  0
 1  0  92940 732424  37048 191820624    0    0     0     0 26318 36746  1  4 95  1  0
 0  0  92940 1005748  37036 191546336    4  100     4   100 8937 13825  1  2 97  0  0

and

Code:
root@svr-04:~# iostat -kxz 1
Linux 5.3.13-3-pve (svr-04)     03/08/2020      _x86_64_        (32 CPU)

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           2.53    0.00    1.55    0.05    0.00   95.87

Device            r/s     w/s     rkB/s     wkB/s   rrqm/s   wrqm/s  %rrqm  %wrqm r_await w_await aqu-sz rareq-sz wareq-sz  svctm  %util
sda              2.41    2.62    183.47     50.47     0.37     2.70  13.26  50.70    0.26    0.22   0.00    76.10    19.25   1.19   0.60
sdb              0.81    0.00    102.82      0.00     0.00     0.00   0.00   0.00    0.44    0.00   0.00   126.65     0.00   2.25   0.18
dm-0             0.26    1.76      1.22      7.04     0.00     0.00   0.00   0.00    0.13    0.22   0.00     4.74     4.00   0.60   0.12
dm-1             1.31    3.56     27.84     44.24     0.00     0.00   0.00   0.00    0.59    0.12   0.00    21.19    12.44   0.55   0.27
dm-2             0.00    0.00      0.02      0.00     0.00     0.00   0.00   0.00    0.24    0.00   0.00     4.18     4.00   3.47   0.00

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           1.32    0.00    2.29    0.00    0.00   96.39

Device            r/s     w/s     rkB/s     wkB/s   rrqm/s   wrqm/s  %rrqm  %wrqm r_await w_await aqu-sz rareq-sz wareq-sz  svctm  %util
sda             13.00    0.00   1540.00      0.00     0.00     0.00   0.00   0.00    0.08    0.00   0.00   118.46     0.00   1.85   2.40
sdb              8.00    0.00   1024.00      0.00     0.00     0.00   0.00   0.00    0.12    0.00   0.00   128.00     0.00   2.00   1.60
dm-1             1.00    0.00      4.00      0.00     0.00     0.00   0.00   0.00    0.00    0.00   0.00     4.00     0.00   4.00   0.40
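
To pin down which process is actually moving the bytes during the backup (for qemu snapshot backups I believe the reads happen inside the VM's kvm process, with vzdump and the compressor on the write side), something like this might help:
Code:
# show only processes currently doing I/O, refresh every second
iotop -o -d 1
# or per-process disk stats (sysstat package)
pidstat -d 1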
 
Here's another piece of data. This is to an SMB share on the FreeNAS, but it shows the server can feed data to a guest VM. My Mac can copy to this share at nearly 500 MB/s. I'll try mapping the NFS share to see what its theoretical performance limit is.

[screenshot: copy_smb.JPG - SMB copy throughput from the guest VM]
 
Update - OK, the FreeNAS NFS share can do at least a sustained 200 MB/s read - again, this is a guest VM mapped to the NFS share on FreeNAS reading at over 200 MB/s, so why can't backups do at least this well?

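A simple way to reproduce that kind of number is a plain sequential read from the NFS mount (the path and file name here are just examples):
Code:
# sequential read from the NFS mount inside the guest
dd if=/mnt/nfs/test/bigfile of=/dev/null bs=1M status=progress
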
Also, here's iperf3 from the guest VM (Ubuntu 16.04 Plex server) to one of the FreeNAS boxes at basically 10G speeds. There really has to be some way to optimize the backup/restore process. Is anybody doing better than ~100 MB/s? Ideas?


Code:
bferrell@plex:/mnt/nfs/test$ iperf3 -c 192.168.101.101
Connecting to host 192.168.101.101, port 5201
[  4] local 192.168.101.51 port 38028 connected to 192.168.101.101 port 5201
[ ID] Interval           Transfer     Bandwidth       Retr  Cwnd
[  4]   0.00-1.00   sec   910 MBytes  7.63 Gbits/sec  294   1.25 MBytes      
[  4]   1.00-2.00   sec  1.09 GBytes  9.40 Gbits/sec    2   1.37 MBytes      
[  4]   2.00-3.00   sec  1.09 GBytes  9.35 Gbits/sec    2   1.54 MBytes      
[  4]   3.00-4.00   sec  1.09 GBytes  9.34 Gbits/sec    0   2.01 MBytes      
[  4]   4.00-5.00   sec  1012 MBytes  8.48 Gbits/sec  166   1.68 MBytes      
[  4]   5.00-6.00   sec   964 MBytes  8.09 Gbits/sec    2   1.19 MBytes      
[  4]   6.00-7.00   sec  1.06 GBytes  9.10 Gbits/sec    0   1.71 MBytes      
[  4]   7.00-8.00   sec  1.10 GBytes  9.41 Gbits/sec    0   2.12 MBytes      
[  4]   8.00-9.00   sec  1.09 GBytes  9.41 Gbits/sec    1   1.92 MBytes      
[  4]   9.00-10.00  sec  1.06 GBytes  9.10 Gbits/sec    0   2.22 MBytes      
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bandwidth       Retr
[  4]   0.00-10.00  sec  10.4 GBytes  8.93 Gbits/sec  467             sender
[  4]   0.00-10.00  sec  10.4 GBytes  8.93 Gbits/sec                  receiver

iperf Done.


[screenshot: copy_nfs.JPG - NFS copy throughput from the guest VM]
 
Thanks, but my understanding is that pigz only comes into play if you pick gzip compression, and I've selected no compression or lzo/fast.
Nonetheless, I tried the pigz configuration just to be sure, and it does not affect my speed.

No compression is a little faster, but not much. I also tried ionice at 10 with a peak speed of 109 MB/s, so it appears there is some bottleneck around 110 MB/s.

Peak backup speed by compression setting:
No compression: 105 MB/s
LZO: 104 MB/s
gzip: 103 MB/s

All runs used these vzdump options:
tmpdir: /tmp
bwlimit: 9000000
ionice: 0
pigz: 16

This run is with pigz=16 (lzo/fast compression selected)
Code:
INFO: starting new backup job: vzdump 505 --node svr-04 --compress lzo --storage FN2_Backup --mode snapshot --remove 0
INFO: Starting Backup of VM 505 (qemu)
INFO: Backup started at 2020-03-10 09:20:59
INFO: status = running
INFO: update VM 505: -lock backup
INFO: VM Name: guacamole
INFO: include disk 'scsi0' 'FN2_IMAGES:505/vm-505-disk-0.qcow2' 100G
INFO: backup mode: snapshot
INFO: bandwidth limit: 9000000 KB/s
INFO: ionice priority: 0
INFO: creating archive '/mnt/pve/FN2_Backup/dump/vzdump-qemu-505-2020_03_10-09_20_59.vma.lzo'
INFO: issuing guest-agent 'fs-freeze' command
INFO: issuing guest-agent 'fs-thaw' command
INFO: started backup task 'd2f39638-de25-4182-8363-ac735dd6d1d8'
INFO: status: 0% (205324288/107374182400), sparse 0% (105259008), duration 3, read/write 68/33 MB/s
INFO: status: 1% (1095041024/107374182400), sparse 0% (257196032), duration 19, read/write 55/46 MB/s
INFO: status: 2% (2210856960/107374182400), sparse 0% (1035227136), duration 33, read/write 79/24 MB/s
INFO: status: 3% (3257335808/107374182400), sparse 1% (2056888320), duration 43, read/write 104/2 MB/s
INFO: status: 4% (4345430016/107374182400), sparse 2% (3130179584), duration 54, read/write 98/1 MB/s
INFO: status: 5% (5386010624/107374182400), sparse 3% (4170760192), duration 64, read/write 104/0 MB/s
INFO: status: 6% (6511198208/107374182400), sparse 4% (4916920320), duration 79, read/write 75/25 MB/s

This run is with pigz=16, no compression:
Code:
INFO: starting new backup job: vzdump 505 --mode snapshot --remove 0 --node svr-04 --compress 0 --storage FN2_Backup
INFO: Starting Backup of VM 505 (qemu)
INFO: Backup started at 2020-03-10 09:23:40
INFO: status = running
INFO: update VM 505: -lock backup
INFO: VM Name: guacamole
INFO: include disk 'scsi0' 'FN2_IMAGES:505/vm-505-disk-0.qcow2' 100G
INFO: backup mode: snapshot
INFO: bandwidth limit: 9000000 KB/s
INFO: ionice priority: 0
INFO: creating archive '/mnt/pve/FN2_Backup/dump/vzdump-qemu-505-2020_03_10-09_23_40.vma'
INFO: issuing guest-agent 'fs-freeze' command
INFO: issuing guest-agent 'fs-thaw' command
INFO: started backup task 'ae3e5716-4009-44a2-a2b8-b10e77ef4d53'
INFO: status: 0% (311230464/107374182400), sparse 0% (107986944), duration 3, read/write 103/67 MB/s
INFO: status: 1% (1080098816/107374182400), sparse 0% (255000576), duration 12, read/write 85/69 MB/s
INFO: status: 2% (2191851520/107374182400), sparse 0% (1023336448), duration 23, read/write 101/31 MB/s
INFO: status: 3% (3236823040/107374182400), sparse 1% (2036310016), duration 33, read/write 104/3 MB/s
INFO: status: 4% (4320722944/107374182400), sparse 2% (3115761664), duration 44, read/write 98/0 MB/s
INFO: status: 5% (5380702208/107374182400), sparse 3% (4165386240), duration 54, read/write 105/1 MB/s
INFO: status: 6% (6468009984/107374182400), sparse 4% (4885278720), duration 65, read/write 98/33 MB/s
INFO: status: 7% (7552040960/107374182400), sparse 4% (5071831040), duration 79, read/write 77/64 MB/s
INFO: status: 8% (8654815232/107374182400), sparse 4% (5200510976), duration 92, read/write 84/74 MB/s
INFO: status: 9% (9722789888/107374182400), sparse 4% (5328150528), duration 105, read/write 82/72 MB/s
INFO: status: 10% (10750984192/107374182400), sparse 5% (5522735104), duration 118, read/write 79/64 MB/s
INFO: status: 11% (11856314368/107374182400), sparse 5% (5669130240), duration 132, read/write 78/68 MB/s
INFO: status: 12% (12945784832/107374182400), sparse 5% (5985644544), duration 145, read/write 83/59 MB/s
INFO: status: 13% (13974437888/107374182400), sparse 6% (6729969664), duration 156, read/write 93/25 MB/s
INFO: status: 14% (15130034176/107374182400), sparse 7% (7876886528), duration 167, read/write 105/0 MB/s
INFO: status: 15% (16172515328/107374182400), sparse 8% (8883376128), duration 177, read/write 104/3 MB/s
INFO: status: 16% (17227382784/107374182400), sparse 9% (9930174464), duration 187, read/write 105/0 MB/s

This run is with gzip compression selected
Code:
INFO: starting new backup job: vzdump 505 --remove 0 --mode snapshot --storage FN2_Backup --compress gzip --node svr-04
INFO: Starting Backup of VM 505 (qemu)
INFO: Backup started at 2020-03-10 09:27:27
INFO: status = running
INFO: update VM 505: -lock backup
INFO: VM Name: guacamole
INFO: include disk 'scsi0' 'FN2_IMAGES:505/vm-505-disk-0.qcow2' 100G
INFO: backup mode: snapshot
INFO: bandwidth limit: 9000000 KB/s
INFO: ionice priority: 0
INFO: creating archive '/mnt/pve/FN2_Backup/dump/vzdump-qemu-505-2020_03_10-09_27_27.vma.gz'
INFO: issuing guest-agent 'fs-freeze' command
INFO: issuing guest-agent 'fs-thaw' command
INFO: started backup task '4d331750-9dda-47d6-b708-604a537de2f7'
INFO: status: 0% (233570304/107374182400), sparse 0% (105795584), duration 3, read/write 77/42 MB/s
INFO: status: 1% (1075773440/107374182400), sparse 0% (254181376), duration 18, read/write 56/46 MB/s
INFO: status: 2% (2148794368/107374182400), sparse 0% (989220864), duration 31, read/write 82/25 MB/s
INFO: status: 3% (3323527168/107374182400), sparse 1% (2123210752), duration 43, read/write 97/3 MB/s
INFO: status: 4% (4346216448/107374182400), sparse 2% (3131097088), duration 53, read/write 102/1 MB/s
INFO: status: 5% (5381947392/107374182400), sparse 3% (4166696960), duration 63, read/write 103/0 MB/s
INFO: status: 6% (6444810240/107374182400), sparse 4% (4863004672), duration 76, read/write 81/28 MB/s
 
Another tidbit: I'm not sure why the VM above (NextCloud) was so slow, but this guacamole (Apache Guacamole) VM gets 10G directly to the FreeNAS box, crossing the 100<->101 subnets (192.168.100.124 -> 192.168.101.104).

Code:
bferrell@freshguac:~$ iperf3 -c 192.168.101.104
Connecting to host 192.168.101.104, port 5201
[  4] local 192.168.100.124 port 59536 connected to 192.168.101.104 port 5201
[ ID] Interval           Transfer     Bandwidth       Retr  Cwnd
[  4]   0.00-1.00   sec  1019 MBytes  8.54 Gbits/sec  736    977 KBytes       
[  4]   1.00-2.00   sec  1.07 GBytes  9.18 Gbits/sec  284   1.06 MBytes       
[  4]   2.00-3.00   sec  1.03 GBytes  8.88 Gbits/sec  444    656 KBytes       
[  4]   3.00-4.00   sec  1.06 GBytes  9.11 Gbits/sec  118    967 KBytes       
[  4]   4.00-5.00   sec  1.07 GBytes  9.19 Gbits/sec  147   1.35 MBytes       
[  4]   5.00-6.00   sec  1.07 GBytes  9.17 Gbits/sec   97   1.12 MBytes       
[  4]   6.00-7.00   sec  1.00 GBytes  8.62 Gbits/sec  662    655 KBytes       
[  4]   7.00-8.00   sec  1.02 GBytes  8.78 Gbits/sec  349    918 KBytes       
[  4]   8.00-9.00   sec   985 MBytes  8.27 Gbits/sec  379    775 KBytes       
[  4]   9.00-10.00  sec  1.02 GBytes  8.76 Gbits/sec  120    928 KBytes       
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bandwidth       Retr
[  4]   0.00-10.00  sec  10.3 GBytes  8.85 Gbits/sec  3336             sender
[  4]   0.00-10.00  sec  10.3 GBytes  8.85 Gbits/sec                  receiver

iperf Done.
 
All your backup logs show that writing the backup is actually fast, but reading is not (the second-to-last number). Maybe you're investigating in the wrong direction? You can only write backups fast if you can read them fast. In my backup logs I see much faster read rates (>300 MB/s), but I have a non-Ethernet storage network, and in my setup the 1 GbE Ethernet is the bottleneck.
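
One way to isolate the read path would be to read the image directly from the node, outside of vzdump (the path follows the standard /mnt/pve layout for NFS storages; adjust as needed):
Code:
# if this also caps near 110 MB/s, the bottleneck is the NFS read path,
# not vzdump itself
dd if=/mnt/pve/FN2_IMAGES/images/505/vm-505-disk-0.qcow2 of=/dev/null bs=1M status=progress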
 
