@LnxBil - Well, I would disagree that 100MB/s is fast - with my 10G network I would expect at least double that, maybe better...
But maybe I'm looking at the wrong thing. It's not at all clear why I would see this asymmetry. For one, the Images are stored on the same SAN network the backups are going to; secondly, I would've expected (maybe wrongly) that the running image was cached in RAM and wouldn't need to be read from the local HOST disk or the Image store (on NFS) before being backed up (to NFS). These are all enterprise-class servers (12th-gen Dell T620/720) with lots of RAM (192G) and some local storage, but all of the VM data (images and backups) is on FreeNAS for HA functionality.
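(Side note: where vzdump actually reads from can be double-checked on the PVE host; a quick sketch using VM 505 from the log below - the expected output line is inferred from that log, so the exact format is approximate:)

Code:
# Show VM 505's disk entries; the storage ID before the colon is what
# vzdump reads from (an NFS storage here, not local disk or RAM).
qm config 505 | grep -E '^(scsi|virtio|sata|ide)[0-9]'
# expected, matching the vzdump log below (format approximate):
# scsi0: FN2_IMAGES:505/vm-505-disk-0.qcow2,size=100G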
Also, the migrate function reports MUCH higher throughput (see above - it's often over 800MB/s), which I would expect to be pulling from and writing to the FreeNAS Images store on both Nodes, if that's the way VZDUMP works too. And, as I show above, the GUEST VM can copy at over 200MB/s, and that's definitely reading from and writing to the FreeNAS shares simultaneously.
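(To rule out the 10G link itself, a raw network test between the PVE node and FreeNAS would be a reasonable check - a minimal sketch, assuming iperf3 is available on both ends; on the PVE node it may need an apt install iperf3 first:)

Code:
# On the FreeNAS box (e.g. FN2 at 191.168.101.102), start a listener:
iperf3 -s
# On the PVE node, run the client against it over the SAN network:
iperf3 -c 191.168.101.102 -t 10
# A healthy 10G link should report roughly 9+ Gbit/s; much less than
# that points at the network rather than at vzdump.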
So, as a test, I put the backup on a different FreeNAS store, so that the reads (Images are on FreeNAS #2) and writes (Backups are going to FreeNAS #1) hit different NFS shares on the 10G SAN (101) network (see the dd sketch after the host list for isolating raw NFS throughput).
Incredibly, at some points I'm seeing 0 MB/s write speeds, which I assume means it didn't need to write much - the sparse counter jumps at the same moments, so those look like zeroed regions that get read but never written. Otherwise I can't imagine such a low number, or why it wouldn't just fail. Clearly there's a fair amount about this I don't understand.
FN1 = FreeNAS host 191.168.101.101
FN2 = FreeNAS host 191.168.101.102
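(To separate vzdump overhead from raw NFS throughput, a dd test against both mounts from the PVE host is worth running - a sketch assuming the standard /mnt/pve/<storage> mount paths; FN1_Backup's path shows up in the log below, and the FN2_IMAGES path is assumed to follow the same convention:)

Code:
# Sequential write to the backup store (FN1): conv=fdatasync forces a
# flush so the result reflects NFS throughput, not the host page cache.
dd if=/dev/zero of=/mnt/pve/FN1_Backup/ddtest.bin bs=1M count=4096 conv=fdatasync

# Sequential read from the images store (FN2): iflag=direct bypasses the
# host page cache so a warm cache can't inflate the number.
dd if=/mnt/pve/FN2_IMAGES/images/505/vm-505-disk-0.qcow2 of=/dev/null bs=1M count=4096 iflag=direct

# Clean up the test file afterwards.
rm /mnt/pve/FN1_Backup/ddtest.bin

Here's the vzdump log from the test run against that setup: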
Code:
INFO: starting new backup job: vzdump 505 --storage FN1_Backup --node svr-04 --compress 0 --remove 0 --mode snapshot
INFO: Starting Backup of VM 505 (qemu)
INFO: Backup started at 2020-03-10 18:35:50
INFO: status = running
INFO: update VM 505: -lock backup
INFO: VM Name: guacamole
INFO: include disk 'scsi0' 'FN2_IMAGES:505/vm-505-disk-0.qcow2' 100G
INFO: backup mode: snapshot
INFO: bandwidth limit: 9,000,000 KB/s
INFO: ionice priority: 0
INFO: creating archive '/mnt/pve/FN1_Backup/dump/vzdump-qemu-505-2020_03_10-18_35_50.vma'
INFO: issuing guest-agent 'fs-freeze' command
INFO: issuing guest-agent 'fs-thaw' command
INFO: started backup task '50eea5ca-9f0d-438c-97d2-b172d306aafd'
INFO: status: 0% (279248896/107374182400), sparse 0% (106946560), duration 3, read/write 93/57 MB/s
INFO: status: 1% (1114177536/107374182400), sparse 0% (257196032), duration 14, read/write 75/62 MB/s
INFO: status: 2% (2214133760/107374182400), sparse 0% (1038684160), duration 26, read/write 91/26 MB/s
INFO: status: 3% (3261464576/107374182400), sparse 1% (2061148160), duration 38, read/write 87/2 MB/s
INFO: status: 4% (4299751424/107374182400), sparse 2% (3094855680), duration 48, read/write 103/0 MB/s
INFO: status: 5% (5441126400/107374182400), sparse 3% (4225875968), duration 59, read/write 103/0 MB/s
INFO: status: 6% (6483542016/107374182400), sparse 4% (4895895552), duration 70, read/write 94/33 MB/s
INFO: status: 7% (7590445056/107374182400), sparse 4% (5077061632), duration 87, read/write 65/54 MB/s
INFO: status: 8% (8676573184/107374182400), sparse 4% (5221580800), duration 100, read/write 83/72 MB/s
INFO: status: 9% (9677307904/107374182400), sparse 4% (5326868480), duration 114, read/write 71/63 MB/s
INFO: status: 10% (10803150848/107374182400), sparse 5% (5545205760), duration 130, read/write 70/56 MB/s
INFO: status: 11% (11881283584/107374182400), sparse 5% (5668978688), duration 145, read/write 71/63 MB/s
INFO: status: 12% (12910067712/107374182400), sparse 5% (5974421504), duration 159, read/write 73/51 MB/s
INFO: status: 13% (14004125696/107374182400), sparse 6% (6759571456), duration 172, read/write 84/23 MB/s
INFO: status: 14% (15057420288/107374182400), sparse 7% (7810551808), duration 182, read/write 105/0 MB/s
INFO: status: 15% (16203579392/107374182400), sparse 8% (8914354176), duration 193, read/write 104/3 MB/s
INFO: status: 16% (17265065984/107374182400), sparse 9% (9967771648), duration 203, read/write 106/0 MB/s
INFO: status: 17% (18291687424/107374182400), sparse 10% (10939265024), duration 213, read/write 102/5 MB/s
INFO: status: 18% (19344392192/107374182400), sparse 11% (11989024768), duration 223, read/write 105/0 MB/s