If I upgrade to 10/20 Gbit, would the speed increase?

fireon

Distinguished Member
Hello all,

I've done some tests with backups, always of the whole cluster, and nobody works on the cluster during that time. What I have tested:

  • Full backup with normal vzdump over Gigabit from ZFS RAID10 to an NFS 4.2 share on the backup server with 8 HDDs in RaidZ10 and ZSTD compression (roughly as in the vzdump sketch after this list). Size 102.68 GB in 00:40:07
  • The first full backup with the new PBS. Size 334.63 GB in 00:40:07
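A vzdump invocation along these lines would produce that kind of full backup (a sketch only; the VMID and storage name are placeholders, and zstd is just one possible compression choice):
Code:
# Classic full backup via vzdump to an NFS-backed storage; VMID 118 and
# the storage name "nfs-backup" are placeholders, not the exact setup here.
vzdump 118 --mode snapshot --compress zstd --storage nfs-backup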

I noticed that the network was never saturated with PBS, always only 20 to 50 MB/s, fluctuating. Hence my question: if I upgrade to 10/20 Gbit, would the speed increase?

If I do a normal ZFS send, it copies at about 120 MB/s. How should this be understood?
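For reference, the raw zfs send rate can be measured locally by piping into pv and discarding the data; a minimal sketch, with placeholder dataset and snapshot names:
Code:
# Measure raw "zfs send" throughput without a receiving side; names are placeholders.
zfs snapshot HDD-vmdata/vm-118-disk-3@speedtest
zfs send HDD-vmdata/vm-118-disk-3@speedtest | pv > /dev/null
zfs destroy HDD-vmdata/vm-118-disk-3@speedtest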

I've started a backup of one really big VM with 7 TB. I aborted the process after some hours; it was so slow, or at least it looked that way. Here is the log:
Code:
118: 2020-09-06 01:14:40 INFO: Starting Backup of VM 118 (qemu)
118: 2020-09-06 01:14:40 INFO: status = running
118: 2020-09-06 01:14:40 INFO: VM Name: data.tux.lan
118: 2020-09-06 01:14:40 INFO: include disk 'scsi0' 'SSD-vmdata:vm-118-disk-1' 30G
118: 2020-09-06 01:14:40 INFO: include disk 'scsi1' 'SSD-vmdata:vm-118-disk-2' 8G
118: 2020-09-06 01:14:40 INFO: include disk 'scsi2' 'HDD-vmdata:vm-118-disk-3' 7000G
118: 2020-09-06 01:14:40 INFO: include disk 'efidisk0' 'SSD-vmdata:vm-118-disk-0' 128K
118: 2020-09-06 01:14:40 INFO: backup mode: snapshot
118: 2020-09-06 01:14:40 INFO: ionice priority: 7
118: 2020-09-06 01:14:40 INFO: creating Proxmox Backup Server archive 'vm/118/2020-09-05T23:14:40Z'
118: 2020-09-06 01:14:40 INFO: issuing guest-agent 'fs-freeze' command
118: 2020-09-06 01:14:41 INFO: issuing guest-agent 'fs-thaw' command
118: 2020-09-06 01:14:41 INFO: started backup task '8703edfe-c042-41a3-9da8-f56b4bc3b040'
118: 2020-09-06 01:14:41 INFO: resuming VM again
118: 2020-09-06 01:14:41 INFO: efidisk0: dirty-bitmap status: existing bitmap was invalid and has been cleared
118: 2020-09-06 01:14:41 INFO: scsi0: dirty-bitmap status: existing bitmap was invalid and has been cleared
118: 2020-09-06 01:14:41 INFO: scsi1: dirty-bitmap status: existing bitmap was invalid and has been cleared
118: 2020-09-06 01:14:41 INFO: scsi2: dirty-bitmap status: created new
118: 2020-09-06 01:14:44 INFO:   0% (1.2 GiB of 6.9 TiB) in  3s, read: 409.4 MiB/s, write: 82.7 MiB/s
118: 2020-09-06 01:47:26 INFO:   1% (70.4 GiB of 6.9 TiB) in 32m 45s, read: 36.1 MiB/s, write: 34.0 MiB/s
118: 2020-09-06 02:51:59 INFO:   2% (140.8 GiB of 6.9 TiB) in  1h 37m 18s, read: 18.6 MiB/s, write: 17.4 MiB/s
118: 2020-09-06 03:25:00 INFO:   3% (211.2 GiB of 6.9 TiB) in  2h 10m 19s, read: 36.4 MiB/s, write: 34.2 MiB/s
118: 2020-09-06 04:05:06 INFO:   4% (281.5 GiB of 6.9 TiB) in  2h 50m 25s, read: 29.9 MiB/s, write: 28.1 MiB/s
118: 2020-09-06 04:39:58 INFO:   5% (351.9 GiB of 6.9 TiB) in  3h 25m 17s, read: 34.5 MiB/s, write: 32.4 MiB/s
118: 2020-09-06 05:16:08 INFO:   6% (422.3 GiB of 6.9 TiB) in  4h  1m 27s, read: 33.2 MiB/s, write: 31.2 MiB/s
118: 2020-09-06 05:57:23 INFO:   7% (492.7 GiB of 6.9 TiB) in  4h 42m 42s, read: 29.1 MiB/s, write: 27.4 MiB/s
118: 2020-09-06 06:37:33 INFO:   8% (563.1 GiB of 6.9 TiB) in  5h 22m 52s, read: 29.9 MiB/s, write: 27.6 MiB/s
118: 2020-09-06 07:23:03 INFO:   9% (633.4 GiB of 6.9 TiB) in  6h  8m 22s, read: 26.4 MiB/s, write: 24.8 MiB/s
118: 2020-09-06 08:08:47 INFO:  10% (703.8 GiB of 6.9 TiB) in  6h 54m  6s, read: 26.3 MiB/s, write: 24.6 MiB/s
118: 2020-09-06 08:56:30 INFO:  11% (774.2 GiB of 6.9 TiB) in  7h 41m 49s, read: 25.2 MiB/s, write: 23.7 MiB/s
118: 2020-09-06 09:43:35 INFO:  12% (844.6 GiB of 6.9 TiB) in  8h 28m 54s, read: 25.5 MiB/s, write: 24.0 MiB/s
118: 2020-09-06 10:29:04 INFO:  13% (914.9 GiB of 6.9 TiB) in  9h 14m 23s, read: 26.4 MiB/s, write: 24.8 MiB/s
118: 2020-09-06 11:15:14 INFO:  14% (985.4 GiB of 6.9 TiB) in 10h  0m 33s, read: 26.0 MiB/s, write: 24.5 MiB/s
118: 2020-09-06 12:01:01 INFO:  15% (1.0 TiB of 6.9 TiB) in 10h 46m 20s, read: 26.2 MiB/s, write: 24.8 MiB/s
118: 2020-09-06 12:47:04 INFO:  16% (1.1 TiB of 6.9 TiB) in 11h 32m 23s, read: 26.1 MiB/s, write: 24.7 MiB/s
118: 2020-09-06 13:32:11 INFO:  17% (1.2 TiB of 6.9 TiB) in 12h 17m 30s, read: 26.6 MiB/s, write: 25.2 MiB/s
118: 2020-09-06 13:42:59 ERROR: interrupted by signal
118: 2020-09-06 13:42:59 INFO: aborting backup job
118: 2020-09-06 13:42:59 ERROR: Backup of VM 118 failed - interrupted by signal
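Rough arithmetic on that log (my own estimate): at the ~26 MiB/s the job settles into, reading the whole 6.9 TiB (about 7.2 million MiB) would take roughly 7,200,000 / 26 ≈ 277,000 seconds, i.e. a bit over three days.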

Backup Speedtest:
Code:
proxmox-backup-client benchmark
Password for "xxxxxxxxx@pbs": ******************
Uploaded 147 chunks in 5 seconds.
Time per request: 36374 microseconds.
TLS speed: 115.31 MB/s
SHA256 speed: 1382.31 MB/s
Compression speed: 1525.76 MB/s
Decompress speed: 7045.81 MB/s
AES256/GCM speed: 2523.92 MB/s
┌───────────────────────────────────┬────────────────────┐
│ Name                              │ Value              │
╞═══════════════════════════════════╪════════════════════╡
│ TLS (maximal backup upload speed) │ 115.31 MB/s (20%)  │
├───────────────────────────────────┼────────────────────┤
│ SHA256 checksum computation speed │ 1382.31 MB/s (65%) │
├───────────────────────────────────┼────────────────────┤
│ ZStd level 1 compression speed    │ 1525.76 MB/s (71%) │
├───────────────────────────────────┼────────────────────┤
│ ZStd level 1 decompression speed  │ 7045.81 MB/s (87%) │
├───────────────────────────────────┼────────────────────┤
│ AES256 GCM encryption speed       │ 2523.92 MB/s (66%) │
└───────────────────────────────────┴────────────────────┘
proxmox-backup-client benchmark  7,42s user 0,44s system 30% cpu 25,781 total
 
We found some bottlenecks related to TLS/HTTP2 speed in some environments; it would be great if you could test the upgraded packages once they are released!
 
Yeah... absolutely. I'll wait with backing up my 7 TB VM to the external disk. Tell me when I should upgrade and test.
 
proxmox-backup-server and -client >= 0.8.15-1 contain the changes (bumped, but not yet on pbstest/pvetest), and libproxmox-backup-qemu0 >= 0.6.5 should contain them as well (not yet updated and bumped). For VMs, you need to stop and start the VM to load the updated version of the backup library.
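One way to do that restart from the CLI, as a sketch (the VMID is just an example):
Code:
# Stop and start the VM so a fresh QEMU process loads the updated
# libproxmox-backup-qemu0; VMID 118 is only an example.
qm shutdown 118 && qm start 118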
 
Now available on pbstest / pve-no-subscription. There is one more improvement to the TLS speed that is not yet part of the packaged version, but maybe you can try running the benchmark now and then again once that hits the repos?
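A sketch of how the test repository could be enabled on the Buster-based hosts in this thread (the repository line is an assumption on my part; please double-check it against the official docs before using it):
Code:
# Enable the pbstest repository on the Proxmox Backup Server host and upgrade.
# Repository line assumed for the Buster-based 0.8.x packages.
echo "deb http://download.proxmox.com/debian/pbs buster pbstest" \
    > /etc/apt/sources.list.d/pbstest.list
apt update && apt full-upgrade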
 
Doesn't seem to work anymore.

Code:
export PBS_REPOSITORY="backupuser@pbs@tuxi:backupshare"
proxmox-backup-client benchmark
Error: unable to run benchmark without --benchmark flags
 
You need to update both client and server (the benchmark now marks benchmark runs with a special flag, and the benchmark backup ID is reserved for backups with that flag, so that no accidental name collision can happen).
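On the PVE side that could look roughly like this (package names as mentioned earlier in the thread; afterwards re-run the benchmark as above):
Code:
# Update the backup client and the QEMU backup library on the PVE node.
apt update
apt install proxmox-backup-client libproxmox-backup-qemu0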
 
Here is the new one after the upgrade:
Code:
Uploaded 146 chunks in 5 seconds.
Time per request: 35964 microseconds.
TLS speed: 116.62 MB/s
SHA256 speed: 1373.66 MB/s
Compression speed: 1496.64 MB/s
Decompress speed: 6439.88 MB/s
AES256/GCM speed: 2730.96 MB/s
┌───────────────────────────────────┬────────────────────┐
│ Name                              │ Value              │
╞═══════════════════════════════════╪════════════════════╡
│ TLS (maximal backup upload speed) │ 116.62 MB/s (17%)  │
├───────────────────────────────────┼────────────────────┤
│ SHA256 checksum computation speed │ 1373.66 MB/s (65%) │
├───────────────────────────────────┼────────────────────┤
│ ZStd level 1 compression speed    │ 1496.64 MB/s (69%) │
├───────────────────────────────────┼────────────────────┤
│ ZStd level 1 decompression speed  │ 6439.88 MB/s (80%) │
├───────────────────────────────────┼────────────────────┤
│ AES256 GCM encryption speed       │ 2730.96 MB/s (72%) │
└───────────────────────────────────┴────────────────────┘
proxmox-backup-client benchmark  7,21s user 0,33s system 34% cpu 21,786 total
 
How fast is HDD-vmdata when you benchmark inside the VM? How fast from the host?
 
How can I do this? pveperf did not work on this node: no error, output only up to "HD SIZE", and after that nothing, even after 1 hour. The only way out is to kill the command. On the other two nodes pveperf does the job.

So how can I do this on the host and in the VM? I remember that when a ZFS scrub is running, it reads at about 800 MB/s.
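One possible way is a sequential read test with fio, roughly along these lines (file path, size and block size are placeholders; the actual runs follow below):
Code:
# Sequential read test with fio; point --filename at a file on the storage
# under test and make --size larger than RAM to reduce cache effects.
fio --name=test --rw=read --bs=8k --size=10G --numjobs=4 \
    --filename=/path/on/storage/testfile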
 
OK, here it is. The first is from a normal Debian VM on the ZFS RAID10 with 10 WD Red Pro HDDs and one enterprise SSD for log and cache.

Code:
agent: 1
bios: ovmf
bootdisk: scsi0
cores: 10
cpu: host
efidisk0: HDD-vmdata:vm-101-disk-1,size=1M
ide2: iso-images:iso/UCS-Installation-amd64.iso,media=cdrom
memory: 4096
name: test
net0: virtio=E6:9F:C7:03:B7:98,bridge=vmbr0,firewall=1,tag=50
numa: 1
ostype: l26
scsi0: HDD-vmdata:vm-101-disk-0,discard=on,size=32G
scsihw: virtio-scsi-pci
smbios1: uuid=70d7ea9a-e2f3-45c5-8cf9-219d0cd2b010
sockets: 1
vga: virtio
vmgenid: 4204d61f-f61f-40cc-a1cb-02b838923539

Test 1 with 4k block size in the VM with a 10 GB test file:
Code:
   bla
Test 2 in the VM with 8k block size and a 10 GB test file:
Code:
test: Laying out IO file(s) (1 file(s) / 10240MB)
Jobs: 3 (f=3): [_(1),R(3)] [99.7% done] [93749KB/0KB/0KB /s] [11.8K/0/0 iops] [eta 00m:01s]
test: (groupid=0, jobs=4): err= 0: pid=10095: Sat Sep 26 15:33:24 2020
  read : io=40960MB, bw=145257KB/s, iops=18157, runt=288751msec
    slat (usec): min=3, max=2611, avg=17.87, stdev= 7.65
    clat (usec): min=0, max=412946, avg=199.52, stdev=4824.00
     lat (usec): min=43, max=412952, avg=217.40, stdev=4823.82
    clat percentiles (usec):
     |  1.00th=[   50],  5.00th=[   54], 10.00th=[   59], 20.00th=[   64],
     | 30.00th=[   68], 40.00th=[   74], 50.00th=[   79], 60.00th=[   86],
     | 70.00th=[   93], 80.00th=[  102], 90.00th=[  116], 95.00th=[  131],
     | 99.00th=[  219], 99.50th=[  258], 99.90th=[ 1208], 99.95th=[99840],
     | 99.99th=[226304]
    lat (usec) : 2=0.01%, 4=0.01%, 10=0.01%, 20=0.01%, 50=0.76%
    lat (usec) : 100=76.84%, 250=21.82%, 500=0.44%, 750=0.02%, 1000=0.01%
    lat (msec) : 2=0.02%, 4=0.01%, 10=0.01%, 20=0.01%, 50=0.01%
    lat (msec) : 100=0.01%, 250=0.05%, 500=0.01%
  cpu          : usr=1.48%, sys=13.69%, ctx=5244238, majf=0, minf=55
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued    : total=r=5242880/w=0/d=0, short=r=0/w=0/d=0, drop=r=0/w=0/d=0
     latency   : target=0, window=0, percentile=100.00%, depth=1

Run status group 0 (all jobs):
   READ: io=40960MB, aggrb=145256KB/s, minb=145256KB/s, maxb=145256KB/s, mint=288751msec, maxt=288751msec

Disk stats (read/write):
  sda: ios=5239960/184, merge=156/175, ticks=1011608/5388, in_queue=1013416, util=99.82%
And here the 8k test run directly on the Proxmox host with a 30 GB test file:

Code:
test: Laying out IO file (1 file / 30720MiB)
Jobs: 4 (f=4): [R(4)][100.0%][r=1754MiB/s][r=224k IOPS][eta 00m:00s]
test: (groupid=0, jobs=4): err= 0: pid=30686: Sat Sep 26 15:25:20 2020
  read: IOPS=216k, BW=1685MiB/s (1766MB/s)(120GiB/72941msec)
    slat (nsec): min=1940, max=136718k, avg=17616.96, stdev=328349.54
    clat (nsec): min=270, max=118109, avg=425.72, stdev=255.37
     lat (usec): min=2, max=136725, avg=18.15, stdev=328.39
    clat percentiles (nsec):
     |  1.00th=[  302],  5.00th=[  310], 10.00th=[  310], 20.00th=[  310],
     | 30.00th=[  322], 40.00th=[  330], 50.00th=[  422], 60.00th=[  442],
     | 70.00th=[  462], 80.00th=[  482], 90.00th=[  620], 95.00th=[  660],
     | 99.00th=[  868], 99.50th=[  964], 99.90th=[ 1576], 99.95th=[ 2736],
     | 99.99th=[ 9920]
   bw (  KiB/s): min=205568, max=626624, per=24.97%, avg=430782.17, stdev=86547.64, samples=580
   iops        : min=25696, max=78328, avg=53847.76, stdev=10818.46, samples=580
  lat (nsec)   : 500=81.59%, 750=16.66%, 1000=1.36%
  lat (usec)   : 2=0.31%, 4=0.03%, 10=0.03%, 20=0.01%, 50=0.01%
  lat (usec)   : 100=0.01%, 250=0.01%
  cpu          : usr=4.91%, sys=55.54%, ctx=146885, majf=0, minf=55
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued rwts: total=15728640,0,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=1

Run status group 0 (all jobs):
   READ: bw=1685MiB/s (1766MB/s), 1685MiB/s-1685MiB/s (1766MB/s-1766MB/s), io=120GiB (129GB), run=72941-72941msec
fio --rw=read --name=test --size=30G --filename=/v-machines/test/testfile      21,35s user 192,64s system 145% cpu 2:26,75 total
 
I'd retest with a zvol to be closer to what the VM is using, but a VM with a filesystem on top of a zvol will always be slower than directly accessing a ZFS dataset on the host.
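A sketch of such a zvol retest (pool and volume names are placeholders; the volume is written once first so the read test does not just return unwritten zeros):
Code:
# Throw-away zvol on the pool the VM disks live on; names are placeholders.
zfs create -V 32G HDD-vmdata/fio-testvol
# Fill it so reads hit real data (note: freshly written data may still sit in ARC).
fio --name=fill --rw=write --bs=1M --size=10G --filename=/dev/zvol/HDD-vmdata/fio-testvol
fio --name=zvoltest --rw=read --bs=8k --size=10G --filename=/dev/zvol/HDD-vmdata/fio-testvol
zfs destroy HDD-vmdata/fio-testvol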
 
