Unexpected Transfer Speeds

balpoint

New Member
May 3, 2024
Hi Guys,

I'm running a PVE/PBS proof-of-concept setup with the following hardware:

  • PVE servers:
    • 3x Dell R640, each with:
      • 2x CPU: Gold 6136 12C/24T
      • Memory: 384 GB
      • 3x PM1725a 3.2 TB NVMe (Ceph)
      • 1 Gb - Mgmt/cluster traffic
      • 4x 10 Gb Intel X710 NIC
        • 10 Gb - Backup network
        • 10 Gb - Data network (trunk)
        • 10 Gb - Storage network (Ceph and SAN connection)
  • PBS server:
    • 1x Dell R640
      • 2x CPU: Gold 6136 12C/24T
      • Memory: 384 GB
      • 8x KIOXIA PM6-R 900 GB SSD (RAID 0)
      • 4x 10 Gb Intel X710 NIC
        • 10 Gb - Backup network
When I run iperf3 from each of the PVE nodes to the PBS and vice versa, I get a full 10 Gbps connection:

Code:
Linux app1-test-proxmox-triple 6.8.4-2-pve #1 SMP PREEMPT_DYNAMIC PMX 6.8.4-2 (2024-04-10T17:36Z) x86_64
Control connection MSS 8948
Time: Fri, 03 May 2024 20:59:38 GMT
Connecting to host 192.168.81.10, port 5201
      Cookie: fyycmsaff4ee2itses4r6o5qrhyk6eqlaww6
      TCP MSS: 9000
[  5] local 192.168.81.101 port 40566 connected to 192.168.81.10 port 5201
Starting Test: protocol: TCP, 1 streams, 131072 byte blocks, omitting 0 seconds, 10 second test, tos 0
[ ID] Interval           Transfer     Bitrate         Retr  Cwnd
[  5]   0.00-1.00   sec  1.15 GBytes  9.90 Gbits/sec   72   1.37 MBytes
[  5]   1.00-2.00   sec  1.15 GBytes  9.90 Gbits/sec    3   1.37 MBytes
[  5]   2.00-3.00   sec  1.15 GBytes  9.90 Gbits/sec    2   1.37 MBytes
[  5]   3.00-4.00   sec  1.15 GBytes  9.89 Gbits/sec    2   1.37 MBytes
[  5]   4.00-5.00   sec  1.15 GBytes  9.90 Gbits/sec    0   1.37 MBytes
[  5]   5.00-6.00   sec  1.15 GBytes  9.90 Gbits/sec    1   1.37 MBytes
[  5]   6.00-7.00   sec  1.15 GBytes  9.90 Gbits/sec    0   1.37 MBytes
[  5]   7.00-8.00   sec  1.15 GBytes  9.89 Gbits/sec    0   1.37 MBytes
[  5]   8.00-9.00   sec  1.15 GBytes  9.90 Gbits/sec    1   1.37 MBytes
[  5]   9.00-10.00  sec  1.15 GBytes  9.90 Gbits/sec    0   1.37 MBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
Test Complete. Summary Results:
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-10.00  sec  11.5 GBytes  9.90 Gbits/sec   81             sender
[  5]   0.00-10.00  sec  11.5 GBytes  9.89 Gbits/sec                  receiver
CPU Utilization: local/sender 31.4% (1.0%u/30.4%s), remote/receiver 28.0% (3.5%u/24.5%s)
snd_tcp_congestion cubic
rcv_tcp_congestion cubic

Everything is connected to a Juniper QFX5120 switch, whose configuration also confirms a proper 10 Gb link.
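As a rough sanity check (my own arithmetic, not taken from the logs): the 9.90 Gbit/s iperf3 result caps any single transfer at roughly 1180 MiB/s, so the network itself should not be the bottleneck for the backup rates shown below.

```python
# Sanity check: theoretical throughput ceiling of the measured link,
# ignoring protocol overhead (assumption: 9.90 Gbit/s from iperf3).
link_gbps = 9.90
ceiling_mib_s = link_gbps * 1e9 / 8 / 2**20   # bits/s -> MiB/s
print(f"link ceiling: {ceiling_mib_s:.0f} MiB/s")
```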

When I run a backup job from a PVE node, however, the results are quite poor compared to what I was expecting:


Code:
INFO: starting new backup job: vzdump 100 --notes-template '{{guestname}}' --storage bup1-test-proxmox --remove 0 --notification-mode auto --mode snapshot --node app1-test-proxmox-triple
INFO: Starting Backup of VM 100 (qemu)
INFO: Backup started at 2024-05-03 23:00:23
INFO: status = running
INFO: VM Name: ubuntu24-desktop-test
INFO: include disk 'scsi0' 'pool1-ceph:vm-100-disk-0' 60G
INFO: backup mode: snapshot
INFO: ionice priority: 7
INFO: creating Proxmox Backup Server archive 'vm/100/2024-05-03T21:00:23Z'
INFO: started backup task 'ec0af81d-c5ad-4794-89ca-f1db78057ea5'
INFO: resuming VM again
INFO: scsi0: dirty-bitmap status: existing bitmap was invalid and has been cleared
INFO:   1% (644.0 MiB of 60.0 GiB) in 3s, read: 214.7 MiB/s, write: 180.0 MiB/s
INFO:   2% (1.4 GiB of 60.0 GiB) in 7s, read: 199.0 MiB/s, write: 199.0 MiB/s
INFO:   3% (2.2 GiB of 60.0 GiB) in 10s, read: 257.3 MiB/s, write: 220.0 MiB/s
INFO:   4% (2.9 GiB of 60.0 GiB) in 13s, read: 237.3 MiB/s, write: 237.3 MiB/s
INFO:  12% (7.7 GiB of 60.0 GiB) in 16s, read: 1.6 GiB/s, write: 217.3 MiB/s
INFO:  29% (17.9 GiB of 60.0 GiB) in 19s, read: 3.4 GiB/s, write: 90.7 MiB/s
INFO:  47% (28.3 GiB of 60.0 GiB) in 22s, read: 3.5 GiB/s, write: 133.3 MiB/s
INFO:  76% (46.1 GiB of 60.0 GiB) in 25s, read: 5.9 GiB/s, write: 53.3 MiB/s
INFO:  77% (46.6 GiB of 60.0 GiB) in 28s, read: 177.3 MiB/s, write: 173.3 MiB/s
INFO:  83% (50.2 GiB of 60.0 GiB) in 31s, read: 1.2 GiB/s, write: 198.7 MiB/s
INFO:  87% (52.4 GiB of 60.0 GiB) in 34s, read: 750.7 MiB/s, write: 202.7 MiB/s
INFO:  94% (56.7 GiB of 60.0 GiB) in 37s, read: 1.4 GiB/s, write: 178.7 MiB/s
INFO: 100% (60.0 GiB of 60.0 GiB) in 39s, read: 1.6 GiB/s, write: 112.0 MiB/s
INFO: backup is sparse: 53.09 GiB (88%) total zero data
INFO: backup was done incrementally, reused 53.48 GiB (89%)
INFO: transferred 60.00 GiB in 39 seconds (1.5 GiB/s)
INFO: adding notes to backup
INFO: Finished Backup of VM 100 (00:00:39)
INFO: Backup finished at 2024-05-03 23:01:02
INFO: Backup job finished successfully
INFO: notified via target `mail-to-root`
TASK OK
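One thing worth noting about this log (my own back-of-the-envelope reading, using the figures from the summary lines above): the headline 1.5 GiB/s is dominated by zero and reused chunks that never cross the wire, so the data actually written to PBS moved much more slowly.

```python
# Back-of-the-envelope check using figures from the vzdump log above.
total_gib = 60.0     # size of scsi0
reused_gib = 53.48   # "reused 53.48 GiB (89%)" - zero or unchanged chunks
seconds = 39         # total backup time

nominal = total_gib / seconds * 1024                 # headline rate, MiB/s
actual = (total_gib - reused_gib) / seconds * 1024   # data really written

print(f"nominal: {nominal:.0f} MiB/s, actually written: {actual:.0f} MiB/s")
```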

Anyone have any ideas on how to tackle or tweak this?
 
We have an issue with one of our VMs and slow backups. We see this each time, just on that VM, during backup:

Code:
INFO: scsi0: dirty-bitmap status: existing bitmap was invalid and has been cleared

I am working out why that is; I searched the forum and see you have the same issue. I assume the invalid bitmap is related to the slow backup.
 
INFO: scsi0: dirty-bitmap status: existing bitmap was invalid and has been cleared
You can't back up to more than one destination if you want to use the dirty bitmap.
If you have another Proxmox Backup Server, you need to use Sync.
 
We have an issue with one of our VMs and slow backups. We see this each time, just on that VM, during backup:

Code:
INFO: scsi0: dirty-bitmap status: existing bitmap was invalid and has been cleared

I am working out why that is; I searched the forum and see you have the same issue. I assume the invalid bitmap is related to the slow backup.
Dirty bitmapping only works when:
- you don't stop the VM
- you don't reboot the server
- you don't use LXCs
- you don't use stop-mode backups, as these stop the VM
- you always back up the VM to the same PBS datastore
 

Thank you for that data.
 
