Slow Backup Transfer Speeds between Nodes and PBS

helojunkie

We have three Dell R640 servers: all NVMe-backed ZFS storage, 1TB RAM each, a 100G main interface, and a dedicated 10G Corosync interface, all connected to Cisco Nexus 9k switches. Our PBS is also a Dell R640 with 100% NVMe-backed storage (RAIDZ2), 256GB RAM, and a 100G main interface only.

Running iperf3 with parallel streams, I can nearly saturate the 100G links between the cluster nodes and the Proxmox Backup Server.
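
(For reference, a parallel iperf3 test of this kind looks roughly like the following; the stream count and duration here are illustrative, not the exact values from the run above.)

Code:
# on the PBS: start a listener
iperf3 -s

# on a cluster node: 8 parallel streams toward the PBS for 30 seconds
iperf3 -c <pbs-address> -P 8 -t 30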

But I consistently see poor transfer speeds for my backups, and the speeds vary noticeably from VM to VM:

Code:
137: 2024-02-18 10:02:15 INFO: transferred 60.00 GiB in 96 seconds (640.0 MiB/s)
137: 2024-02-18 10:17:21 INFO: transferred 60.00 GiB in 101 seconds (608.3 MiB/s)


144: 2024-02-18 10:10:42 INFO: transferred 53.64 GiB in 480 seconds (114.4 MiB/s)
144: 2024-02-18 11:14:08 INFO: transferred 92.70 GiB in 836 seconds (113.6 MiB/s)
144: 2024-02-18 11:25:08 INFO: transferred 66.81 GiB in 605 seconds (113.1 MiB/s)


Outside of Proxmox, we have a lot of 10G- and 100G-connected servers, including almost 14PiB hosted on multiple TrueNAS Scale systems, and I have no throughput issues during replication on those systems. So I am fairly sure my issue is not network related, but this is my first Proxmox Backup Server, so I am not sure whether this is normal or whether I should be looking for other problems or ways to tune these systems.

Thanks for any advice!
 
The transferred data is only the new data; what is the total size?

The backup will re-read the entire source disk if the dirty bitmap is new (the dirty bitmap is recreated each time the VM is restarted).
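
To confirm whether fast incremental mode was used for a given run, you can check the vzdump task log on the node (the path below is the default PVE location; adjust the VMID as needed):

Code:
# shows the dirty-bitmap status line and the final transfer summary
grep -E 'dirty-bitmap|transferred' /var/log/vzdump/qemu-144.log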
 
Hi _gabriel

In these examples, VM 137 is 60GiB (and is not even running), and VM 144 is 300GiB and has not been rebooted in weeks.
 
A full log will help.

Edit: for 137, the numbers look good. PBS cannot be as fast as plain vzdump because of deduplication, compression, and AES encryption.
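
One way to see what that dedup/compression/encryption pipeline can sustain from a given node is the client's built-in benchmark (the repository string below is a placeholder):

Code:
# measures TLS upload speed to the PBS plus local SHA256, compression and AES256-GCM throughput
proxmox-backup-client benchmark --repository <user>@<realm>@<pbs-address>:<datastore>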
 
Code:
vzdump 100 105 106 107 108 110 112 113 118 120 142 143 144 --mailnotification failure --storage ProxBackup01 --notes-template '{{guestname}}' --mode snapshot --all 0 --node proxmox02


144: 2024-02-18 11:15:02 INFO: Starting Backup of VM 144 (qemu)
144: 2024-02-18 11:15:02 INFO: status = running
144: 2024-02-18 11:15:02 INFO: VM Name: Grandmaster
144: 2024-02-18 11:15:02 INFO: include disk 'scsi0' 'ssdimages:vm-144-disk-0' 300G
144: 2024-02-18 11:15:02 INFO: backup mode: snapshot
144: 2024-02-18 11:15:02 INFO: ionice priority: 7
144: 2024-02-18 11:15:02 INFO: snapshots found (not included into backup)
144: 2024-02-18 11:15:02 INFO: creating Proxmox Backup Server archive 'vm/144/2024-02-18T19:15:02Z'
144: 2024-02-18 11:15:02 INFO: issuing guest-agent 'fs-freeze' command
144: 2024-02-18 11:15:03 INFO: issuing guest-agent 'fs-thaw' command
144: 2024-02-18 11:15:03 INFO: started backup task '4706eef5-b7d1-4e0c-bd6e-29f0bf88356b'
144: 2024-02-18 11:15:03 INFO: resuming VM again
144: 2024-02-18 11:15:03 INFO: scsi0: dirty-bitmap status: OK (66.8 GiB of 300.0 GiB dirty)
144: 2024-02-18 11:15:03 INFO: using fast incremental mode (dirty-bitmap), 66.8 GiB dirty of 300.0 GiB total
144: 2024-02-18 11:15:06 INFO:   0% (368.0 MiB of 66.8 GiB) in 3s, read: 122.7 MiB/s, write: 121.3 MiB/s
144: 2024-02-18 11:15:09 INFO:   1% (728.0 MiB of 66.8 GiB) in 6s, read: 120.0 MiB/s, write: 120.0 MiB/s
144: 2024-02-18 11:15:15 INFO:   2% (1.4 GiB of 66.8 GiB) in 12s, read: 116.0 MiB/s, write: 116.0 MiB/s
144: 2024-02-18 11:15:21 INFO:   3% (2.1 GiB of 66.8 GiB) in 18s, read: 118.0 MiB/s, write: 117.3 MiB/s
144: 2024-02-18 11:15:27 INFO:   4% (2.8 GiB of 66.8 GiB) in 24s, read: 118.7 MiB/s, write: 118.7 MiB/s
144: 2024-02-18 11:15:33 INFO:   5% (3.5 GiB of 66.8 GiB) in 30s, read: 115.3 MiB/s, write: 115.3 MiB/s
144: 2024-02-18 11:15:38 INFO:   6% (4.0 GiB of 66.8 GiB) in 35s, read: 115.2 MiB/s, write: 114.4 MiB/s
144: 2024-02-18 11:15:44 INFO:   7% (4.7 GiB of 66.8 GiB) in 41s, read: 113.3 MiB/s, write: 113.3 MiB/s
144: 2024-02-18 11:15:51 INFO:   8% (5.5 GiB of 66.8 GiB) in 48s, read: 114.3 MiB/s, write: 114.3 MiB/s
144: 2024-02-18 11:15:56 INFO:   9% (6.0 GiB of 66.8 GiB) in 53s, read: 113.6 MiB/s, write: 113.6 MiB/s
144: 2024-02-18 11:16:03 INFO:  10% (6.7 GiB of 66.8 GiB) in 1m, read: 98.9 MiB/s, write: 98.3 MiB/s
144: 2024-02-18 11:16:10 INFO:  11% (7.4 GiB of 66.8 GiB) in 1m 7s, read: 107.4 MiB/s, write: 107.4 MiB/s
144: 2024-02-18 11:16:16 INFO:  12% (8.1 GiB of 66.8 GiB) in 1m 13s, read: 109.3 MiB/s, write: 108.0 MiB/s
144: 2024-02-18 11:16:22 INFO:  13% (8.7 GiB of 66.8 GiB) in 1m 19s, read: 110.7 MiB/s, write: 108.0 MiB/s
144: 2024-02-18 11:16:28 INFO:  14% (9.4 GiB of 66.8 GiB) in 1m 25s, read: 112.7 MiB/s, write: 112.0 MiB/s
144: 2024-02-18 11:16:34 INFO:  15% (10.1 GiB of 66.8 GiB) in 1m 31s, read: 126.7 MiB/s, write: 126.7 MiB/s
144: 2024-02-18 11:16:39 INFO:  16% (10.8 GiB of 66.8 GiB) in 1m 36s, read: 130.4 MiB/s, write: 128.8 MiB/s
144: 2024-02-18 11:16:45 INFO:  17% (11.4 GiB of 66.8 GiB) in 1m 42s, read: 116.7 MiB/s, write: 116.7 MiB/s
144: 2024-02-18 11:16:51 INFO:  18% (12.1 GiB of 66.8 GiB) in 1m 48s, read: 114.0 MiB/s, write: 113.3 MiB/s
144: 2024-02-18 11:16:56 INFO:  19% (12.7 GiB of 66.8 GiB) in 1m 53s, read: 121.6 MiB/s, write: 120.8 MiB/s
144: 2024-02-18 11:17:02 INFO:  20% (13.4 GiB of 66.8 GiB) in 1m 59s, read: 119.3 MiB/s, write: 118.7 MiB/s
144: 2024-02-18 11:17:08 INFO:  21% (14.1 GiB of 66.8 GiB) in 2m 5s, read: 121.3 MiB/s, write: 118.0 MiB/s
144: 2024-02-18 11:17:14 INFO:  22% (14.8 GiB of 66.8 GiB) in 2m 11s, read: 111.3 MiB/s, write: 110.0 MiB/s
144: 2024-02-18 11:17:20 INFO:  23% (15.4 GiB of 66.8 GiB) in 2m 17s, read: 110.7 MiB/s, write: 110.0 MiB/s
144: 2024-02-18 11:17:26 INFO:  24% (16.1 GiB of 66.8 GiB) in 2m 23s, read: 112.7 MiB/s, write: 110.7 MiB/s
144: 2024-02-18 11:17:32 INFO:  25% (16.8 GiB of 66.8 GiB) in 2m 29s, read: 128.0 MiB/s, write: 126.7 MiB/s
144: 2024-02-18 11:17:37 INFO:  26% (17.4 GiB of 66.8 GiB) in 2m 34s, read: 124.8 MiB/s, write: 124.8 MiB/s
144: 2024-02-18 11:17:44 INFO:  27% (18.1 GiB of 66.8 GiB) in 2m 41s, read: 99.4 MiB/s, write: 98.9 MiB/s
144: 2024-02-18 11:17:50 INFO:  28% (18.8 GiB of 66.8 GiB) in 2m 47s, read: 116.0 MiB/s, write: 116.0 MiB/s
144: 2024-02-18 11:17:56 INFO:  29% (19.5 GiB of 66.8 GiB) in 2m 53s, read: 116.7 MiB/s, write: 116.0 MiB/s
144: 2024-02-18 11:18:01 INFO:  30% (20.1 GiB of 66.8 GiB) in 2m 58s, read: 119.2 MiB/s, write: 119.2 MiB/s
144: 2024-02-18 11:18:07 INFO:  31% (20.7 GiB of 66.8 GiB) in 3m 4s, read: 116.0 MiB/s, write: 116.0 MiB/s
144: 2024-02-18 11:18:14 INFO:  32% (21.5 GiB of 66.8 GiB) in 3m 11s, read: 108.0 MiB/s, write: 106.9 MiB/s
144: 2024-02-18 11:18:20 INFO:  33% (22.1 GiB of 66.8 GiB) in 3m 17s, read: 112.7 MiB/s, write: 112.7 MiB/s
144: 2024-02-18 11:18:26 INFO:  34% (22.8 GiB of 66.8 GiB) in 3m 23s, read: 120.7 MiB/s, write: 119.3 MiB/s
144: 2024-02-18 11:18:31 INFO:  35% (23.5 GiB of 66.8 GiB) in 3m 28s, read: 132.8 MiB/s, write: 132.8 MiB/s
144: 2024-02-18 11:18:36 INFO:  36% (24.1 GiB of 66.8 GiB) in 3m 33s, read: 128.8 MiB/s, write: 128.8 MiB/s
144: 2024-02-18 11:18:41 INFO:  37% (24.8 GiB of 66.8 GiB) in 3m 38s, read: 139.2 MiB/s, write: 138.4 MiB/s
144: 2024-02-18 11:18:47 INFO:  38% (25.5 GiB of 66.8 GiB) in 3m 44s, read: 120.0 MiB/s, write: 119.3 MiB/s
144: 2024-02-18 11:18:52 INFO:  39% (26.1 GiB of 66.8 GiB) in 3m 49s, read: 118.4 MiB/s, write: 117.6 MiB/s
144: 2024-02-18 11:18:58 INFO:  40% (26.8 GiB of 66.8 GiB) in 3m 55s, read: 118.0 MiB/s, write: 118.0 MiB/s
144: 2024-02-18 11:19:04 INFO:  41% (27.5 GiB of 66.8 GiB) in 4m 1s, read: 117.3 MiB/s, write: 117.3 MiB/s
144: 2024-02-18 11:19:10 INFO:  42% (28.1 GiB of 66.8 GiB) in 4m 7s, read: 114.7 MiB/s, write: 114.0 MiB/s
144: 2024-02-18 11:19:16 INFO:  43% (28.8 GiB of 66.8 GiB) in 4m 13s, read: 114.7 MiB/s, write: 112.7 MiB/s
144: 2024-02-18 11:19:23 INFO:  44% (29.5 GiB of 66.8 GiB) in 4m 20s, read: 102.3 MiB/s, write: 101.1 MiB/s
144: 2024-02-18 11:19:29 INFO:  45% (30.2 GiB of 66.8 GiB) in 4m 26s, read: 113.3 MiB/s, write: 113.3 MiB/s
144: 2024-02-18 11:19:35 INFO:  46% (30.8 GiB of 66.8 GiB) in 4m 32s, read: 111.3 MiB/s, write: 109.3 MiB/s
144: 2024-02-18 11:19:41 INFO:  47% (31.5 GiB of 66.8 GiB) in 4m 38s, read: 112.0 MiB/s, write: 111.3 MiB/s
144: 2024-02-18 11:19:47 INFO:  48% (32.1 GiB of 66.8 GiB) in 4m 44s, read: 110.0 MiB/s, write: 109.3 MiB/s
144: 2024-02-18 11:19:53 INFO:  49% (32.8 GiB of 66.8 GiB) in 4m 50s, read: 115.3 MiB/s, write: 112.0 MiB/s
144: 2024-02-18 11:19:59 INFO:  50% (33.5 GiB of 66.8 GiB) in 4m 56s, read: 114.0 MiB/s, write: 112.7 MiB/s
144: 2024-02-18 11:20:06 INFO:  51% (34.2 GiB of 66.8 GiB) in 5m 3s, read: 102.9 MiB/s, write: 102.3 MiB/s
144: 2024-02-18 11:20:12 INFO:  52% (34.8 GiB of 66.8 GiB) in 5m 9s, read: 102.0 MiB/s, write: 100.0 MiB/s
144: 2024-02-18 11:20:19 INFO:  53% (35.5 GiB of 66.8 GiB) in 5m 16s, read: 106.9 MiB/s, write: 105.1 MiB/s
144: 2024-02-18 11:20:25 INFO:  54% (36.1 GiB of 66.8 GiB) in 5m 22s, read: 104.7 MiB/s, write: 103.3 MiB/s
144: 2024-02-18 11:20:32 INFO:  55% (36.9 GiB of 66.8 GiB) in 5m 29s, read: 110.9 MiB/s, write: 109.1 MiB/s
144: 2024-02-18 11:20:37 INFO:  56% (37.5 GiB of 66.8 GiB) in 5m 34s, read: 121.6 MiB/s, write: 119.2 MiB/s
144: 2024-02-18 11:20:43 INFO:  57% (38.2 GiB of 66.8 GiB) in 5m 40s, read: 120.0 MiB/s, write: 118.7 MiB/s
144: 2024-02-18 11:20:49 INFO:  58% (38.8 GiB of 66.8 GiB) in 5m 46s, read: 108.7 MiB/s, write: 104.0 MiB/s
144: 2024-02-18 11:20:55 INFO:  59% (39.4 GiB of 66.8 GiB) in 5m 52s, read: 108.7 MiB/s, write: 107.3 MiB/s
144: 2024-02-18 11:21:03 INFO:  60% (40.1 GiB of 66.8 GiB) in 6m, read: 90.0 MiB/s, write: 89.5 MiB/s
144: 2024-02-18 11:21:09 INFO:  61% (40.8 GiB of 66.8 GiB) in 6m 6s, read: 105.3 MiB/s, write: 103.3 MiB/s
144: 2024-02-18 11:21:17 INFO:  62% (41.5 GiB of 66.8 GiB) in 6m 14s, read: 94.0 MiB/s, write: 93.0 MiB/s
144: 2024-02-18 11:21:23 INFO:  63% (42.1 GiB of 66.8 GiB) in 6m 20s, read: 105.3 MiB/s, write: 104.0 MiB/s
144: 2024-02-18 11:21:30 INFO:  64% (42.8 GiB of 66.8 GiB) in 6m 27s, read: 104.6 MiB/s, write: 104.0 MiB/s
144: 2024-02-18 11:21:36 INFO:  65% (43.4 GiB of 66.8 GiB) in 6m 33s, read: 104.0 MiB/s, write: 103.3 MiB/s
144: 2024-02-18 11:21:43 INFO:  66% (44.2 GiB of 66.8 GiB) in 6m 40s, read: 105.7 MiB/s, write: 104.0 MiB/s
144: 2024-02-18 11:21:50 INFO:  67% (44.9 GiB of 66.8 GiB) in 6m 47s, read: 104.0 MiB/s, write: 104.0 MiB/s
144: 2024-02-18 11:21:55 INFO:  68% (45.4 GiB of 66.8 GiB) in 6m 52s, read: 117.6 MiB/s, write: 117.6 MiB/s
144: 2024-02-18 11:22:02 INFO:  69% (46.2 GiB of 66.8 GiB) in 6m 59s, read: 109.1 MiB/s, write: 108.0 MiB/s
144: 2024-02-18 11:22:08 INFO:  70% (46.8 GiB of 66.8 GiB) in 7m 5s, read: 107.3 MiB/s, write: 106.7 MiB/s
144: 2024-02-18 11:22:14 INFO:  71% (47.5 GiB of 66.8 GiB) in 7m 11s, read: 110.0 MiB/s, write: 109.3 MiB/s
144: 2024-02-18 11:22:20 INFO:  72% (48.1 GiB of 66.8 GiB) in 7m 17s, read: 118.0 MiB/s, write: 117.3 MiB/s
144: 2024-02-18 11:22:26 INFO:  73% (48.8 GiB of 66.8 GiB) in 7m 23s, read: 116.0 MiB/s, write: 115.3 MiB/s
144: 2024-02-18 11:22:32 INFO:  74% (49.5 GiB of 66.8 GiB) in 7m 29s, read: 107.3 MiB/s, write: 106.7 MiB/s
144: 2024-02-18 11:22:40 INFO:  75% (50.2 GiB of 66.8 GiB) in 7m 37s, read: 91.0 MiB/s, write: 91.0 MiB/s
144: 2024-02-18 11:22:48 INFO:  76% (50.8 GiB of 66.8 GiB) in 7m 45s, read: 81.5 MiB/s, write: 80.5 MiB/s
144: 2024-02-18 11:22:55 INFO:  77% (51.5 GiB of 66.8 GiB) in 7m 52s, read: 106.3 MiB/s, write: 105.1 MiB/s
144: 2024-02-18 11:23:01 INFO:  78% (52.2 GiB of 66.8 GiB) in 7m 58s, read: 106.0 MiB/s, write: 104.7 MiB/s
144: 2024-02-18 11:23:07 INFO:  79% (52.8 GiB of 66.8 GiB) in 8m 4s, read: 112.7 MiB/s, write: 112.0 MiB/s
144: 2024-02-18 11:23:13 INFO:  80% (53.5 GiB of 66.8 GiB) in 8m 10s, read: 120.0 MiB/s, write: 118.7 MiB/s
144: 2024-02-18 11:23:19 INFO:  81% (54.1 GiB of 66.8 GiB) in 8m 16s, read: 105.3 MiB/s, write: 104.7 MiB/s
144: 2024-02-18 11:23:26 INFO:  82% (54.9 GiB of 66.8 GiB) in 8m 23s, read: 109.7 MiB/s, write: 108.0 MiB/s
144: 2024-02-18 11:23:32 INFO:  83% (55.5 GiB of 66.8 GiB) in 8m 29s, read: 104.7 MiB/s, write: 102.7 MiB/s
144: 2024-02-18 11:23:38 INFO:  84% (56.2 GiB of 66.8 GiB) in 8m 35s, read: 112.0 MiB/s, write: 112.0 MiB/s
144: 2024-02-18 11:23:45 INFO:  85% (56.9 GiB of 66.8 GiB) in 8m 42s, read: 105.7 MiB/s, write: 104.6 MiB/s
144: 2024-02-18 11:23:51 INFO:  86% (57.5 GiB of 66.8 GiB) in 8m 48s, read: 110.7 MiB/s, write: 110.7 MiB/s
144: 2024-02-18 11:23:57 INFO:  87% (58.2 GiB of 66.8 GiB) in 8m 54s, read: 107.3 MiB/s, write: 104.7 MiB/s
144: 2024-02-18 11:24:02 INFO:  88% (58.8 GiB of 66.8 GiB) in 8m 59s, read: 140.8 MiB/s, write: 140.8 MiB/s
144: 2024-02-18 11:24:07 INFO:  89% (59.6 GiB of 66.8 GiB) in 9m 4s, read: 148.0 MiB/s, write: 146.4 MiB/s
144: 2024-02-18 11:24:11 INFO:  90% (60.1 GiB of 66.8 GiB) in 9m 8s, read: 145.0 MiB/s, write: 145.0 MiB/s
144: 2024-02-18 11:24:17 INFO:  91% (60.8 GiB of 66.8 GiB) in 9m 14s, read: 121.3 MiB/s, write: 121.3 MiB/s
144: 2024-02-18 11:24:22 INFO:  92% (61.5 GiB of 66.8 GiB) in 9m 19s, read: 144.8 MiB/s, write: 143.2 MiB/s
144: 2024-02-18 11:24:26 INFO:  93% (62.2 GiB of 66.8 GiB) in 9m 23s, read: 156.0 MiB/s, write: 156.0 MiB/s
144: 2024-02-18 11:24:31 INFO:  94% (62.9 GiB of 66.8 GiB) in 9m 28s, read: 152.0 MiB/s, write: 152.0 MiB/s
144: 2024-02-18 11:24:36 INFO:  95% (63.6 GiB of 66.8 GiB) in 9m 33s, read: 136.0 MiB/s, write: 135.2 MiB/s
144: 2024-02-18 11:24:41 INFO:  96% (64.1 GiB of 66.8 GiB) in 9m 38s, read: 119.2 MiB/s, write: 117.6 MiB/s
144: 2024-02-18 11:24:47 INFO:  97% (64.8 GiB of 66.8 GiB) in 9m 44s, read: 120.0 MiB/s, write: 118.0 MiB/s
144: 2024-02-18 11:24:53 INFO:  98% (65.5 GiB of 66.8 GiB) in 9m 50s, read: 118.7 MiB/s, write: 116.0 MiB/s
144: 2024-02-18 11:24:59 INFO:  99% (66.2 GiB of 66.8 GiB) in 9m 56s, read: 119.3 MiB/s, write: 118.7 MiB/s
144: 2024-02-18 11:25:04 INFO: 100% (66.8 GiB of 66.8 GiB) in 10m 1s, read: 116.0 MiB/s, write: 114.4 MiB/s
144: 2024-02-18 11:25:04 INFO: Waiting for server to finish backup validation...
144: 2024-02-18 11:25:08 INFO: backup was done incrementally, reused 233.74 GiB (77%)
144: 2024-02-18 11:25:08 INFO: transferred 66.81 GiB in 605 seconds (113.1 MiB/s)
144: 2024-02-18 11:25:08 INFO: adding notes to backup
144: 2024-02-18 11:25:08 INFO: Finished Backup of VM 144 (00:10:06)
 
Are any other jobs running on the PBS, like GC, verification, or sync?
If not, I would cross-test with a plain ext4 datastore on a dedicated NVMe drive.

Edit: it looks like the backups overlap, don't they? Try one VM backup alone.
What SSD model are they? Are they enterprise drives?
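
To rule out concurrent jobs on the PBS side, the running tasks can also be listed from its CLI:

Code:
# lists currently running tasks on the Proxmox Backup Server (add --all to include finished ones)
proxmox-backup-manager task list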
 
These are all brand-new DellEMC 4TB NVMe enterprise P4510 drives installed in new servers. The nodes have two vdevs each, with each vdev being 2 x 4TB drives mirrored. The PBS has eight of those same drives, but in a RAIDZ2 configuration.

No GC or verification was running at the time of these tests, and there were no overlapping backups; VM 144 above was the only backup running at the time.


I did some FIO testing on the PBS server and here are the results:

Code:
root@proxbackup01:~# ./FIO_Speed_Test.sh /mnt/datastore/ssd-zfs/test/
Running randwrite test with block size 1M, ioengine libaio, iodepth 16, direct 1, numjobs 5, fsync 0, using 5 files of size 1G on /mnt/datastore/ssd-zfs/test/
Average Write IOPS: 8,131.29
Average Write Bandwidth (MB/s): 8,131.29
Running randread test with block size 1M, ioengine libaio, iodepth 16, direct 1, numjobs 5, fsync 0, using 5 files of size 1G on /mnt/datastore/ssd-zfs/test/
Average Read IOPS: 15,008.78
Average Read Bandwidth (MB/s): 15,008.78
Running write test with block size 1M, ioengine libaio, iodepth 16, direct 1, numjobs 5, fsync 0, using 5 files of size 1G on /mnt/datastore/ssd-zfs/test/
Average Write IOPS: 10,774.70
Average Write Bandwidth (MB/s): 10,774.70
Running read test with block size 1M, ioengine libaio, iodepth 16, direct 1, numjobs 5, fsync 0, using 5 files of size 1G on /mnt/datastore/ssd-zfs/test/
Average Read IOPS: 16,108.98
Average Read Bandwidth (MB/s): 16,108.98
Running readwrite test with block size 1M, ioengine libaio, iodepth 16, direct 1, numjobs 5, fsync 0, using 5 files of size 1G on /mnt/datastore/ssd-zfs/test/
Average Read IOPS: 7,029.15
Average Write IOPS: 7,349.30
Average Read Bandwidth (MB/s): 7,029.15
Average Write Bandwidth (MB/s): 7,349.29


And this is the same FIO test on one of the Proxmox nodes:

Code:
root@proxmox01:~# mkdir /ssdimages/test
root@proxmox01:~# ./FIO_Speed_Test.sh /ssdimages/test/
Running randwrite test with block size 1M, ioengine libaio, iodepth 16, direct 1, numjobs 5, fsync 0, using 5 files of size 1G on /ssdimages/test/
Average Write IOPS: 7,237.17
Average Write Bandwidth (MB/s): 7,237.17
Running randread test with block size 1M, ioengine libaio, iodepth 16, direct 1, numjobs 5, fsync 0, using 5 files of size 1G on /ssdimages/test/
Average Read IOPS: 15,683.84
Average Read Bandwidth (MB/s): 15,683.84
Running write test with block size 1M, ioengine libaio, iodepth 16, direct 1, numjobs 5, fsync 0, using 5 files of size 1G on /ssdimages/test/
Average Write IOPS: 10,021.36
Average Write Bandwidth (MB/s): 10,021.36
Running read test with block size 1M, ioengine libaio, iodepth 16, direct 1, numjobs 5, fsync 0, using 5 files of size 1G on /ssdimages/test/
Average Read IOPS: 16,101.69
Average Read Bandwidth (MB/s): 16,101.69
Running readwrite test with block size 1M, ioengine libaio, iodepth 16, direct 1, numjobs 5, fsync 0, using 5 files of size 1G on /ssdimages/test/
Average Read IOPS: 6,969.49
Average Write IOPS: 7,286.92
Average Read Bandwidth (MB/s): 6,969.49
Average Write Bandwidth (MB/s): 7,286.92
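
For reference, based on the parameters printed above, each pass is roughly equivalent to an fio invocation like this (FIO_Speed_Test.sh itself isn't shown, so this is a sketch rather than the actual script):

Code:
fio --name=randwrite --directory=/mnt/datastore/ssd-zfs/test/ \
    --rw=randwrite --bs=1M --ioengine=libaio --iodepth=16 \
    --direct=1 --numjobs=5 --size=1G --group_reporting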

Here is the network test between the node that VM 144 is on and the PBS:

[Attachment: 1708301328735.png]
 
Hello, I don't have a solution for this somewhat old thread, but I just fired up a PBS, and between two servers with 10G fiber connections, 40-140 MiB/s is the backup speed. SAS spinning drives, Dell R730 and T630. iperf3 gives near 10G in both directions, with no extra load on the servers.
Code:
INFO:  32% (298.1 GiB of 931.5 GiB) in 1h 8m 45s, read: 83.0 MiB/s, write: 78.9 MiB/s
INFO:  33% (307.4 GiB of 931.5 GiB) in 1h 10m 17s, read: 103.2 MiB/s, write: 86.3 MiB/s
 
