PBS Performance improvements (enterprise all-flash)

tomstephens89

I have just replaced our backup server with a Dell machine running a pair of 8-core Xeon Gold 6144s with 512GB of RAM, and 24 x 12G 7.68TB SAS SSDs in a hardware RAID 10 on a Dell PERC H740P (I have also tested ZFS via HBA mode).

The PBS box has a pair of 40Gb NICs in an LACP bond to our Nexus rack switches running vPC.

Our Proxmox hosts each run a pair of 24-core Xeon Platinum 8268s with 1TB of RAM, and 8 x 4TB data-centre SATA SSDs in a hardware RAID 10 on a Dell PERC H740P.

The Proxmox boxes have 2 x 10Gb NICs in an LACP bond to our Nexus rack switches running vPC.

There is no routing between the Proxmox hosts and PBS, just straight layer 2 connectivity in the same VLAN.

Despite this hardware, backup and restore performance isn't what I was hoping for, netting what appears to be a consistent 200-300MB/s. See the data below showing an SCP transfer able to pull 500MB/s (which obviously includes TLS overhead), and an iperf run able to max out a single 10Gbit NIC in any of the bonds on our hosts.

I understand the CPU-bound TLS limit, but why am I not hitting it when running backups? Is it related to the chunk verify process, which the benchmark shows to be around 250MB/s?
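For context, the chunk-verify and TLS figures referred to here come from the built-in client benchmark; assuming the repository details given later in the thread (root@pam on 10.226.10.10, datastore rdg-pbs-primary), it can be re-run with:

```shell
# Built-in PBS benchmark: reports TLS upload speed to the server plus
# SHA256, compression and AES speeds of this client's CPU.
# The repository string below is assembled from this thread's storage.cfg.
proxmox-backup-client benchmark --repository root@pam@10.226.10.10:rdg-pbs-primary
```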

[Screenshots: SCP transfer at ~500MB/s, iperf saturating a 10Gbit link, and PBS benchmark output]

IO Delay is sub 0.1% on all hosts, pretty much 0 all the time. Yet backup performance of a new VM looks like this:


Code:
INFO: scsi0: dirty-bitmap status: created new
INFO:   0% (620.0 MiB of 80.0 GiB) in 3s, read: 206.7 MiB/s, write: 170.7 MiB/s
INFO:   1% (1.0 GiB of 80.0 GiB) in 6s, read: 142.7 MiB/s, write: 142.7 MiB/s
INFO:   2% (1.7 GiB of 80.0 GiB) in 11s, read: 136.8 MiB/s, write: 136.8 MiB/s
INFO:   3% (2.5 GiB of 80.0 GiB) in 17s, read: 138.0 MiB/s, write: 138.0 MiB/s
INFO:   4% (3.3 GiB of 80.0 GiB) in 23s, read: 132.7 MiB/s, write: 132.7 MiB/s
INFO:   5% (4.1 GiB of 80.0 GiB) in 29s, read: 138.7 MiB/s, write: 138.7 MiB/s
INFO:   6% (4.9 GiB of 80.0 GiB) in 35s, read: 130.0 MiB/s, write: 130.0 MiB/s
INFO:   7% (5.6 GiB of 80.0 GiB) in 41s, read: 132.0 MiB/s, write: 132.0 MiB/s
INFO:   8% (6.5 GiB of 80.0 GiB) in 48s, read: 123.4 MiB/s, write: 123.4 MiB/s
INFO:   9% (7.3 GiB of 80.0 GiB) in 55s, read: 125.1 MiB/s, write: 125.1 MiB/s
INFO:  10% (8.1 GiB of 80.0 GiB) in 1m 1s, read: 129.3 MiB/s, write: 129.3 MiB/s
INFO:  11% (8.8 GiB of 80.0 GiB) in 1m 7s, read: 125.3 MiB/s, write: 125.3 MiB/s
INFO:  12% (9.7 GiB of 80.0 GiB) in 1m 14s, read: 128.6 MiB/s, write: 128.6 MiB/s
INFO:  13% (10.4 GiB of 80.0 GiB) in 1m 20s, read: 126.7 MiB/s, write: 126.7 MiB/s
INFO:  14% (11.3 GiB of 80.0 GiB) in 1m 27s, read: 119.4 MiB/s, write: 119.4 MiB/s
INFO:  15% (12.1 GiB of 80.0 GiB) in 1m 33s, read: 136.7 MiB/s, write: 136.0 MiB/s
INFO:  16% (12.8 GiB of 80.0 GiB) in 1m 39s, read: 130.7 MiB/s, write: 130.7 MiB/s
INFO:  17% (13.7 GiB of 80.0 GiB) in 1m 46s, read: 127.4 MiB/s, write: 127.4 MiB/s
INFO:  18% (14.5 GiB of 80.0 GiB) in 1m 52s, read: 130.0 MiB/s, write: 130.0 MiB/s
INFO:  19% (15.3 GiB of 80.0 GiB) in 1m 58s, read: 138.7 MiB/s, write: 138.7 MiB/s
INFO:  20% (16.1 GiB of 80.0 GiB) in 2m 4s, read: 134.7 MiB/s, write: 133.3 MiB/s
INFO:  21% (16.9 GiB of 80.0 GiB) in 2m 10s, read: 141.3 MiB/s, write: 141.3 MiB/s
INFO:  22% (17.7 GiB of 80.0 GiB) in 2m 16s, read: 144.7 MiB/s, write: 144.7 MiB/s
INFO:  23% (18.4 GiB of 80.0 GiB) in 2m 21s, read: 142.4 MiB/s, write: 142.4 MiB/s
INFO:  26% (20.9 GiB of 80.0 GiB) in 2m 24s, read: 844.0 MiB/s, write: 93.3 MiB/s
INFO:  27% (21.9 GiB of 80.0 GiB) in 2m 27s, read: 342.7 MiB/s, write: 117.3 MiB/s
INFO:  28% (22.5 GiB of 80.0 GiB) in 2m 31s, read: 142.0 MiB/s, write: 142.0 MiB/s
INFO:  29% (23.3 GiB of 80.0 GiB) in 2m 38s, read: 128.0 MiB/s, write: 126.9 MiB/s
INFO:  30% (24.0 GiB of 80.0 GiB) in 2m 43s, read: 140.0 MiB/s, write: 139.2 MiB/s
INFO:  31% (24.8 GiB of 80.0 GiB) in 2m 50s, read: 115.4 MiB/s, write: 115.4 MiB/s
INFO:  32% (25.7 GiB of 80.0 GiB) in 2m 57s, read: 132.0 MiB/s, write: 132.0 MiB/s
INFO:  33% (26.5 GiB of 80.0 GiB) in 3m 3s, read: 136.0 MiB/s, write: 136.0 MiB/s
INFO:  34% (27.2 GiB of 80.0 GiB) in 3m 9s, read: 123.3 MiB/s, write: 123.3 MiB/s
INFO:  35% (28.1 GiB of 80.0 GiB) in 3m 16s, read: 124.6 MiB/s, write: 124.6 MiB/s
INFO:  36% (28.8 GiB of 80.0 GiB) in 3m 22s, read: 124.7 MiB/s, write: 124.7 MiB/s
INFO:  37% (29.7 GiB of 80.0 GiB) in 3m 28s, read: 150.7 MiB/s, write: 150.7 MiB/s
INFO:  38% (30.5 GiB of 80.0 GiB) in 3m 35s, read: 119.4 MiB/s, write: 119.4 MiB/s
INFO:  40% (32.3 GiB of 80.0 GiB) in 3m 38s, read: 598.7 MiB/s, write: 114.7 MiB/s
INFO:  41% (32.9 GiB of 80.0 GiB) in 3m 43s, read: 128.8 MiB/s, write: 128.8 MiB/s
INFO:  42% (33.7 GiB of 80.0 GiB) in 3m 48s, read: 161.6 MiB/s, write: 161.6 MiB/s
INFO:  43% (34.4 GiB of 80.0 GiB) in 3m 54s, read: 127.3 MiB/s, write: 127.3 MiB/s
INFO:  44% (35.3 GiB of 80.0 GiB) in 4m 1s, read: 121.7 MiB/s, write: 121.7 MiB/s
INFO:  45% (36.0 GiB of 80.0 GiB) in 4m 7s, read: 129.3 MiB/s, write: 129.3 MiB/s
INFO:  46% (36.8 GiB of 80.0 GiB) in 4m 13s, read: 134.7 MiB/s, write: 134.7 MiB/s
INFO:  47% (37.7 GiB of 80.0 GiB) in 4m 19s, read: 148.0 MiB/s, write: 148.0 MiB/s
INFO:  48% (38.4 GiB of 80.0 GiB) in 4m 25s, read: 127.3 MiB/s, write: 127.3 MiB/s
INFO:  49% (39.3 GiB of 80.0 GiB) in 4m 31s, read: 148.0 MiB/s, write: 148.0 MiB/s
INFO:  50% (40.0 GiB of 80.0 GiB) in 4m 37s, read: 124.0 MiB/s, write: 124.0 MiB/s
INFO:  51% (40.9 GiB of 80.0 GiB) in 4m 41s, read: 220.0 MiB/s, write: 119.0 MiB/s
INFO:  52% (41.7 GiB of 80.0 GiB) in 4m 47s, read: 144.7 MiB/s, write: 140.0 MiB/s
INFO:  53% (42.5 GiB of 80.0 GiB) in 4m 52s, read: 150.4 MiB/s, write: 150.4 MiB/s
INFO:  54% (43.3 GiB of 80.0 GiB) in 4m 59s, read: 122.9 MiB/s, write: 122.9 MiB/s
INFO:  55% (44.0 GiB of 80.0 GiB) in 5m 5s, read: 128.0 MiB/s, write: 128.0 MiB/s
INFO:  56% (44.9 GiB of 80.0 GiB) in 5m 11s, read: 143.3 MiB/s, write: 142.0 MiB/s
INFO:  57% (45.6 GiB of 80.0 GiB) in 5m 17s, read: 123.3 MiB/s, write: 123.3 MiB/s
INFO:  58% (46.5 GiB of 80.0 GiB) in 5m 24s, read: 128.0 MiB/s, write: 128.0 MiB/s
INFO:  59% (47.3 GiB of 80.0 GiB) in 5m 30s, read: 140.0 MiB/s, write: 132.7 MiB/s
INFO:  60% (48.2 GiB of 80.0 GiB) in 5m 36s, read: 145.3 MiB/s, write: 124.7 MiB/s
INFO:  61% (48.9 GiB of 80.0 GiB) in 5m 41s, read: 145.6 MiB/s, write: 144.8 MiB/s
INFO:  62% (49.7 GiB of 80.0 GiB) in 5m 48s, read: 121.7 MiB/s, write: 121.7 MiB/s
INFO:  63% (50.4 GiB of 80.0 GiB) in 5m 54s, read: 122.0 MiB/s, write: 122.0 MiB/s
INFO:  64% (51.3 GiB of 80.0 GiB) in 6m 1s, read: 124.0 MiB/s, write: 124.0 MiB/s
INFO:  65% (52.0 GiB of 80.0 GiB) in 6m 7s, read: 127.3 MiB/s, write: 127.3 MiB/s
INFO:  66% (52.9 GiB of 80.0 GiB) in 6m 14s, read: 125.1 MiB/s, write: 125.1 MiB/s
INFO:  67% (53.7 GiB of 80.0 GiB) in 6m 20s, read: 144.7 MiB/s, write: 144.7 MiB/s
INFO:  68% (54.6 GiB of 80.0 GiB) in 6m 24s, read: 221.0 MiB/s, write: 165.0 MiB/s
INFO:  69% (55.2 GiB of 80.0 GiB) in 6m 29s, read: 132.8 MiB/s, write: 132.8 MiB/s
INFO:  73% (58.7 GiB of 80.0 GiB) in 6m 34s, read: 708.8 MiB/s, write: 133.6 MiB/s
INFO:  89% (71.3 GiB of 80.0 GiB) in 6m 37s, read: 4.2 GiB/s, write: 0 B/s
INFO: 100% (80.0 GiB of 80.0 GiB) in 6m 40s, read: 2.9 GiB/s, write: 1.3 MiB/s
INFO: Waiting for server to finish backup validation...
INFO: backup is sparse: 29.32 GiB (36%) total zero data
INFO: backup was done incrementally, reused 29.35 GiB (36%)
INFO: transferred 80.00 GiB in 401 seconds (204.3 MiB/s)

Why so slow? Where can I look for a bottleneck?

Here is a comprehensive set of disk benchmarks on the proxmox nodes:

/dev/sda3 is the main RAID10 on the PVE nodes:

Code:
1) fio --ioengine=libaio --direct=1 --sync=1 --rw=read --bs=4k --numjobs=1 --iodepth=1 --runtime=30 --time_based --buffered=0 --name XXX --filename=/dev/sda3
2) fio --ioengine=libaio --direct=1 --sync=1 --rw=randread --bs=4k --numjobs=1 --iodepth=1 --runtime=30 --time_based --buffered=0 --name XXX --filename=/dev/sda3
3) fio --ioengine=libaio --direct=1 --sync=1 --rw=randread --bs=4k --numjobs=1 --iodepth=8 --runtime=30 --time_based --buffered=0 --name XXX --filename=/dev/sda3
4) fio --ioengine=libaio --direct=1 --sync=1 --rw=randread --bs=4k --numjobs=1 --iodepth=64 --runtime=30 --time_based --buffered=0 --name XXX --filename=/dev/sda3
5) fio --ioengine=libaio --direct=1 --sync=1 --rw=randread --bs=4k --numjobs=1 --iodepth=256 --runtime=30 --time_based --buffered=0 --name XXX --filename=/dev/sda3
6) fio --ioengine=libaio --direct=1 --sync=1 --rw=read --bs=1M --numjobs=1 --iodepth=1 --runtime=30 --time_based --buffered=0 --name XXX --filename=/dev/sda3
7) fio --ioengine=libaio --direct=1 --sync=1 --rw=read --bs=4M --numjobs=1 --iodepth=1 --runtime=30 --time_based --buffered=0 --name XXX --filename=/dev/sda3
8) fio --ioengine=libaio --direct=1 --sync=1 --rw=write --bs=4k --numjobs=1 --iodepth=1 --runtime=30 --time_based --buffered=0 --name XXX --filename=/dev/sda3
9) fio --ioengine=libaio --direct=1 --sync=1 --rw=randwrite --bs=4k --numjobs=1 --iodepth=1 --runtime=30 --time_based --buffered=0 --name XXX --filename=/dev/sda3
10) fio --ioengine=libaio --direct=1 --sync=1 --rw=randwrite --bs=4k --numjobs=1 --iodepth=8 --runtime=30 --time_based --buffered=0 --name XXX --filename=/dev/sda3
11) fio --ioengine=libaio --direct=1 --sync=1 --rw=randwrite --bs=4k --numjobs=1 --iodepth=64 --runtime=30 --time_based --buffered=0 --name XXX --filename=/dev/sda3
12) fio --ioengine=libaio --direct=1 --sync=1 --rw=randwrite --bs=4k --numjobs=1 --iodepth=256 --runtime=30 --time_based --buffered=0 --name XXX --filename=/dev/sda3
13) fio --ioengine=libaio --direct=1 --sync=1 --rw=write --bs=1M --numjobs=1 --iodepth=1 --runtime=30 --time_based --buffered=0 --name XXX --filename=/dev/sda3
14) fio --ioengine=libaio --direct=1 --sync=1 --rw=write --bs=4M --numjobs=1 --iodepth=1 --runtime=30 --time_based --buffered=0 --name XXX --filename=/dev/sda3
15) fio --ioengine=libaio --direct=1 --sync=1 --randrepeat=1 --rw=randrw --rwmixread=75 --bs=4k --iodepth=64 --runtime=30 --time_based --buffered=0 --name XXX --filename=/dev/sda3


1) [r=147MiB/s][r=37.7k IOPS]
2) [r=97.0MiB/s][r=24.8k IOPS]
3) [r=297MiB/s][r=76.1k IOPS]
4) [r=482MiB/s][r=123k IOPS]
5) [r=507MiB/s][r=130k IOPS]
6) [r=2294MiB/s][r=2294 IOPS]
7) [r=1688MiB/s][r=422 IOPS]
8) [w=144MiB/s][w=36.8k IOPS]
9) [w=78.3MiB/s][w=20.0k IOPS]
10) [w=129MiB/s][w=33.0k IOPS]
11) [w=142MiB/s][w=36.5k IOPS]
12) [w=141MiB/s][w=36.0k IOPS]
13) [w=2017MiB/s][w=2017 IOPS]
14) [w=2016MiB/s][w=504 IOPS]
15) [r=284MiB/s,w=94.6MiB/s][r=72.6k,w=24.2k IOPS]
 
All my investigation so far indicates that the chunk verify performance shown by the benchmark could be the limiting factor here.

If I run large datastore-level verify jobs on a schedule, is a verify-on-backup step needed at all, and how can it be disabled? Why does the client need to verify if the backup server itself can do this on a schedule?
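For what it's worth, PBS exposes a per-datastore verify-new option that controls whether freshly written snapshots are verified immediately after backup. Assuming that is the setting in play here, it can be toggled on the server side:

```shell
# Disable automatic verification of newly written snapshots on this
# datastore (name taken from this thread); scheduled verify jobs
# configured in verification.cfg still run as normal.
proxmox-backup-manager datastore update rdg-pbs-primary --verify-new false
```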
 
Further testing reveals no disk bottleneck at the source, no disk bottleneck at the PBS, iperf able to saturate the network, and SCP able to exceed 500MB/s.

Yet backups average around 200MB/s and restores are much slower still.

Anyone care to shed some light on this? My hardware and network are good, but performance is terrible. Why?
 
Can you provide the output of the following commands?

PVE:
Code:
pveversion -v
lsblk
cat /etc/pve/storage.cfg

PBS:
Code:
proxmox-backup-manager versions --verbose
lsblk
cat /etc/proxmox-backup/datastore.cfg
cat /etc/proxmox-backup/verification.cfg

It looks like the source (PVE) is not the limiting factor here:
Code:
INFO:  23% (18.4 GiB of 80.0 GiB) in 2m 21s, read: 142.4 MiB/s, write: 142.4 MiB/s
INFO:  26% (20.9 GiB of 80.0 GiB) in 2m 24s, read: 844.0 MiB/s, write: 93.3 MiB/s
INFO:  27% (21.9 GiB of 80.0 GiB) in 2m 27s, read: 342.7 MiB/s, write: 117.3 MiB/s
When data is sparse and it doesn't have to write much data, the read speed increases a lot.

Have you benchmarked the storage on the PBS as well?
Please note that `fio` write tests are destructive: don't specify a disk directly, but rather a file on the filesystem.
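As a non-destructive variant of the earlier command set, the same sequential write test can be pointed at a scratch file on the datastore filesystem (the path is the datastore mount from this thread; `--size` is an arbitrary choice):

```shell
# Sequential 4M write test against a file instead of the raw device.
# Adjust the path and --size to suit; delete the scratch file afterwards.
fio --ioengine=libaio --direct=1 --sync=1 --rw=write --bs=4M \
    --numjobs=1 --iodepth=1 --runtime=30 --time_based \
    --size=8G --name=pbs-seq-write \
    --filename=/mnt/datastore/rdg-pbs-primary/fio-test.bin
rm -f /mnt/datastore/rdg-pbs-primary/fio-test.bin
```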
 

Latest builds all round; the PBS was installed 2 days ago and the PVE hosts a week ago.

Yes, I am aware of the sparse backups, but when it has actual data to copy, it maxes out around 200MB/s, frequently less.

Benchmarks on the PBS storage with fio are even better. It's a 24-SSD hardware RAID 10 array, no ZFS. I have also edited my fio tests to target files rather than disks; learnt the hard way after destroying the source node with those benchmarks.

PVE:

Code:
root@jupiter:~# pveversion -v
proxmox-ve: 8.2.0 (running kernel: 6.8.8-2-pve)
pve-manager: 8.2.4 (running version: 8.2.4/faa83925c9641325)
proxmox-kernel-helper: 8.1.0
proxmox-kernel-6.8: 6.8.8-2
proxmox-kernel-6.8.8-2-pve-signed: 6.8.8-2
proxmox-kernel-6.8.4-2-pve-signed: 6.8.4-2
ceph-fuse: 17.2.7-pve3
corosync: 3.1.7-pve3
criu: 3.17.1-2
glusterfs-client: 10.3-5
ifupdown2: 3.2.0-1+pmx8
ksm-control-daemon: 1.5-1
libjs-extjs: 7.0.0-4
libknet1: 1.28-pve1
libproxmox-acme-perl: 1.5.1
libproxmox-backup-qemu0: 1.4.1
libproxmox-rs-perl: 0.3.3
libpve-access-control: 8.1.4
libpve-apiclient-perl: 3.3.2
libpve-cluster-api-perl: 8.0.7
libpve-cluster-perl: 8.0.7
libpve-common-perl: 8.2.1
libpve-guest-common-perl: 5.1.3
libpve-http-server-perl: 5.1.0
libpve-network-perl: 0.9.8
libpve-rs-perl: 0.8.9
libpve-storage-perl: 8.2.3
libspice-server1: 0.15.1-1
lvm2: 2.03.16-2
lxc-pve: 6.0.0-1
lxcfs: 6.0.0-pve2
novnc-pve: 1.4.0-3
openvswitch-switch: 3.1.0-2+deb12u1
proxmox-backup-client: 3.2.4-1
proxmox-backup-file-restore: 3.2.4-1
proxmox-firewall: 0.4.2
proxmox-kernel-helper: 8.1.0
proxmox-mail-forward: 0.2.3
proxmox-mini-journalreader: 1.4.0
proxmox-offline-mirror-helper: 0.6.6
proxmox-widget-toolkit: 4.2.3
pve-cluster: 8.0.7
pve-container: 5.1.12
pve-docs: 8.2.2
pve-edk2-firmware: 4.2023.08-4
pve-esxi-import-tools: 0.7.1
pve-firewall: 5.0.7
pve-firmware: 3.12-1
pve-ha-manager: 4.0.5
pve-i18n: 3.2.2
pve-qemu-kvm: 9.0.0-3
pve-xtermjs: 5.3.0-3
qemu-server: 8.2.1
smartmontools: 7.3-pve1
spiceterm: 3.3.0
swtpm: 0.8.0+pve1
vncterm: 1.8.0
zfsutils-linux: 2.2.4-pve1

Code:
root@jupiter:~# lsblk
NAME                         MAJ:MIN RM  SIZE RO TYPE MOUNTPOINTS
sda                            8:0    0   14T  0 disk
├─sda1                         8:1    0 1007K  0 part
├─sda2                         8:2    0    1G  0 part /boot/efi
└─sda3                         8:3    0   14T  0 part
  ├─pve-swap                 252:0    0    8G  0 lvm  [SWAP]
  ├─pve-root                 252:1    0   96G  0 lvm  /
  ├─pve-data_tmeta           252:2    0 15.9G  0 lvm 
  │ └─pve-data-tpool         252:4    0 13.8T  0 lvm 
  │   ├─pve-data             252:5    0 13.8T  1 lvm 
  │   ├─pve-vm--100--disk--0 252:6    0    4M  0 lvm 
  │   ├─pve-vm--100--disk--1 252:7    0  100G  0 lvm 
  │   ├─pve-vm--100--disk--2 252:8    0    4M  0 lvm 
  │   ├─pve-vm--101--disk--0 252:9    0  120G  0 lvm 
  │   └─pve-vm--101--disk--1 252:10   0    4M  0 lvm 
  └─pve-data_tdata           252:3    0 13.8T  0 lvm 
    └─pve-data-tpool         252:4    0 13.8T  0 lvm 
      ├─pve-data             252:5    0 13.8T  1 lvm 
      ├─pve-vm--100--disk--0 252:6    0    4M  0 lvm 
      ├─pve-vm--100--disk--1 252:7    0  100G  0 lvm 
      ├─pve-vm--100--disk--2 252:8    0    4M  0 lvm 
      ├─pve-vm--101--disk--0 252:9    0  120G  0 lvm 
      └─pve-vm--101--disk--1 252:10   0    4M  0 lvm 
sr0                           11:0    1 1024M  0 rom

Code:
root@jupiter:~# cat /etc/pve/storage.cfg
dir: local
        path /var/lib/vz
        content backup,vztmpl,iso

lvmthin: local-lvm
        thinpool data
        vgname pve
        content rootdir,images

pbs: pbs-ssd-onhand
        datastore rdg-pbs-primary
        server 10.226.10.10
        content backup
        fingerprint 21:ce:cf:10:64:d5:5f:95:30:c8:43:7b:9a:e1:f9:d4:70:48:6c:82:f9:93:19:31:a7:b5:b7:c8:72:2f:03:86
        namespace onhand
        prune-backups keep-all=1
        username root@pam

pbs: pbs-ssd-testing
        datastore rdg-pbs-primary
        server 10.226.10.10
        content backup
        fingerprint 21:ce:cf:10:64:d5:5f:95:30:c8:43:7b:9a:e1:f9:d4:70:48:6c:82:f9:93:19:31:a7:b5:b7:c8:72:2f:03:86
        namespace testing
        prune-backups keep-all=1
        username root@pam

PBS:

Code:
root@pbs-primary:~# proxmox-backup-manager versions --verbose
proxmox-backup                    3.2.0        running kernel: 6.8.4-2-pve
proxmox-backup-server             3.2.6-1      running version: 3.2.2     
proxmox-kernel-helper             8.1.0                                   
proxmox-kernel-6.8                6.8.8-2                                 
proxmox-kernel-6.8.4-2-pve-signed 6.8.4-2                                 
ifupdown2                         3.2.0-1+pmx8                           
libjs-extjs                       7.0.0-4                                 
proxmox-backup-docs               3.2.6-1                                 
proxmox-backup-client             3.2.6-1                                 
proxmox-mail-forward              0.2.3                                   
proxmox-mini-journalreader        1.4.0                                   
proxmox-offline-mirror-helper     0.6.6                                   
proxmox-widget-toolkit            4.2.3                                   
pve-xtermjs                       5.3.0-3                                 
smartmontools                     7.3-pve1                               
zfsutils-linux                    2.2.4-pve1

Code:
root@pbs-primary:~# lsblk
NAME         MAJ:MIN RM   SIZE RO TYPE MOUNTPOINTS
sda            8:0    0 465.3G  0 disk
├─sda1         8:1    0  1007K  0 part
├─sda2         8:2    0     1G  0 part /boot/efi
└─sda3         8:3    0 464.2G  0 part
  ├─pbs-swap 252:0    0     8G  0 lvm  [SWAP]
  └─pbs-root 252:1    0 440.2G  0 lvm  /
sdb            8:16   0  83.8T  0 disk
└─sdb1         8:17   0  83.8T  0 part /mnt/datastore/rdg-pbs-primary

Code:
root@pbs-primary:~# cat /etc/proxmox-backup/datastore.cfg
datastore: rdg-pbs-primary
        gc-schedule mon 18:15
        path /mnt/datastore/rdg-pbs-primary

Code:
root@pbs-primary:~# cat /etc/proxmox-backup/verification.cfg
verification: v-c04f2496-b1f2
        ignore-verified true
        ns
        outdated-after 30
        schedule 03:00
        store rdg-pbs-primary
 
Thank you for the additional information!

The host names used here seem to be different from the ones in the screenshots:
Code:
mercury -> jupiter
ssdpbs -> pbs-primary

Which PVE host was used for the backup (backup task log above)?
Which IP does that host use?

The write speeds in the backup task log could still hint at a network bottleneck, since they would match a 1Gbit/s network rather than a 10Gbit/s one.
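One quick way to rule that out is to check the negotiated speed of each bond member on both ends. The interface and bond names below are placeholders; substitute the actual ones on the hosts:

```shell
# Show negotiated speed/duplex for each physical NIC in the bond
# (replace eno1/eno2 with the real interface names).
for nic in eno1 eno2; do
    echo "== $nic =="
    ethtool "$nic" | grep -E 'Speed|Duplex'
done

# Inspect LACP state and per-slave speeds of the bond itself
cat /proc/net/bonding/bond0
```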
 
We have rebuilt the PBS a couple of times during testing and have also used more than one node for testing. The PVEs are all the same.

To clarify, Jupiter is now running standalone as a PVE host. pbs-primary is the SSD backed PBS as above. Mercury is one of our production cluster nodes.

Network on the hosts is definitely 10G (LACP 2 x 10G), and the PBS is 40G (LACP 2 x 40G).

iperf and SCP demonstrate this; we are not accidentally seeing a 1Gbit link anywhere.
 