PBS Performance improvements (enterprise all-flash)

tomstephens89

I have just replaced our backup server with a Dell machine running a pair of 8 core Xeon Gold 6144's with 512GB of RAM, and 24 x 12G 7.68TB SAS SSD's in a hardware RAID 10 on a Dell PERC H740P (I have tested ZFS as well via HBA mode).

The PBS box has a pair of 40Gb NIC's in a LACP to our Nexus rack switches running vPC.

Our proxmox hosts are a pair of 24 core Xeon Platinum 8268's with 1TB of RAM, with 8 x 4TB Data Centre SATA SSD's in a hardware RAID 10 on a Dell PERC H740P.

The proxmox boxes have 2 x 10Gb NIC's in a LACP to our Nexus rack switches running vPC.

No routing between the proxmox hosts and pbs, straight layer 2 connectivity in same VLAN.

Despite this hardware, backup and restore performance isn't what I was hoping for, netting what appears to be a consistent 200-300 MB/s. See the data below showing an SCP transfer able to pull 500 MB/s (which obviously includes encryption overhead), and an iperf run able to max out a single 10Gbit NIC in any of the bonds on our hosts.

I understand the CPU-bound TLS limit, but why am I not hitting it when running backups? Is it related to the chunk verify process, which the benchmark shows to be around 250 MB/s?
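(For reference, the verify and TLS figures mentioned above come from the built-in client benchmark; a minimal way to reproduce them, with the repository string as a placeholder to be replaced by your own user, host and datastore:)

Code:
# run the PBS client benchmark from a PVE host against a datastore
# (replace the repository placeholder with your own values)
proxmox-backup-client benchmark --repository root@pam@<pbs-host>:<datastore>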

[Three screenshots attached, showing the SCP transfer and iperf results referenced above]

IO Delay is sub 0.1% on all hosts, pretty much 0 all the time. Yet backup performance of a new VM looks like this:


Code:
INFO: scsi0: dirty-bitmap status: created new
INFO:   0% (620.0 MiB of 80.0 GiB) in 3s, read: 206.7 MiB/s, write: 170.7 MiB/s
INFO:   1% (1.0 GiB of 80.0 GiB) in 6s, read: 142.7 MiB/s, write: 142.7 MiB/s
INFO:   2% (1.7 GiB of 80.0 GiB) in 11s, read: 136.8 MiB/s, write: 136.8 MiB/s
INFO:   3% (2.5 GiB of 80.0 GiB) in 17s, read: 138.0 MiB/s, write: 138.0 MiB/s
INFO:   4% (3.3 GiB of 80.0 GiB) in 23s, read: 132.7 MiB/s, write: 132.7 MiB/s
INFO:   5% (4.1 GiB of 80.0 GiB) in 29s, read: 138.7 MiB/s, write: 138.7 MiB/s
INFO:   6% (4.9 GiB of 80.0 GiB) in 35s, read: 130.0 MiB/s, write: 130.0 MiB/s
INFO:   7% (5.6 GiB of 80.0 GiB) in 41s, read: 132.0 MiB/s, write: 132.0 MiB/s
INFO:   8% (6.5 GiB of 80.0 GiB) in 48s, read: 123.4 MiB/s, write: 123.4 MiB/s
INFO:   9% (7.3 GiB of 80.0 GiB) in 55s, read: 125.1 MiB/s, write: 125.1 MiB/s
INFO:  10% (8.1 GiB of 80.0 GiB) in 1m 1s, read: 129.3 MiB/s, write: 129.3 MiB/s
INFO:  11% (8.8 GiB of 80.0 GiB) in 1m 7s, read: 125.3 MiB/s, write: 125.3 MiB/s
INFO:  12% (9.7 GiB of 80.0 GiB) in 1m 14s, read: 128.6 MiB/s, write: 128.6 MiB/s
INFO:  13% (10.4 GiB of 80.0 GiB) in 1m 20s, read: 126.7 MiB/s, write: 126.7 MiB/s
INFO:  14% (11.3 GiB of 80.0 GiB) in 1m 27s, read: 119.4 MiB/s, write: 119.4 MiB/s
INFO:  15% (12.1 GiB of 80.0 GiB) in 1m 33s, read: 136.7 MiB/s, write: 136.0 MiB/s
INFO:  16% (12.8 GiB of 80.0 GiB) in 1m 39s, read: 130.7 MiB/s, write: 130.7 MiB/s
INFO:  17% (13.7 GiB of 80.0 GiB) in 1m 46s, read: 127.4 MiB/s, write: 127.4 MiB/s
INFO:  18% (14.5 GiB of 80.0 GiB) in 1m 52s, read: 130.0 MiB/s, write: 130.0 MiB/s
INFO:  19% (15.3 GiB of 80.0 GiB) in 1m 58s, read: 138.7 MiB/s, write: 138.7 MiB/s
INFO:  20% (16.1 GiB of 80.0 GiB) in 2m 4s, read: 134.7 MiB/s, write: 133.3 MiB/s
INFO:  21% (16.9 GiB of 80.0 GiB) in 2m 10s, read: 141.3 MiB/s, write: 141.3 MiB/s
INFO:  22% (17.7 GiB of 80.0 GiB) in 2m 16s, read: 144.7 MiB/s, write: 144.7 MiB/s
INFO:  23% (18.4 GiB of 80.0 GiB) in 2m 21s, read: 142.4 MiB/s, write: 142.4 MiB/s
INFO:  26% (20.9 GiB of 80.0 GiB) in 2m 24s, read: 844.0 MiB/s, write: 93.3 MiB/s
INFO:  27% (21.9 GiB of 80.0 GiB) in 2m 27s, read: 342.7 MiB/s, write: 117.3 MiB/s
INFO:  28% (22.5 GiB of 80.0 GiB) in 2m 31s, read: 142.0 MiB/s, write: 142.0 MiB/s
INFO:  29% (23.3 GiB of 80.0 GiB) in 2m 38s, read: 128.0 MiB/s, write: 126.9 MiB/s
INFO:  30% (24.0 GiB of 80.0 GiB) in 2m 43s, read: 140.0 MiB/s, write: 139.2 MiB/s
INFO:  31% (24.8 GiB of 80.0 GiB) in 2m 50s, read: 115.4 MiB/s, write: 115.4 MiB/s
INFO:  32% (25.7 GiB of 80.0 GiB) in 2m 57s, read: 132.0 MiB/s, write: 132.0 MiB/s
INFO:  33% (26.5 GiB of 80.0 GiB) in 3m 3s, read: 136.0 MiB/s, write: 136.0 MiB/s
INFO:  34% (27.2 GiB of 80.0 GiB) in 3m 9s, read: 123.3 MiB/s, write: 123.3 MiB/s
INFO:  35% (28.1 GiB of 80.0 GiB) in 3m 16s, read: 124.6 MiB/s, write: 124.6 MiB/s
INFO:  36% (28.8 GiB of 80.0 GiB) in 3m 22s, read: 124.7 MiB/s, write: 124.7 MiB/s
INFO:  37% (29.7 GiB of 80.0 GiB) in 3m 28s, read: 150.7 MiB/s, write: 150.7 MiB/s
INFO:  38% (30.5 GiB of 80.0 GiB) in 3m 35s, read: 119.4 MiB/s, write: 119.4 MiB/s
INFO:  40% (32.3 GiB of 80.0 GiB) in 3m 38s, read: 598.7 MiB/s, write: 114.7 MiB/s
INFO:  41% (32.9 GiB of 80.0 GiB) in 3m 43s, read: 128.8 MiB/s, write: 128.8 MiB/s
INFO:  42% (33.7 GiB of 80.0 GiB) in 3m 48s, read: 161.6 MiB/s, write: 161.6 MiB/s
INFO:  43% (34.4 GiB of 80.0 GiB) in 3m 54s, read: 127.3 MiB/s, write: 127.3 MiB/s
INFO:  44% (35.3 GiB of 80.0 GiB) in 4m 1s, read: 121.7 MiB/s, write: 121.7 MiB/s
INFO:  45% (36.0 GiB of 80.0 GiB) in 4m 7s, read: 129.3 MiB/s, write: 129.3 MiB/s
INFO:  46% (36.8 GiB of 80.0 GiB) in 4m 13s, read: 134.7 MiB/s, write: 134.7 MiB/s
INFO:  47% (37.7 GiB of 80.0 GiB) in 4m 19s, read: 148.0 MiB/s, write: 148.0 MiB/s
INFO:  48% (38.4 GiB of 80.0 GiB) in 4m 25s, read: 127.3 MiB/s, write: 127.3 MiB/s
INFO:  49% (39.3 GiB of 80.0 GiB) in 4m 31s, read: 148.0 MiB/s, write: 148.0 MiB/s
INFO:  50% (40.0 GiB of 80.0 GiB) in 4m 37s, read: 124.0 MiB/s, write: 124.0 MiB/s
INFO:  51% (40.9 GiB of 80.0 GiB) in 4m 41s, read: 220.0 MiB/s, write: 119.0 MiB/s
INFO:  52% (41.7 GiB of 80.0 GiB) in 4m 47s, read: 144.7 MiB/s, write: 140.0 MiB/s
INFO:  53% (42.5 GiB of 80.0 GiB) in 4m 52s, read: 150.4 MiB/s, write: 150.4 MiB/s
INFO:  54% (43.3 GiB of 80.0 GiB) in 4m 59s, read: 122.9 MiB/s, write: 122.9 MiB/s
INFO:  55% (44.0 GiB of 80.0 GiB) in 5m 5s, read: 128.0 MiB/s, write: 128.0 MiB/s
INFO:  56% (44.9 GiB of 80.0 GiB) in 5m 11s, read: 143.3 MiB/s, write: 142.0 MiB/s
INFO:  57% (45.6 GiB of 80.0 GiB) in 5m 17s, read: 123.3 MiB/s, write: 123.3 MiB/s
INFO:  58% (46.5 GiB of 80.0 GiB) in 5m 24s, read: 128.0 MiB/s, write: 128.0 MiB/s
INFO:  59% (47.3 GiB of 80.0 GiB) in 5m 30s, read: 140.0 MiB/s, write: 132.7 MiB/s
INFO:  60% (48.2 GiB of 80.0 GiB) in 5m 36s, read: 145.3 MiB/s, write: 124.7 MiB/s
INFO:  61% (48.9 GiB of 80.0 GiB) in 5m 41s, read: 145.6 MiB/s, write: 144.8 MiB/s
INFO:  62% (49.7 GiB of 80.0 GiB) in 5m 48s, read: 121.7 MiB/s, write: 121.7 MiB/s
INFO:  63% (50.4 GiB of 80.0 GiB) in 5m 54s, read: 122.0 MiB/s, write: 122.0 MiB/s
INFO:  64% (51.3 GiB of 80.0 GiB) in 6m 1s, read: 124.0 MiB/s, write: 124.0 MiB/s
INFO:  65% (52.0 GiB of 80.0 GiB) in 6m 7s, read: 127.3 MiB/s, write: 127.3 MiB/s
INFO:  66% (52.9 GiB of 80.0 GiB) in 6m 14s, read: 125.1 MiB/s, write: 125.1 MiB/s
INFO:  67% (53.7 GiB of 80.0 GiB) in 6m 20s, read: 144.7 MiB/s, write: 144.7 MiB/s
INFO:  68% (54.6 GiB of 80.0 GiB) in 6m 24s, read: 221.0 MiB/s, write: 165.0 MiB/s
INFO:  69% (55.2 GiB of 80.0 GiB) in 6m 29s, read: 132.8 MiB/s, write: 132.8 MiB/s
INFO:  73% (58.7 GiB of 80.0 GiB) in 6m 34s, read: 708.8 MiB/s, write: 133.6 MiB/s
INFO:  89% (71.3 GiB of 80.0 GiB) in 6m 37s, read: 4.2 GiB/s, write: 0 B/s
INFO: 100% (80.0 GiB of 80.0 GiB) in 6m 40s, read: 2.9 GiB/s, write: 1.3 MiB/s
INFO: Waiting for server to finish backup validation...
INFO: backup is sparse: 29.32 GiB (36%) total zero data
INFO: backup was done incrementally, reused 29.35 GiB (36%)
INFO: transferred 80.00 GiB in 401 seconds (204.3 MiB/s)

Why so slow? Where can I look for a bottleneck?

Here is a comprehensive set of disk benchmarks on the Proxmox nodes:

/dev/sda3 is the main RAID 10 on the PVE nodes:

Code:
1) fio --ioengine=libaio --direct=1 --sync=1 --rw=read --bs=4k --numjobs=1 --iodepth=1 --runtime=30 --time_based --buffered=0 --name XXX --filename=/dev/sda3
2) fio --ioengine=libaio --direct=1 --sync=1 --rw=randread --bs=4k --numjobs=1 --iodepth=1 --runtime=30 --time_based --buffered=0 --name XXX --filename=/dev/sda3
3) fio --ioengine=libaio --direct=1 --sync=1 --rw=randread --bs=4k --numjobs=1 --iodepth=8 --runtime=30 --time_based --buffered=0 --name XXX --filename=/dev/sda3
4) fio --ioengine=libaio --direct=1 --sync=1 --rw=randread --bs=4k --numjobs=1 --iodepth=64 --runtime=30 --time_based --buffered=0 --name XXX --filename=/dev/sda3
5) fio --ioengine=libaio --direct=1 --sync=1 --rw=randread --bs=4k --numjobs=1 --iodepth=256 --runtime=30 --time_based --buffered=0 --name XXX --filename=/dev/sda3
6) fio --ioengine=libaio --direct=1 --sync=1 --rw=read --bs=1M --numjobs=1 --iodepth=1 --runtime=30 --time_based --buffered=0 --name XXX --filename=/dev/sda3
7) fio --ioengine=libaio --direct=1 --sync=1 --rw=read --bs=4M --numjobs=1 --iodepth=1 --runtime=30 --time_based --buffered=0 --name XXX --filename=/dev/sda3
8) fio --ioengine=libaio --direct=1 --sync=1 --rw=write --bs=4k --numjobs=1 --iodepth=1 --runtime=30 --time_based --buffered=0 --name XXX --filename=/dev/sda3
9) fio --ioengine=libaio --direct=1 --sync=1 --rw=randwrite --bs=4k --numjobs=1 --iodepth=1 --runtime=30 --time_based --buffered=0 --name XXX --filename=/dev/sda3
10) fio --ioengine=libaio --direct=1 --sync=1 --rw=randwrite --bs=4k --numjobs=1 --iodepth=8 --runtime=30 --time_based --buffered=0 --name XXX --filename=/dev/sda3
11) fio --ioengine=libaio --direct=1 --sync=1 --rw=randwrite --bs=4k --numjobs=1 --iodepth=64 --runtime=30 --time_based --buffered=0 --name XXX --filename=/dev/sda3
12) fio --ioengine=libaio --direct=1 --sync=1 --rw=randwrite --bs=4k --numjobs=1 --iodepth=256 --runtime=30 --time_based --buffered=0 --name XXX --filename=/dev/sda3
13) fio --ioengine=libaio --direct=1 --sync=1 --rw=write --bs=1M --numjobs=1 --iodepth=1 --runtime=30 --time_based --buffered=0 --name XXX --filename=/dev/sda3
14) fio --ioengine=libaio --direct=1 --sync=1 --rw=write --bs=4M --numjobs=1 --iodepth=1 --runtime=30 --time_based --buffered=0 --name XXX --filename=/dev/sda3
15) fio --ioengine=libaio --direct=1 --sync=1 --randrepeat=1 --rw=randrw --rwmixread=75 --bs=4k --iodepth=64 --runtime=30 --time_based --buffered=0 --name XXX --filename=/dev/sda3


1) [r=147MiB/s][r=37.7k IOPS]
2) [r=97.0MiB/s][r=24.8k IOPS]
3) [r=297MiB/s][r=76.1k IOPS]
4) [r=482MiB/s][r=123k IOPS]
5) [r=507MiB/s][r=130k IOPS]
6) [r=2294MiB/s][r=2294 IOPS]
7) [r=1688MiB/s][r=422 IOPS]
8) [w=144MiB/s][w=36.8k IOPS]
9) [w=78.3MiB/s][w=20.0k IOPS]
10) [w=129MiB/s][w=33.0k IOPS]
11) [w=142MiB/s][w=36.5k IOPS]
12) [w=141MiB/s][w=36.0k IOPS]
13) [w=2017MiB/s][w=2017 IOPS]
14) [w=2016MiB/s][w=504 IOPS]
15) [r=284MiB/s,w=94.6MiB/s][r=72.6k,w=24.2k IOPS]
 
All my investigation so far appears to indicate that the chunk verify performance, as shown by the benchmark, could be the limiting factor here.

If I run large datastore-level verify jobs on a schedule, is verification on every backup needed, and how can it be disabled? Why does the client need to do a verify if the backup server itself can do this on a schedule?
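If the per-backup verification turns out to be the datastore's "Verify New Snapshots" option, a rough sketch of turning it off from the CLI would look like the following; the verify-new flag name is an assumption based on the datastore config key, so check the datastore's Options tab in the GUI first:

Code:
# hedged sketch: stop PBS from verifying freshly written snapshots automatically
# (assumes the option is exposed as verify-new; datastore name taken from this thread)
proxmox-backup-manager datastore update rdg-pbs-primary --verify-new false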
 
Further testing reveals no disk bottleneck at the source, no disk bottleneck at the PBS, iperf able to saturate the network, and SCP able to exceed 500 MB/s...

Yet backups average around 200 MB/s and restores are much, much slower.

Anyone care to shed any light on this? My hardware and network are good but performance is terrible. Why?
 
Can you provide the output of the following commands?

PVE:
Code:
pveversion -v
lsblk
cat /etc/pve/storage.cfg

PBS:
Code:
proxmox-backup-manager versions --verbose
lsblk
cat /etc/proxmox-backup/datastore.cfg
cat /etc/proxmox-backup/verification.cfg

It looks like the source (PVE) is not the limiting factor here:
Code:
INFO:  23% (18.4 GiB of 80.0 GiB) in 2m 21s, read: 142.4 MiB/s, write: 142.4 MiB/s
INFO:  26% (20.9 GiB of 80.0 GiB) in 2m 24s, read: 844.0 MiB/s, write: 93.3 MiB/s
INFO:  27% (21.9 GiB of 80.0 GiB) in 2m 27s, read: 342.7 MiB/s, write: 117.3 MiB/s
When data is sparse and it doesn't have to write much data, the read speed increases a lot.

Have you benchmarked the storage on the PBS as well?
Please note that `fio` is destructive when pointed at a raw device; don't specify a disk directly, but rather a file on the filesystem.
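A non-destructive variant of the earlier fio runs would point --filename at a file on the mounted datastore instead of the block device; a minimal sketch, with the path and size as placeholders:

Code:
# hedged example: 1M sequential write against a file on the datastore mount
# --size pre-allocates the test file; remove it again afterwards
fio --ioengine=libaio --direct=1 --rw=write --bs=1M --numjobs=1 --iodepth=1 \
    --size=10G --runtime=30 --time_based --name=seqwrite \
    --filename=/mnt/datastore/<store>/fio-testfile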
 

Latest builds all round; the PBS was installed 2 days ago and the PVE hosts a week ago.

Yes, I am aware of the sparse-data effect, but when it has actual data to copy it maxes out around 200 MB/s, frequently less.

Benchmarks on the PBS storage with fio are even better. It's a 24-SSD RAID 10 array, hardware RAID, no ZFS. I have also edited my fio tests to target files rather than disks; I learnt that the hard way after destroying the source node with those benchmarks.

PVE:

Code:
root@jupiter:~# pveversion -v
proxmox-ve: 8.2.0 (running kernel: 6.8.8-2-pve)
pve-manager: 8.2.4 (running version: 8.2.4/faa83925c9641325)
proxmox-kernel-helper: 8.1.0
proxmox-kernel-6.8: 6.8.8-2
proxmox-kernel-6.8.8-2-pve-signed: 6.8.8-2
proxmox-kernel-6.8.4-2-pve-signed: 6.8.4-2
ceph-fuse: 17.2.7-pve3
corosync: 3.1.7-pve3
criu: 3.17.1-2
glusterfs-client: 10.3-5
ifupdown2: 3.2.0-1+pmx8
ksm-control-daemon: 1.5-1
libjs-extjs: 7.0.0-4
libknet1: 1.28-pve1
libproxmox-acme-perl: 1.5.1
libproxmox-backup-qemu0: 1.4.1
libproxmox-rs-perl: 0.3.3
libpve-access-control: 8.1.4
libpve-apiclient-perl: 3.3.2
libpve-cluster-api-perl: 8.0.7
libpve-cluster-perl: 8.0.7
libpve-common-perl: 8.2.1
libpve-guest-common-perl: 5.1.3
libpve-http-server-perl: 5.1.0
libpve-network-perl: 0.9.8
libpve-rs-perl: 0.8.9
libpve-storage-perl: 8.2.3
libspice-server1: 0.15.1-1
lvm2: 2.03.16-2
lxc-pve: 6.0.0-1
lxcfs: 6.0.0-pve2
novnc-pve: 1.4.0-3
openvswitch-switch: 3.1.0-2+deb12u1
proxmox-backup-client: 3.2.4-1
proxmox-backup-file-restore: 3.2.4-1
proxmox-firewall: 0.4.2
proxmox-kernel-helper: 8.1.0
proxmox-mail-forward: 0.2.3
proxmox-mini-journalreader: 1.4.0
proxmox-offline-mirror-helper: 0.6.6
proxmox-widget-toolkit: 4.2.3
pve-cluster: 8.0.7
pve-container: 5.1.12
pve-docs: 8.2.2
pve-edk2-firmware: 4.2023.08-4
pve-esxi-import-tools: 0.7.1
pve-firewall: 5.0.7
pve-firmware: 3.12-1
pve-ha-manager: 4.0.5
pve-i18n: 3.2.2
pve-qemu-kvm: 9.0.0-3
pve-xtermjs: 5.3.0-3
qemu-server: 8.2.1
smartmontools: 7.3-pve1
spiceterm: 3.3.0
swtpm: 0.8.0+pve1
vncterm: 1.8.0
zfsutils-linux: 2.2.4-pve1

Code:
root@jupiter:~# lsblk
NAME                         MAJ:MIN RM  SIZE RO TYPE MOUNTPOINTS
sda                            8:0    0   14T  0 disk
├─sda1                         8:1    0 1007K  0 part
├─sda2                         8:2    0    1G  0 part /boot/efi
└─sda3                         8:3    0   14T  0 part
  ├─pve-swap                 252:0    0    8G  0 lvm  [SWAP]
  ├─pve-root                 252:1    0   96G  0 lvm  /
  ├─pve-data_tmeta           252:2    0 15.9G  0 lvm 
  │ └─pve-data-tpool         252:4    0 13.8T  0 lvm 
  │   ├─pve-data             252:5    0 13.8T  1 lvm 
  │   ├─pve-vm--100--disk--0 252:6    0    4M  0 lvm 
  │   ├─pve-vm--100--disk--1 252:7    0  100G  0 lvm 
  │   ├─pve-vm--100--disk--2 252:8    0    4M  0 lvm 
  │   ├─pve-vm--101--disk--0 252:9    0  120G  0 lvm 
  │   └─pve-vm--101--disk--1 252:10   0    4M  0 lvm 
  └─pve-data_tdata           252:3    0 13.8T  0 lvm 
    └─pve-data-tpool         252:4    0 13.8T  0 lvm 
      ├─pve-data             252:5    0 13.8T  1 lvm 
      ├─pve-vm--100--disk--0 252:6    0    4M  0 lvm 
      ├─pve-vm--100--disk--1 252:7    0  100G  0 lvm 
      ├─pve-vm--100--disk--2 252:8    0    4M  0 lvm 
      ├─pve-vm--101--disk--0 252:9    0  120G  0 lvm 
      └─pve-vm--101--disk--1 252:10   0    4M  0 lvm 
sr0                           11:0    1 1024M  0 rom

Code:
root@jupiter:~# cat /etc/pve/storage.cfg
dir: local
        path /var/lib/vz
        content backup,vztmpl,iso

lvmthin: local-lvm
        thinpool data
        vgname pve
        content rootdir,images

pbs: pbs-ssd-onhand
        datastore rdg-pbs-primary
        server 10.226.10.10
        content backup
        fingerprint 21:ce:cf:10:64:d5:5f:95:30:c8:43:7b:9a:e1:f9:d4:70:48:6c:82:f9:93:19:31:a7:b5:b7:c8:72:2f:03:86
        namespace onhand
        prune-backups keep-all=1
        username root@pam

pbs: pbs-ssd-testing
        datastore rdg-pbs-primary
        server 10.226.10.10
        content backup
        fingerprint 21:ce:cf:10:64:d5:5f:95:30:c8:43:7b:9a:e1:f9:d4:70:48:6c:82:f9:93:19:31:a7:b5:b7:c8:72:2f:03:86
        namespace testing
        prune-backups keep-all=1
        username root@pam

PBS:

Code:
root@pbs-primary:~# proxmox-backup-manager versions --verbose
proxmox-backup                    3.2.0        running kernel: 6.8.4-2-pve
proxmox-backup-server             3.2.6-1      running version: 3.2.2     
proxmox-kernel-helper             8.1.0                                   
proxmox-kernel-6.8                6.8.8-2                                 
proxmox-kernel-6.8.4-2-pve-signed 6.8.4-2                                 
ifupdown2                         3.2.0-1+pmx8                           
libjs-extjs                       7.0.0-4                                 
proxmox-backup-docs               3.2.6-1                                 
proxmox-backup-client             3.2.6-1                                 
proxmox-mail-forward              0.2.3                                   
proxmox-mini-journalreader        1.4.0                                   
proxmox-offline-mirror-helper     0.6.6                                   
proxmox-widget-toolkit            4.2.3                                   
pve-xtermjs                       5.3.0-3                                 
smartmontools                     7.3-pve1                               
zfsutils-linux                    2.2.4-pve1

Code:
root@pbs-primary:~# lsblk
NAME         MAJ:MIN RM   SIZE RO TYPE MOUNTPOINTS
sda            8:0    0 465.3G  0 disk
├─sda1         8:1    0  1007K  0 part
├─sda2         8:2    0     1G  0 part /boot/efi
└─sda3         8:3    0 464.2G  0 part
  ├─pbs-swap 252:0    0     8G  0 lvm  [SWAP]
  └─pbs-root 252:1    0 440.2G  0 lvm  /
sdb            8:16   0  83.8T  0 disk
└─sdb1         8:17   0  83.8T  0 part /mnt/datastore/rdg-pbs-primary

Code:
root@pbs-primary:~# cat /etc/proxmox-backup/datastore.cfg
datastore: rdg-pbs-primary
        gc-schedule mon 18:15
        path /mnt/datastore/rdg-pbs-primary

Code:
root@pbs-primary:~# cat /etc/proxmox-backup/verification.cfg
verification: v-c04f2496-b1f2
        ignore-verified true
        ns
        outdated-after 30
        schedule 03:00
        store rdg-pbs-primary
 
Thank you for the additional information!

The names used for the hosts here seem to be different than in the screenshots:
Code:
mercury -> jupiter
ssdpbs -> pbs-primary

Which PVE host was used for the backup (backup task log above)?
Which IP does that host use?

The write speeds in the backup task log could still hint at a network bottleneck, since they would match a 1Gbit/s network rather than a 10Gbit/s one.
 
We have rebuilt the PBS a couple of times during testing and also used more than one node for testing. The PVE's are all the same.

To clarify, Jupiter is now running standalone as a PVE host. pbs-primary is the SSD backed PBS as above. Mercury is one of our production cluster nodes.

Network on the hosts is definitely 10G (LACP 2 x 10G); the PBS is 40G (LACP 2 x 40G).

iperf and SCP demonstrate this; we are not accidentally seeing a 1Gbit link anywhere.
 
Set recordsize to 1M on the backup server.
It should give you a huge performance boost.

xattr=sa (I would recommend this too)
dnodesize=auto (only recommended if set from the beginning, not afterwards once there is already a lot of data on the pool)

I reach 800 MB/s - 1 GB/s backup speeds here with ordinary SAS drives.

Everything above 1 GB/s is hard-limited by the compression pipeline itself (no matter which compression or tuning options):
https://bugzilla.proxmox.com/show_bug.cgi?id=5481

Cheers
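If the datastore sits on a ZFS dataset, the properties above would be applied roughly like this; the dataset name is a placeholder, and dnodesize=auto only affects data written after the change:

Code:
# hedged sketch: ZFS tuning for a PBS datastore dataset
zfs set recordsize=1M tank/pbs-datastore
zfs set xattr=sa tank/pbs-datastore
zfs set dnodesize=auto tank/pbs-datastore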
 

I have just set recordsize to 1M on my main datastore. Luckily we rebuilt the PBS yesterday to test ZFS, as everything here so far has been with hardware RAID 10.

However, even after setting this recordsize, there is no difference in backup performance. I can't get past 200 MB/s.

As you can see, iperf is able to saturate my 2 x 10G LACP with no issues:

Code:
root@jupiter:~# iperf -c 10.226.10.10 -P 2
------------------------------------------------------------
Client connecting to 10.226.10.10, TCP port 5001
TCP window size: 16.0 KByte (default)
------------------------------------------------------------
[ 2] local 10.226.10.21 port 51372 connected with 10.226.10.10 port 5001 (icwnd/mss/irtt=14/1448/448)
[ 1] local 10.226.10.21 port 51378 connected with 10.226.10.10 port 5001 (icwnd/mss/irtt=14/1448/417)
[ ID] Interval Transfer Bandwidth
[ 1] 0.0000-10.0073 sec 10.9 GBytes 9.36 Gbits/sec
[ 2] 0.0000-10.0075 sec 10.9 GBytes 9.34 Gbits/sec
[SUM] 0.0000-10.0008 sec 21.8 GBytes 18.7 Gbits/sec

PBS is performing as if I had barely more than a 1Gbit/s connection...
 
Can you test with "sync level: none" in PBS under Datastore -> Your Storage -> Options -> Tuning Options?
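The same option can presumably also be set from the CLI; a sketch, assuming the tuning key is sync-level as described in the PBS documentation, with the datastore name as a placeholder:

Code:
# hedged sketch: set the datastore sync level to "none"
proxmox-backup-manager datastore update <datastore> --tuning 'sync-level=none'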


Code:
INFO: starting new backup job: vzdump 150 166 135 --notes-template '{{guestname}}' --quiet 1 --prune-backups 'keep-all=1' --fleecing 0 --mode snapshot --notification-mode notification-system --storage Backup-SAS
INFO: skip external VMs: 150
INFO: Starting Backup of VM 135 (qemu)
INFO: Backup started at 2024-07-07 01:20:04
INFO: status = running
INFO: VM Name: smdb
INFO: include disk 'virtio0' 'Storage-Default:vm-135-disk-0' 30G
INFO: include disk 'virtio1' 'Storage-Default:vm-135-disk-1' 600G
INFO: backup mode: snapshot
INFO: ionice priority: 7
INFO: creating Proxmox Backup Server archive 'vm/135/2024-07-06T23:20:04Z'
INFO: issuing guest-agent 'fs-freeze' command
INFO: issuing guest-agent 'fs-thaw' command
INFO: started backup task '039511c7-e862-4e9f-aadf-cd738a583a5f'
INFO: resuming VM again
INFO: virtio0: dirty-bitmap status: OK (596.0 MiB of 30.0 GiB dirty)
INFO: virtio1: dirty-bitmap status: OK (73.4 GiB of 600.0 GiB dirty)
INFO: using fast incremental mode (dirty-bitmap), 74.0 GiB dirty of 630.0 GiB total
INFO:   3% (2.6 GiB of 74.0 GiB) in 3s, read: 881.3 MiB/s, write: 872.0 MiB/s
INFO:   6% (4.8 GiB of 74.0 GiB) in 6s, read: 749.3 MiB/s, write: 749.3 MiB/s
INFO:   9% (7.0 GiB of 74.0 GiB) in 9s, read: 745.3 MiB/s, write: 745.3 MiB/s
INFO:  12% (9.2 GiB of 74.0 GiB) in 12s, read: 757.3 MiB/s, write: 757.3 MiB/s
INFO:  15% (11.2 GiB of 74.0 GiB) in 15s, read: 685.3 MiB/s, write: 685.3 MiB/s
INFO:  17% (13.3 GiB of 74.0 GiB) in 18s, read: 714.7 MiB/s, write: 708.0 MiB/s
INFO:  20% (15.3 GiB of 74.0 GiB) in 21s, read: 676.0 MiB/s, write: 669.3 MiB/s
INFO:  23% (17.3 GiB of 74.0 GiB) in 24s, read: 702.7 MiB/s, write: 696.0 MiB/s
INFO:  26% (19.4 GiB of 74.0 GiB) in 28s, read: 533.0 MiB/s, write: 531.0 MiB/s
INFO:  29% (21.7 GiB of 74.0 GiB) in 31s, read: 789.3 MiB/s, write: 780.0 MiB/s
INFO:  32% (24.3 GiB of 74.0 GiB) in 34s, read: 870.7 MiB/s, write: 861.3 MiB/s
INFO:  36% (26.8 GiB of 74.0 GiB) in 37s, read: 865.3 MiB/s, write: 861.3 MiB/s
INFO:  39% (29.3 GiB of 74.0 GiB) in 40s, read: 868.0 MiB/s, write: 868.0 MiB/s
INFO:  43% (31.9 GiB of 74.0 GiB) in 43s, read: 889.3 MiB/s, write: 888.0 MiB/s
INFO:  45% (33.7 GiB of 74.0 GiB) in 46s, read: 606.7 MiB/s, write: 606.7 MiB/s
INFO:  49% (36.4 GiB of 74.0 GiB) in 49s, read: 904.0 MiB/s, write: 904.0 MiB/s
INFO:  52% (39.0 GiB of 74.0 GiB) in 52s, read: 886.7 MiB/s, write: 886.7 MiB/s
INFO:  56% (41.6 GiB of 74.0 GiB) in 55s, read: 894.7 MiB/s, write: 894.7 MiB/s
INFO:  59% (44.4 GiB of 74.0 GiB) in 58s, read: 956.0 MiB/s, write: 956.0 MiB/s
INFO:  63% (47.2 GiB of 74.0 GiB) in 1m 1s, read: 972.0 MiB/s, write: 972.0 MiB/s
INFO:  67% (49.6 GiB of 74.0 GiB) in 1m 4s, read: 805.3 MiB/s, write: 805.3 MiB/s
INFO:  70% (52.2 GiB of 74.0 GiB) in 1m 7s, read: 889.3 MiB/s, write: 889.3 MiB/s
INFO:  74% (54.9 GiB of 74.0 GiB) in 1m 10s, read: 921.3 MiB/s, write: 921.3 MiB/s
INFO:  77% (57.5 GiB of 74.0 GiB) in 1m 13s, read: 873.3 MiB/s, write: 873.3 MiB/s
INFO:  81% (60.0 GiB of 74.0 GiB) in 1m 16s, read: 864.0 MiB/s, write: 864.0 MiB/s
INFO:  84% (62.6 GiB of 74.0 GiB) in 1m 19s, read: 897.3 MiB/s, write: 897.3 MiB/s
INFO:  87% (64.9 GiB of 74.0 GiB) in 1m 22s, read: 792.0 MiB/s, write: 792.0 MiB/s
INFO:  90% (67.3 GiB of 74.0 GiB) in 1m 25s, read: 800.0 MiB/s, write: 800.0 MiB/s
INFO:  94% (69.6 GiB of 74.0 GiB) in 1m 28s, read: 800.0 MiB/s, write: 800.0 MiB/s
INFO:  97% (72.1 GiB of 74.0 GiB) in 1m 31s, read: 832.0 MiB/s, write: 832.0 MiB/s
INFO: 100% (74.0 GiB of 74.0 GiB) in 1m 34s, read: 668.0 MiB/s, write: 666.7 MiB/s
INFO: backup was done incrementally, reused 556.14 GiB (88%)
INFO: transferred 74.03 GiB in 94 seconds (806.4 MiB/s)
INFO: adding notes to backup
INFO: Finished Backup of VM 135 (00:01:34)
INFO: Backup finished at 2024-07-07 01:21:38
INFO: Starting Backup of VM 166 (qemu)
INFO: Backup started at 2024-07-07 01:21:38
INFO: status = running
INFO: VM Name: Datev
INFO: include disk 'virtio0' 'Storage-Default:vm-166-disk-0' 1T
INFO: backup mode: snapshot
INFO: ionice priority: 7
INFO: creating Proxmox Backup Server archive 'vm/166/2024-07-06T23:21:38Z'
INFO: issuing guest-agent 'fs-freeze' command
INFO: issuing guest-agent 'fs-thaw' command
INFO: started backup task '6b9de0e4-f62f-43ce-8239-3c90cea75adc'
INFO: resuming VM again
INFO: virtio0: dirty-bitmap status: OK (278.3 GiB of 1.0 TiB dirty)
INFO: using fast incremental mode (dirty-bitmap), 278.3 GiB dirty of 1.0 TiB total
INFO:   1% (2.9 GiB of 278.3 GiB) in 3s, read: 994.7 MiB/s, write: 994.7 MiB/s
INFO:   2% (6.3 GiB of 278.3 GiB) in 7s, read: 865.0 MiB/s, write: 865.0 MiB/s
INFO:   3% (9.2 GiB of 278.3 GiB) in 10s, read: 982.7 MiB/s, write: 982.7 MiB/s
INFO:   4% (11.8 GiB of 278.3 GiB) in 13s, read: 904.0 MiB/s, write: 904.0 MiB/s
INFO:   5% (14.5 GiB of 278.3 GiB) in 16s, read: 904.0 MiB/s, write: 904.0 MiB/s
INFO:   8% (23.3 GiB of 278.3 GiB) in 20s, read: 2.2 GiB/s, write: 503.0 MiB/s
INFO:  18% (50.1 GiB of 278.3 GiB) in 23s, read: 8.9 GiB/s, write: 0 B/s
INFO:  27% (77.2 GiB of 278.3 GiB) in 26s, read: 9.0 GiB/s, write: 1.3 MiB/s
INFO:  37% (104.1 GiB of 278.3 GiB) in 29s, read: 9.0 GiB/s, write: 6.7 MiB/s
INFO:  47% (131.0 GiB of 278.3 GiB) in 32s, read: 9.0 GiB/s, write: 0 B/s
INFO:  56% (155.9 GiB of 278.3 GiB) in 35s, read: 8.3 GiB/s, write: 60.0 MiB/s
INFO:  57% (159.1 GiB of 278.3 GiB) in 38s, read: 1.0 GiB/s, write: 790.7 MiB/s
INFO:  58% (162.8 GiB of 278.3 GiB) in 41s, read: 1.3 GiB/s, write: 644.0 MiB/s
INFO:  59% (166.2 GiB of 278.3 GiB) in 44s, read: 1.1 GiB/s, write: 712.0 MiB/s
INFO:  60% (169.1 GiB of 278.3 GiB) in 48s, read: 739.0 MiB/s, write: 609.0 MiB/s
INFO:  62% (172.7 GiB of 278.3 GiB) in 51s, read: 1.2 GiB/s, write: 632.0 MiB/s
INFO:  63% (176.4 GiB of 278.3 GiB) in 54s, read: 1.2 GiB/s, write: 646.7 MiB/s
INFO:  64% (179.7 GiB of 278.3 GiB) in 57s, read: 1.1 GiB/s, write: 745.3 MiB/s
INFO:  65% (183.4 GiB of 278.3 GiB) in 1m, read: 1.2 GiB/s, write: 632.0 MiB/s
INFO:  67% (186.8 GiB of 278.3 GiB) in 1m 3s, read: 1.1 GiB/s, write: 628.0 MiB/s
INFO:  68% (189.8 GiB of 278.3 GiB) in 1m 6s, read: 1012.0 MiB/s, write: 680.0 MiB/s
INFO:  69% (192.3 GiB of 278.3 GiB) in 1m 9s, read: 850.7 MiB/s, write: 846.7 MiB/s
INFO:  70% (195.7 GiB of 278.3 GiB) in 1m 13s, read: 880.0 MiB/s, write: 880.0 MiB/s
INFO:  71% (198.2 GiB of 278.3 GiB) in 1m 16s, read: 856.0 MiB/s, write: 856.0 MiB/s
INFO:  72% (201.1 GiB of 278.3 GiB) in 1m 19s, read: 976.0 MiB/s, write: 869.3 MiB/s
INFO:  73% (203.8 GiB of 278.3 GiB) in 1m 22s, read: 926.7 MiB/s, write: 926.7 MiB/s
INFO:  74% (206.6 GiB of 278.3 GiB) in 1m 25s, read: 954.7 MiB/s, write: 941.3 MiB/s
INFO:  75% (209.5 GiB of 278.3 GiB) in 1m 28s, read: 1001.3 MiB/s, write: 937.3 MiB/s
INFO:  76% (212.8 GiB of 278.3 GiB) in 1m 31s, read: 1.1 GiB/s, write: 1.1 GiB/s
INFO:  77% (215.1 GiB of 278.3 GiB) in 1m 34s, read: 773.3 MiB/s, write: 694.7 MiB/s
INFO:  78% (218.1 GiB of 278.3 GiB) in 1m 37s, read: 1013.3 MiB/s, write: 613.3 MiB/s
INFO:  79% (222.2 GiB of 278.3 GiB) in 1m 40s, read: 1.4 GiB/s, write: 465.3 MiB/s
INFO:  81% (225.5 GiB of 278.3 GiB) in 1m 43s, read: 1.1 GiB/s, write: 824.0 MiB/s
INFO:  82% (228.3 GiB of 278.3 GiB) in 1m 46s, read: 956.0 MiB/s, write: 956.0 MiB/s
INFO:  83% (231.0 GiB of 278.3 GiB) in 1m 49s, read: 926.7 MiB/s, write: 918.7 MiB/s
INFO:  84% (234.8 GiB of 278.3 GiB) in 1m 52s, read: 1.3 GiB/s, write: 606.7 MiB/s
INFO:  85% (238.6 GiB of 278.3 GiB) in 1m 55s, read: 1.3 GiB/s, write: 588.0 MiB/s
INFO:  87% (242.2 GiB of 278.3 GiB) in 1m 58s, read: 1.2 GiB/s, write: 620.0 MiB/s
INFO:  88% (245.8 GiB of 278.3 GiB) in 2m 1s, read: 1.2 GiB/s, write: 616.0 MiB/s
INFO:  89% (249.5 GiB of 278.3 GiB) in 2m 4s, read: 1.2 GiB/s, write: 554.7 MiB/s
INFO:  91% (253.2 GiB of 278.3 GiB) in 2m 7s, read: 1.3 GiB/s, write: 592.0 MiB/s
INFO:  92% (256.3 GiB of 278.3 GiB) in 2m 10s, read: 1.0 GiB/s, write: 697.3 MiB/s
INFO:  93% (259.6 GiB of 278.3 GiB) in 2m 14s, read: 844.0 MiB/s, write: 840.0 MiB/s
INFO:  94% (262.0 GiB of 278.3 GiB) in 2m 17s, read: 821.3 MiB/s, write: 821.3 MiB/s
INFO:  95% (264.4 GiB of 278.3 GiB) in 2m 20s, read: 828.0 MiB/s, write: 828.0 MiB/s
INFO:  96% (267.7 GiB of 278.3 GiB) in 2m 24s, read: 847.0 MiB/s, write: 847.0 MiB/s
INFO:  97% (270.5 GiB of 278.3 GiB) in 2m 27s, read: 945.3 MiB/s, write: 844.0 MiB/s
INFO:  98% (273.1 GiB of 278.3 GiB) in 2m 30s, read: 893.3 MiB/s, write: 893.3 MiB/s
INFO:  99% (275.8 GiB of 278.3 GiB) in 2m 33s, read: 913.3 MiB/s, write: 913.3 MiB/s
INFO: 100% (278.3 GiB of 278.3 GiB) in 2m 36s, read: 846.7 MiB/s, write: 846.7 MiB/s
INFO: backup is sparse: 138.86 GiB (49%) total zero data
INFO: backup was done incrementally, reused 917.42 GiB (89%)
INFO: transferred 278.29 GiB in 156 seconds (1.8 GiB/s)
INFO: adding notes to backup
INFO: Finished Backup of VM 166 (00:02:36)
INFO: Backup finished at 2024-07-07 01:24:14
INFO: Backup job finished successfully
TASK OK

Don't concentrate too much on your iperf test. I have at least 25Gb LACP on everything server-related here, PVE, backup server and everything else, and I cannot reach anywhere near those speeds, not even with migration.

The PVE host itself does all the backup work; the backup server doesn't do much, it just receives the data.
Check whether the storage where your VMs are located is fast enough.
If your VMs are on ZFS, don't forget that zvols are utterly slow, around 20% of the speed of a dataset/zpool on NVMe.
So test the disk speed inside a VM.

Cheers
 

What CPUs are in your PVE hosts?

I am able to get multi-GB/s read/write on both my PVE hosts and the PBS datastore. I am able to max out the bonded 10G network between them with iperf. I am able to get over 500 MB/s using SCP...

But Proxmox backups run 150-200 MB/s.

Changing the sync level to none makes no difference.

I am using hardware RAID 10 with a 1MB stripe on Dell H740P controllers. I have tried HBA mode and ZFS, which also makes no difference.
 
Would it be possible to try kernel 6.5 on both PVE and PBS at the same time?
If the performance improves, it may be due to a regression introduced between 6.5 and 6.8.

Do you have the latest BIOS and microcode update installed for your hosts?
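One possible way to try that, assuming the 6.5 kernel packages are still available in the repositories; the exact version string to pin is whatever the kernel list shows after installation:

Code:
# hedged sketch: install an older 6.5 kernel and pin it for subsequent boots
apt install proxmox-kernel-6.5
proxmox-boot-tool kernel list              # note the exact 6.5.x-y-pve version
proxmox-boot-tool kernel pin <6.5.x-y-pve>
reboot
# revert later with: proxmox-boot-tool kernel unpin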
 
Yeah, PBS speeds are very limiting.

About the CPU, you're right, I never thought it made so much of a difference.

With some cheap Silver 4210R CPUs I cannot break 200 MB/s either, but I never looked into those servers, as they are only front servers for OPNsense instances:
Code:
INFO: virtio0: dirty-bitmap status: OK (6.6 GiB of 120.0 GiB dirty)
INFO: using fast incremental mode (dirty-bitmap), 6.6 GiB dirty of 120.0 GiB total
INFO:   8% (584.0 MiB of 6.6 GiB) in 3s, read: 194.7 MiB/s, write: 190.7 MiB/s
INFO:  16% (1.1 GiB of 6.6 GiB) in 6s, read: 177.3 MiB/s, write: 157.3 MiB/s
INFO:  24% (1.6 GiB of 6.6 GiB) in 9s, read: 189.3 MiB/s, write: 156.0 MiB/s
INFO:  32% (2.2 GiB of 6.6 GiB) in 12s, read: 174.7 MiB/s, write: 148.0 MiB/s
INFO:  39% (2.6 GiB of 6.6 GiB) in 15s, read: 158.7 MiB/s, write: 149.3 MiB/s
INFO:  47% (3.2 GiB of 6.6 GiB) in 18s, read: 181.3 MiB/s, write: 156.0 MiB/s
INFO:  55% (3.6 GiB of 6.6 GiB) in 21s, read: 169.3 MiB/s, write: 162.7 MiB/s
INFO:  62% (4.1 GiB of 6.6 GiB) in 24s, read: 165.3 MiB/s, write: 162.7 MiB/s
INFO:  70% (4.7 GiB of 6.6 GiB) in 27s, read: 184.0 MiB/s, write: 162.7 MiB/s
INFO:  79% (5.2 GiB of 6.6 GiB) in 30s, read: 190.7 MiB/s, write: 168.0 MiB/s
INFO:  87% (5.8 GiB of 6.6 GiB) in 33s, read: 180.0 MiB/s, write: 157.3 MiB/s
INFO:  99% (6.6 GiB of 6.6 GiB) in 36s, read: 272.0 MiB/s, write: 166.7 MiB/s
INFO: 100% (6.6 GiB of 6.6 GiB) in 37s, read: 52.0 MiB/s, write: 40.0 MiB/s
INFO: backup is sparse: 404.00 MiB (5%) total zero data
INFO: backup was done incrementally, reused 114.29 GiB (95%)
INFO: transferred 6.61 GiB in 37 seconds (182.8 MiB/s)

The main servers (the speeds from the previous post) are based on an overclocked Genoa 9374F; even there I can't pass 1 GB/s with PBS.
For the network, as I mentioned already, everything is at least 25Gb in LACP.
 

What CPUs do you have in the servers that are getting close to 1 GB/s?
 
Only those two Genoa servers with the 9374F. Everywhere else I don't have a PBS that can keep up, or don't need a PBS, etc.

Some seriously newer CPUs than my Xeon Platinum 8268s, I see. That's a huge leap in performance.

I have just tested using a host with an EPYC 9354P in it and I get 400 MB/s there.

This smells like a massive single-thread performance limitation to me.
 
Out of interest, could you try doing a backup of a block device or raw image file using proxmox-backup-client, as opposed to via QEMU?

Basically the last example from https://pbs.proxmox.com/docs/backup-client.html#creating-backups (you probably want to set --repository and --backup-id as well).
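For reference, that example boils down to something like the following; the archive name and backup-id are made up here, and the device path and repository are taken from the configuration shown earlier in this thread:

Code:
# hedged sketch: back up a raw block device directly with proxmox-backup-client
proxmox-backup-client backup testdisk.img:/dev/mapper/pve-vm--100--disk--0 \
    --repository root@pam@10.226.10.10:rdg-pbs-primary \
    --backup-id raw-disk-test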

Just as bad, or worse.

Code:
Upload image '/dev/mapper/pve-vm--100--disk--0' to 'root@pam@10.226.10.10:8007:pbs-primary' as tomtest.img.fidx
tomtest.img: had to backup 62.832 GiB of 80 GiB (compressed 42.404 GiB) in 673.35s
tomtest.img: average backup speed: 95.552 MiB/s
tomtest.img: backup was done incrementally, reused 17.168 GiB (21.5%)
Duration: 676.53s
End Time: Tue Jul  9 12:59:22 2024

As I said earlier, testing on an EPYC 9354P platform yields an average of 400 MB/s, which is considerably better than the average 150 MB/s I am getting with these Xeon Platinum 8268s. But this is still very slow considering the all-flash storage on both the hosts and the PBS, with a 2 x 10G LACP between them.

Am I just going to have to surrender to the fact that Proxmox's backup system is so single-threaded/CPU-bound that even with this calibre of hardware, this is the best I can expect?

I have attached a fresh set of tests below:

Code:
#### fio on PVE: ####

1M Sequential READ: bw=3534MiB/s (3706MB/s), 3534MiB/s-3534MiB/s (3706MB/s-3706MB/s), io=20.0GiB (21.5GB), run=5795-5795msec
1M Random READ: bw=3407MiB/s (3572MB/s), 3407MiB/s-3407MiB/s (3572MB/s-3572MB/s), io=20.0GiB (21.5GB), run=6012-6012msec
1M Sequential WRITE: bw=2424MiB/s (2541MB/s), 2424MiB/s-2424MiB/s (2541MB/s-2541MB/s), io=20.0GiB (21.5GB), run=8450-8450msec
1M Random WRITE: bw=2509MiB/s (2630MB/s), 2509MiB/s-2509MiB/s (2630MB/s-2630MB/s), io=20.0GiB (21.5GB), run=8164-8164msec

#### fio on PBS: ####
1M Sequential READ: bw=5811MiB/s (6093MB/s), 726MiB/s-728MiB/s (762MB/s-763MB/s), io=160GiB (172GB), run=28140-28196msec
1M Random READ: bw=5610MiB/s (5882MB/s), 701MiB/s-705MiB/s (735MB/s-739MB/s), io=160GiB (172GB), run=29061-29206msec
1M Sequential WRITE: bw=6097MiB/s (6393MB/s), 762MiB/s-762MiB/s (799MB/s-799MB/s), io=160GiB (172GB), run=26871-26872msec
1M Random WRITE: bw=3309MiB/s (3470MB/s), 414MiB/s-418MiB/s (434MB/s-438MB/s), io=160GiB (172GB), run=49001-49515msec

#### iperf between PVE & PBS: ####

1 connection:

root@jupiter:/home# iperf -c 10.226.10.10 -P 1
------------------------------------------------------------
Client connecting to 10.226.10.10, TCP port 5001
TCP window size: 16.0 KByte (default)
------------------------------------------------------------
[  1] local 10.226.10.21 port 48480 connected with 10.226.10.10 port 5001 (icwnd/mss/irtt=14/1448/247)
[ ID] Interval       Transfer     Bandwidth
[  1] 0.0000-10.0040 sec  10.9 GBytes  9.39 Gbits/sec


4 connections:

root@jupiter:/home# iperf -c 10.226.10.10 -P 4
------------------------------------------------------------
Client connecting to 10.226.10.10, TCP port 5001
TCP window size: 16.0 KByte (default)
------------------------------------------------------------
[  1] local 10.226.10.21 port 35008 connected with 10.226.10.10 port 5001 (icwnd/mss/irtt=14/1448/267)
[  3] local 10.226.10.21 port 35030 connected with 10.226.10.10 port 5001 (icwnd/mss/irtt=14/1448/177)
[  2] local 10.226.10.21 port 35016 connected with 10.226.10.10 port 5001 (icwnd/mss/irtt=14/1448/81)
[  4] local 10.226.10.21 port 35014 connected with 10.226.10.10 port 5001 (icwnd/mss/irtt=14/1448/218)
[ ID] Interval       Transfer     Bandwidth
[  2] 0.0000-10.0141 sec  3.65 GBytes  3.13 Gbits/sec
[  1] 0.0000-10.0142 sec  3.65 GBytes  3.13 Gbits/sec
[  4] 0.0000-10.0141 sec  3.64 GBytes  3.12 Gbits/sec
[  3] 0.0000-10.0142 sec  10.9 GBytes  9.36 Gbits/sec
[SUM] 0.0000-10.0009 sec  21.9 GBytes  18.8 Gbits/sec


#### Proxmox Backup Client Benchmark on PVE: ####

root@jupiter:/home# proxmox-backup-client benchmark --repository root@pam@10.226.10.10:pbs-primary
Uploaded 568 chunks in 5 seconds.
Time per request: 8820 microseconds.
TLS speed: 475.52 MB/s
SHA256 speed: 336.69 MB/s
Compression speed: 366.67 MB/s
Decompress speed: 585.46 MB/s
AES256/GCM speed: 1178.09 MB/s
Verify speed: 212.33 MB/s

#### Proxmox Backup Client Benchmark on PBS: ####

root@pbs-primary:~# proxmox-backup-client benchmark
SHA256 speed: 453.65 MB/s
Compression speed: 425.09 MB/s
Decompress speed: 600.39 MB/s
AES256/GCM speed: 1196.98 MB/s
Verify speed: 257.29 MB/s

Perhaps the bigger issue here is that I am only able to get ~400 MB/s on an EPYC 9354P machine. It's not possible to go much faster...

Code:
#### Testing on an EPYC 9354P: ####

root@1BG:~# proxmox-backup-client benchmark --repository root@pam@10.226.10.10:pbs-primary
Uploaded 574 chunks in 5 seconds.
Time per request: 8752 microseconds.
TLS speed: 479.22 MB/s
SHA256 speed: 1872.98 MB/s
Compression speed: 563.11 MB/s
Decompress speed: 728.89 MB/s
AES256/GCM speed: 1448.78 MB/s
Verify speed: 521.97 MB/s

### Backup job example EPYC 9354P: ###

INFO:   0% (2.2 GiB of 250.0 GiB) in 3s, read: 738.7 MiB/s, write: 382.7 MiB/s
INFO:   1% (3.4 GiB of 250.0 GiB) in 6s, read: 408.0 MiB/s, write: 402.7 MiB/s
INFO:   2% (5.1 GiB of 250.0 GiB) in 10s, read: 439.0 MiB/s, write: 413.0 MiB/s
INFO:   3% (8.7 GiB of 250.0 GiB) in 17s, read: 529.1 MiB/s, write: 388.0 MiB/s
INFO:   4% (11.4 GiB of 250.0 GiB) in 20s, read: 936.0 MiB/s, write: 356.0 MiB/s
INFO:   5% (12.8 GiB of 250.0 GiB) in 23s, read: 469.3 MiB/s, write: 400.0 MiB/s
INFO:   6% (15.1 GiB of 250.0 GiB) in 28s, read: 472.8 MiB/s, write: 378.4 MiB/s
 
Last edited:
Thanks for the additional numbers! That does indeed look like there is some severe bottleneck happening that we should get to the bottom of.
 
