Recent content by tomstephens89

  1. PBS Performance improvements (enterprise all-flash)

    Just as bad, or worse.
    Upload image '/dev/mapper/pve-vm--100--disk--0' to 'root@pam@10.226.10.10:8007:pbs-primary' as tomtest.img.fidx
    tomtest.img: had to backup 62.832 GiB of 80 GiB (compressed 42.404 GiB) in 673.35s
    tomtest.img: average backup speed: 95.552 MiB/s
    tomtest.img: backup was done...
  2. PBS Performance improvements (enterprise all-flash)

    Some seriously newer CPUs than my Xeon Platinum 8268s, I see. That's a huge leap in performance. I have just tested using a host with an Epyc 9354P in it and I get 400MB/s on there. This smells like a massive single-thread performance limitation to me.
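    A minimal sketch of one way to compare that single-thread ceiling, assuming proxmox-backup-client is available on the PVE host and reusing the repository string quoted in the log above; the built-in benchmark reports SHA256, compression and TLS speeds, which are a reasonable proxy for per-core throughput:

      # Benchmarks hashing, compression and TLS against the given repository
      proxmox-backup-client benchmark --repository root@pam@10.226.10.10:8007:pbs-primary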
  3. PBS Performance improvements (enterprise all-flash)

    What CPUs do you have in your servers that are getting close to 1GB/s?
  4. PBS Performance improvements (enterprise all-flash)

    What CPUs are in your PVE hosts? I am able to get multi-GB/s read/write on both my PVE hosts and the PBS datastore. I am able to max the bonded 10G network between them with iperf. I am able to get over 500MB/s using SCP.... But Proxmox backups run at 150-200MB/s. Changing sync level to...
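    A minimal sketch of the network and transfer checks referred to above (the PBS address is taken from the log earlier in this list; the test file path is hypothetical, and iperf3 is shown although the original run may have used iperf2):

      # Raw TCP throughput across the bonded 10G link
      iperf3 -c 10.226.10.10

      # Single SSH/SCP stream as a rough per-stream upper bound
      scp /tmp/testfile.raw root@10.226.10.10:/tmp/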
  5. PBS Performance improvements (enterprise all-flash)

    Have just set recordsize to 1M on my main datastore. Luckily we rebuilt the PBS yesterday to test ZFS, as everything on here so far has been with hardware RAID 10. However, even after setting this recordsize, there is no difference in backup performance. I can't get past 200MB/s. See iperf is...
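    A minimal sketch of the recordsize change described above (the pool/dataset name is hypothetical; note that recordsize only applies to data written after the change, so existing chunks keep their old block size):

      # 1M records on the PBS datastore dataset
      zfs set recordsize=1M rpool/pbs-datastore
      zfs get recordsize rpool/pbs-datastore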
  6. PBS Performance improvements (enterprise all-flash)

    We have rebuilt the PBS a couple of times during testing and have also used more than one node for testing. The PVE hosts are all the same. To clarify: Jupiter is now running standalone as a PVE host, pbs-primary is the SSD-backed PBS as above, and Mercury is one of our production cluster nodes. Network on...
  7. PBS Performance improvements (enterprise all-flash)

    Latest builds all round; the PBS was installed 2 days ago and the PVE hosts a week ago. Yes, I am aware of the sparse backups, but when it's got actual data to copy, it maxes out around 200MB/s, frequently less. Benchmarks on the PBS storage with fio are even better. It's a 24-SSD RAID 10 array...
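    A rough sketch of the kind of fio run used to benchmark the datastore (job size, queue depth and the mount path are assumptions, not the exact parameters from the original test):

      # Sequential 1M writes against the datastore mount, bypassing the page cache
      fio --name=seqwrite --directory=/mnt/datastore/pbs-primary \
          --rw=write --bs=1M --size=10G --numjobs=4 --iodepth=32 \
          --ioengine=libaio --direct=1 --group_reporting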
  8. Improve verification process - change CPU

    Well, that's good to hear. I will keep a close eye on it.
  9. Why Restore is so slow from PBS and what is slow/fast ?

    I have been testing using my secondary PBS, which is all spinning disks, vs my primary, which is all SSD.... Performance is very close, almost the same. There is a MASSIVE CPU limitation here.
  10. Improve verification process - change CPU

    Interesting, but this only looks to apply to PBS at the datastore level for verify 'jobs', rather than the chunk verification/checksumming process which happens at backup time on the host.
  11. Improve verification process - change CPU

    I am also suffering poor backup performance, but am running powerful enterprise hardware with all-flash storage in hardware RAID 10... I believe the 'chunk verification' is the bottleneck, but these are modern Xeon Platinums in my hosts...
  12. Why Restore is so slow from PBS and what is slow/fast ?

    Resurrecting this thread as I have the same problem. Performance of backup and restore is terrible, even with a 10G network and all-flash source/destination servers. See here: https://forum.proxmox.com/threads/pbs-performance-improvements-enterprise-all-flash.150514/ I am also running a...
  13. PBS Performance improvements (enterprise all-flash)

    Further testing reveals no disk bottleneck at the source, no disk bottleneck at the PBS, iperf able to saturate the network, and SCP able to exceed 500MB/s.... Yet backups run at an average of 200MB/s and restores are much, much slower. Anyone care to shine any light on this? My hardware and network are good but...
  14. [SOLVED] How to recreate symlink

    Ok then, we shouldn't have an issue rejoining as we haven't manually pinned any keys. To remove the dead node, we simply ran pvecm delnode XXXX and then removed the node's folder from the cluster filesystem.
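    A minimal sketch of that removal sequence (the node name is a placeholder, as in the post; /etc/pve is the pmxcfs cluster filesystem mount on PVE):

      # On a surviving cluster member: remove the dead node from the cluster
      pvecm delnode XXXX

      # Then clear its leftover directory from the cluster filesystem
      rm -r /etc/pve/nodes/XXXX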