Hello.
I would like to switch from two slow PBS instances to a single efficient one (described below). I'd like your feedback on whether this hardware fits my usage.
Here is my current situation:
On the first instance, I centralize the backups of 6 PVE clusters located on remote sites to a central site (~80 VMs in total, LXC and KVM).
I back up all guests (KVM and LXC) 4 times a day, with fairly long retention (360 hourly, 15 weekly, 6 monthly). In total this represents 4.5 TB.
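For reference, that retention corresponds roughly to the following prune options per backup group (the repository and group name below are just placeholders, not my real ones):

```
# Dry-run prune for one VM group with my retention settings
# (repository and group name are placeholders).
proxmox-backup-client prune vm/100 \
    --repository admin@pbs@192.0.2.5:datastore1 \
    --keep-hourly 360 --keep-weekly 15 --keep-monthly 6 \
    --dry-run
```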
The datastore volume is mounted over NFS from a ZFS NAS (TrueNAS).
The global "Prune Job" task takes ~20min, the "Garbage" task takes about 12h, and I no longer perform integrity checks because the VM is totally saturated due to diskIOs. I also have IO Delay and Load Average values that are too high (probably due to the slow filesystem), as I have the same problem with 48 x Intel(R) Xeon(R) Silver 4216 CPU @ 2.10GHz (2 Sockets).
On the second PBS instance I back up the main cluster of my infrastructure: 7 nodes with ~150 VMs (LXC and KVM). I created this second instance to spread the server load and, above all, to use another ZFS NAS while waiting for new hardware. The issues are quite similar (slow I/O, high CPU, very slow GUI listing of backups...).
Here is what I want to buy to unify my PBS instances, finally get a responsive GUI listing of my thousands of backups, and be able to run regular verify jobs without them taking days or weeks, or simply breaking the system:
Intel Xeon Gold 6434, 3.7 GHz, 8 cores, 195 W (single socket)
256 GB ECC RAM
PBS storage: 5 x 3.2 TB NVMe Gen4 high-performance mixed-use
PBS system: NS204i-u (NVMe hot-plug boot-optimized storage device)
2 x 10 Gbps LACP for the network
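For the network part, the LACP bond on the PBS host would look roughly like this (interface names and IP addresses are placeholders):

```
# /etc/network/interfaces sketch for the 2x10G LACP bond
# (interface names and IP addresses are placeholders).
auto bond0
iface bond0 inet static
    address 192.0.2.10/24
    gateway 192.0.2.1
    bond-slaves eno1 eno2
    bond-miimon 100
    bond-mode 802.3ad
    bond-xmit-hash-policy layer3+4
```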
My questions are:
Does this configuration seem well balanced to you?
Is it better to use the integrated PBS ZFS management (creating a RAIDZ1 or RAIDZ2 pool via the GUI using the 5 NVMe drives)?
Should I prioritize CPU frequency or core count?
I've read everything and its opposite on the Internet. I'm not planning to add any special device, ZIL/SLOG, or L2ARC to this NVMe ZFS pool. What do you think? Are there any important parameters to set before creating the ZFS pool? Also, if you see another storage approach better suited to PBS, I'm interested.
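To make the question concrete, here is a rough sketch of the kind of pool and datastore I have in mind (pool name, device paths, and properties are only an example, not a final choice):

```
# Example only: RAIDZ2 pool over the 5 NVMe drives, then a PBS datastore on it.
# Device paths, pool name, and property values are placeholders/assumptions.
zpool create -o ashift=12 \
    -O compression=lz4 -O atime=off -O xattr=sa -O recordsize=1M \
    backup raidz2 /dev/nvme0n1 /dev/nvme1n1 /dev/nvme2n1 /dev/nvme3n1 /dev/nvme4n1

# Point a PBS datastore at the pool's default mountpoint (/backup).
proxmox-backup-manager datastore create backup /backup
```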
Last but not least: if it's best to manage the disks with ZFS directly in PBS, then I'm forced to install PBS on bare metal, right? It's impossible to virtualize it while keeping the same performance, isn't it?
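If virtualization is actually viable, I assume it would mean passing the NVMe drives through to a PBS VM on PVE, something like this (the VM ID and PCI addresses are placeholders to be looked up with lspci):

```
# Identify the NVMe controllers, then pass them through to the PBS VM.
# VM ID 100 and the PCI addresses are placeholders.
lspci -nn | grep -i nvme
qm set 100 -hostpci0 0000:c1:00.0
qm set 100 -hostpci1 0000:c2:00.0
```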
Thank you for reading this long post; I hope to benefit from your expert comments.
Adrien