Having swap on the same zpool as your root disk can cause such behaviour. If you don't need swap, disable it completely when using ZFS. If you do need it, create a separate zpool (it can live on a free partition of the same disks) and put your swap there.
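A rough sketch of what that could look like, assuming a spare partition /dev/sdX4 (device name and pool/zvol names are made up, adjust to your system):

```shell
# Create a separate pool on a spare partition (hypothetical device name)
zpool create swappool /dev/sdX4

# Create a zvol for swap; these property choices are common suggestions,
# not an official recommendation
zfs create -V 8G -b 4096 \
    -o compression=off \
    -o primarycache=metadata \
    swappool/swap

# Format and enable it
mkswap /dev/zvol/swappool/swap
swapon /dev/zvol/swappool/swap
```

Remember to add a matching entry to /etc/fstab if you want it active after a reboot.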
I'd recommend...
Did you try booting with an older kernel? I can see some messages regarding Broadcom NICs and we discovered a few problems with kernel 6.8 and Broadcom NICs.
You may use a backup hook script for this. You can find an example on your Proxmox VE server at /usr/share/doc/pve-manager/examples/vzdump-hook-script.pl to get you started. The event you're looking for is probably job-end.
Copy the script to e.g. /usr/local/bin/ and change it as you like...
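A minimal shell sketch of such a hook (the shipped example is Perl; vzdump passes the phase as the first argument, and `handle_phase` is just a name I made up):

```shell
#!/bin/sh
# vzdump calls the hook with the phase name as the first argument.
handle_phase() {
    case "$1" in
        job-end)
            # Your action here, e.g. send a notification or sync the backup
            echo "backup job finished"
            ;;
        *)
            : # ignore all other phases
            ;;
    esac
}

handle_phase "$1"
```

Then point vzdump at it, e.g. via the `script:` option in /etc/vzdump.conf or per backup job.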
please post the output of od --format=u1 --read-bytes=8 /mnt/backup-cephfs/ns/backup/vm/440/2024-06-19T10\:32\:19Z/drive-virtio0.img.fidx
The output should match exactly 0000000 47 127 65 237 145 253 15 205
if not (and it seems like it is), your fixed index is corrupted or something else than a...
I'd recommend using a ZFS RAID10 pool on top of your HBA, not an mdadm RAID configuration. Besides not being supported by Proxmox, mdadm has drawbacks when it comes to ops/sec and other issues. It may still be a valid approach for a file server, but it's not the best thing for a PBS or any...
It seems a fleecing block device from an earlier backup still exists. If the problem persists, you may manually remove the vm-103-fleece-0 RBD image (rbd rm <poolname>/<rbdname>).
As your TrueNAS machine also has 2x 1G links, the Proxmox nodes could end up on different physical links on the TrueNAS side, but just as likely both on the same one.
If you just want to check whether it works in general, try testing with iperf3 (if that's available on TrueNAS - not sure). Set your LACP to...
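For example (hostname, stream count and run time are made up):

```shell
# On the TrueNAS side (if iperf3 exists there), start a server: iperf3 -s
# From a Proxmox node, run several parallel streams for 30 seconds:
iperf3 -c truenas.example.local -P 4 -t 30
```

Run it from two nodes at once to see whether the bond actually spreads the traffic across both links.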
You may try adding KVM options of your choice by using qm set <vmid> --args <your-args-for-qemu/kvm>.
Your options will be appended to the command line that is used to start the KVM VM.
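For example (the VMID and the serial device arguments here are just an illustration, not something you necessarily want):

```shell
# Append extra QEMU arguments to VM 100
qm set 100 --args "-serial tcp:localhost:4444,server,nowait"

# Remove them again
qm set 100 --delete args
```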
The Proxmox node is your only NFS client. That means your connection is *always* between the same pair of source MAC, source IP, target MAC and target IP (layer 2 + 3), which results in the same hash every time and therefore always the same single physical link of your bond.
LACP is able to use...
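A toy illustration of why that happens, assuming a simple layer-2 xor hash (all values are made up):

```shell
# Last byte of source and destination MAC (hypothetical values)
src_mac_last=26
dst_mac_last=43
links=2   # number of physical links in the bond

# XOR the inputs and take the result modulo the link count;
# constant inputs always select the same link index
echo $(( (src_mac_last ^ dst_mac_last) % links ))
```

As long as the MAC/IP pair never changes, the selected index never changes, so all NFS traffic stays on one link.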
If you set the sync-level to none, you might end up with broken backups in case of a power loss. It should only affect newly written chunks, so there should be nearly no risk for existing/older backups.
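If you want to change it, the datastore tuning option can be set roughly like this (the datastore name is made up; `filesystem` is a middle ground between `none` and `file`):

```shell
proxmox-backup-manager datastore update store1 --tuning 'sync-level=filesystem'
```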
It seems your backup storage is too slow, which is hard to believe with 30 disks in a raid10.
Yes, that should be sufficient. Just make absolutely sure they are enterprise SSDs.
During the PBS installation you can choose which disks the system gets written to - all of them, only the first mirror, or the SSDs. Under the options, the size must - no matter which...
AFAIK, the `netmtu` setting has no effect in Corosync 3 with knet as the transport (which has been the default since Corosync 3 / Proxmox VE 6). I can't tell why you're only getting these messages since v8.
Would you mind posting your corosync.conf? Did you try removing the netmtu setting from the config?