by chance i found today that i have a noticeable amount of .tmp files inside my datastores.
it looks like they are a bit older.
i guess they are remains of some interrupted jobs/backup tasks or maybe crashes, and it shouldn't do any harm to delete them?
verify is perfectly fine for those DS...
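for example, to list old leftovers first and only then remove them - the datastore path and age threshold are just placeholders, adjust to your setup:

find /mnt/datastore/your-datastore -name '*.tmp' -mtime +7 -ls
# once you are happy with the list:
find /mnt/datastore/your-datastore -name '*.tmp' -mtime +7 -delete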
trying to answer this myself - if the cluster filesystem (and thus writes to config.db) is stopped before transferring the zfs snapshot, it should not be a problem to bring up the cloned system again, as long as the original one has not been restarted in between - as this would be the same as if...
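a minimal sketch of what i mean (pve-cluster is the real service behind pmxcfs/config.db; the snapshot name and target pool are just placeholders):

systemctl stop pve-cluster                            # stop pmxcfs so config.db is no longer written
zfs snapshot -r rpool@migrate                         # consistent recursive snapshot of the root pool
zfs send -R rpool@migrate | ssh root@target zfs receive -F rpool2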
i have used the nice tutorial at https://aaronlauterer.com/blog/2021/proxmox-ve-migrate-to-smaller-root-disks/ several times to migrate a proxmox host to a smaller zfs disk.
now i need to do that with a proxmox host that is part of a cluster.
do i need to take any additional...
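to be concrete, at minimum i'd check the cluster state before taking the node down (standard pve tooling):

pvecm status   # quorum / vote info
pvecm nodes    # cluster membership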
i think a fix is especially needed in the design/architecture, i.e. VM IO needs to be made independent from the backup path/speed/availability.
THIS is the REAL issue (and has always been).
but i guess it's not easy for the proxmox devs to solve, because qemu upstream is also involved in this...
> Yes, a backup job will run in parallel on each node (one backup per node at a time).
and that may be just too much in larger clusters
https://bugzilla.proxmox.com/show_bug.cgi?id=3086
> I am able to get the same error messages by simply cutting the connection to the PBS for two minutes during backup.
that's very unfortunate. i see that for now there is no other way to do bitmap-based backup without slowing down IO in the VM when pbs is slow, but when the connection to pbs is getting...
when there is no load inside the VM, then you are right.
are you using virtio scsi single with iothread enabled (= virtio dataplane)?
if not, please do.
see https://bugzilla.kernel.org/show_bug.cgi?id=199727#c8
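for reference, switching an existing VM over could look like this (vmid 100 and the storage/volume name are just placeholders):

qm set 100 --scsihw virtio-scsi-single
qm set 100 --scsi0 local-zfs:vm-100-disk-0,iothread=1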
if that happens, then your backup server or your connection to the backup server may be too slow, and the IO inside the VM too high during the backup window. the VM gets throttled when there is too much IO during backup. please check backup throughput and VM IO throughput. please open your own thread for...
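to get a quick number for the pbs side, there is a built-in benchmark (the repository string is a placeholder for your user/host/datastore):

proxmox-backup-client benchmark --repository root@pam@pbs.example.com:your-datastore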
>i guess they aren't , because i cannot see that the backup client has knowledge what was the last backup snapshot
apparently the client MUST have knowledge of that.
if i delete the last backup from the pbs, on the next backup run the bitmap is invalidated/cleared and a new full backup is being...
do we get corruption this way or not ?
is a subsequent incremental backup run sane ?
i guess they aren't, because i cannot see that the backup client has knowledge of what the last backup snapshot was, and without that, it assumes the backup server has the last valid snapshot where the VMs block...
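one thing you can do in any case is let pbs check chunk integrity on its side (datastore name is a placeholder):

proxmox-backup-manager verify your-datastore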
if you want to get a clue why your verify performance is low, do a test like this:
tar cf - ./your-datastore | pv >/dev/null
on my array i get <=50MB/s
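the tar test above is sequential; verify load is closer to random chunk reads, so a fio run like this may be more telling (all parameters here are just an example, adjust to your setup):

fio --name=verify-sim --directory=/your-datastore --rw=randread --bs=64k --size=2G --numjobs=4 --iodepth=16 --ioengine=libaio --direct=1 --runtime=60 --time_based --group_reporting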
> It speeds up everything, as the HDDs aren't any longer hit by a lot a small random IO,
> as the metadata doesn't have to be stored on the HDDs anymore
unfortunately, no. @Dunuin
i have set zfs_arc_meta_balance=50000 in zfs to favour metadata and don't see a really noticeable improvement...
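for reference, setting it could look like this (zfs_arc_meta_balance is an openzfs 2.2+ tunable; the modprobe.d file name is just my choice):

echo 50000 > /sys/module/zfs/parameters/zfs_arc_meta_balance                  # runtime only
echo "options zfs zfs_arc_meta_balance=50000" >> /etc/modprobe.d/zfs.conf     # persistent, may need update-initramfs -u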
i would open a support call with AVM about that.
oh, and please boot another OS on the proxmox host, e.g. from a live cd. does it still occur then? @farrow @Datei