I need to find the more relevant thread on this topic, but the problem I'm facing today is that a specific backup takes a very long time - and now it's not bottlenecking on anything other than a single CPU, given the nature of the PBS setup/operations over HTTPS on this old hardware.
I know I can run...
one reason NOT to install the PBS on the PVE. Reading that, I understood it to be a PBS that is close to the S3 storage. Yes, it'll be slow, so my advice would be NOT to use it as the actual PBS for the first backup, but rather as a PBS that synchronizes the already backed-up stuff, as a DR/failover...
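A minimal sketch of such a DR-side pull sync, assuming the primary PBS is already configured as a remote on the DR instance (the names `pbs-primary`, `main` and `drstore` are placeholders):

```
# Pull already-completed backups from the primary PBS into this DR instance.
# 'pbs-primary' (remote), 'main' and 'drstore' (datastores) are placeholders.
proxmox-backup-manager sync-job create dr-pull \
    --remote pbs-primary \
    --remote-store main \
    --store drstore \
    --schedule 'daily'
```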
Consider shutting down all apps, like the SQL server, and retry the sdelete -z.
Things like SQL can cause lots of file writes, which will "consume" storage if they aren't TRIMmed/zeroed.
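For reference, the invocation I mean, run from an elevated prompt inside the guest (the drive letter is whatever volume needs its free space reclaimed):

```
:: Sysinternals SDelete: -z writes zeroes over the free space, so the
:: host-side storage can reclaim/compress those blocks.
sdelete.exe -z C:
```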
What I'd want to do in your situation (having had fun with virtio-blk in the past without TRIM support, and having had to do something similar):
Attach a virtio-scsi device (enabling SSD & TRIM/discard on the device) to the Windows VM, copy the files off the current Windows drives, and swap the drives...
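A sketch of attaching such a disk from the PVE shell, assuming VMID 200 and a storage called `local-zfs` (both placeholders, as is the 100G size):

```
# Add a new SCSI disk on the virtio-scsi controller with discard (TRIM)
# and SSD emulation enabled. 'local-zfs:100' allocates a new 100G volume.
qm set 200 --scsihw virtio-scsi-pci --scsi1 local-zfs:100,discard=on,ssd=1
```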
known not to necessarily choose/use the best options for the underlying hardware, just the most common expected optimized values, so you need to know that yourself and check/tune accordingly. Especially for newer SSDs/NVMes you want ashift=13 (not 12 as in your case, which is good for 4k drives) else you have...
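To illustrate (pool/device names hypothetical; note that ashift is fixed per vdev at creation time, so changing it means rebuilding the pool):

```
# Check what the existing pool was created with.
zpool get ashift tank

# Create a pool with 8K sectors (ashift=13) for newer SSDs/NVMes.
zpool create -o ashift=13 tank mirror /dev/nvme0n1 /dev/nvme1n1
```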
the command that is actually the one I believe is *needed*:
reason: the writing of ZEROES, not just a TRIM command, as the values (especially the 1.13 compressratio) indicate that there is data in the "empty/deleted" portions of the VM's disk (like when you've done a defrag as above) and...
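A quick way to check that ratio on the zvol in question (dataset name as per this thread):

```
# A compressratio well above 1.00 on a supposedly mostly-empty zvol
# suggests stale, compressible data still sitting in the deleted regions.
zfs get compressratio,used,referenced tank/vm-200-disk-3
```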
so no snapshots, which leaves the other possibility: that the disk layout might be a problem for some reason(s), perhaps related to ashift and volume settings.
output of `zfs get all tank/vm-200-disk-3` and `zpool get all tank`?
This makes it sound like you might have some snapshots still hanging around of that disk/image.
please provide a `zfs list -r -t all tank | grep -i vm-200-disk-3` output - yes, it'll be long/etc. but it would have the info to confirm/deny my suspicions
a) ALWAYS enable compression on the ZFS - at...
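For example (lz4 being the usual low-cost default; `tank` is a placeholder pool name):

```
# Enable lightweight compression pool-wide; child datasets/zvols inherit it.
zfs set compression=lz4 tank
```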
yeah, the PCI/bus/etc. routing changes and that does mess up an already-installed Windows... you should be doing it from the get-go, i.e. installing onto virtio-SCSI, else you'll have that fun
hmmm... I wonder if you haven't had a one-off event that got "fixed"??
check/report on the SMART values for those two HDDs you're saying are failing...
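Something along these lines (smartmontools; `/dev/sdX` is a placeholder for each suspect drive):

```
# Full SMART report: overall health, reallocated/pending sector counts,
# and the drive's error log.
smartctl -a /dev/sdX
```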
nope, the setup is like a "stripe" from the moment you've attached the 2nd vdev (that being raidz1-1), and that means that if a vdev fails, ALL the data in the zpool is basically... gone (save for your backups). In short, the zpool will balance the data to an extent, so that (on average) both...
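To illustrate the layout being described (device names hypothetical): a pool built like this stripes across two raidz1 vdevs, surviving one failed disk per vdev but not the loss of a whole vdev:

```
# Two raidz1 vdevs in one pool = a stripe of raidz1s. Data is spread
# across both, so losing either vdev entirely takes the whole pool down.
zpool create tank \
    raidz1 /dev/sda /dev/sdb /dev/sdc \
    raidz1 /dev/sdd /dev/sde /dev/sdf
```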
replace with CMR drives?
Hendrik's rules of computing:
1) Make a backup
2) Make *ANOTHER* backup
2b) At least one backup *off* provider (ie. a totally different DC/owner/etc.)
3) *CHECK* those backups
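On the PBS side, rule 3 can be as simple as a manual (or scheduled) verify of the datastore, e.g. (datastore name is a placeholder):

```
# Re-read and checksum-verify the backup snapshots in the datastore.
proxmox-backup-manager verify drstore
```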
Sorry, but it looks like those drives aren't in a good state w.r.t. ZFS.
Anybody around here doing actual backups of the MinIO storage?
I've contemplated using PVE->PBS backups of the nodes (currently I deploy using LXCs) but I'm concerned w.r.t. the sequential nature of the node backups.
Which then brings me to the rclone mount type backups, ie. mount the bucket...
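A minimal sketch of that approach, assuming an rclone remote named `minio` pointing at the S3 endpoint is already configured (`mybucket` and the mountpoint are placeholders):

```
# Expose the bucket as a local filesystem so a file-level backup can read it.
rclone mount minio:mybucket /mnt/minio-backup --read-only --daemon
```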