Hello, I have no idea why you are facing this problem, but since I assume you restore to Ceph storage and all backups give you the same problem, perhaps try to restore to another storage like a local disk. It's possible that the problem is with Ceph/LVM and not with the backups. Also, some info about...
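For example, something like this should restore to local storage instead (the archive filename and VMID 100 here are just placeholders, use your real ones):

    # restore the vzdump archive to the "local" directory storage instead of Ceph
    qmrestore /var/lib/vz/dump/vzdump-qemu-100-2024_01_01-00_00_00.vma.zst 100 --storage local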
If the second Ceph cluster is meant for backups only, sure, you can do that. How many nodes do you have in the backup cluster? Do you already use PBS in some way?
I am no expert, but if you need both redundancy (no single point of failure) and fast incremental backups, you should run PBS AND PVE on the other cluster (or on one node of it), with physical disks on the PBS, not on CephFS, and NVMe if available. This gives you a good way to optimize the PBS...
The host would not be the best column; in a normal cluster the host is not that important, since VMs are supposed to be moved around. The cluster name, however, is a more correct column, since VMIDs only exist within that realm. VictorSTS has a good point with his idea of a UUID, and as we are accustomed to that...
You could try banaction = iptables-allports instead of multiport (if that's in your jail.local) to see if that blocks the SSH, but I suppose you also need to check your iptables INPUT/OUTPUT chains, since fail2ban utilizes iptables for doing the actual blocking.
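For example, roughly like this in jail.local (assuming the standard sshd jail, adjust to your setup):

    [DEFAULT]
    banaction = iptables-allports

    [sshd]
    enabled = true

Then you can verify the jail and the iptables chain it creates with:

    fail2ban-client status sshd
    iptables -L f2b-sshd -n -v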
No worries, I think everybody understood that, since QCOW2 is the native QEMU image format (out of 9 types). I would like to emphasize that ext4 on qcow2 works very well. However, my experience is that QCOW2 is a bit slow if the underlying transport is NFS,
About the test above, I believe you need to...
I am mostly using Ceph so I cannot say for sure, but for anything that needs disk I/O I prefer RAW over the QEMU image format.
NFS would work great mounted within the VM, as you say; it's great redundancy to have a remote PBS VM with NFS, since you can then restore to PVE even if the PBS hw is...
Normally the VM's virtual disks would already be made redundant by Ceph or ZFS (or HW RAID in your case) on the host, meaning ZFS inside the VM does not contribute much, and if the underlying FS is ZFS, well, nested ZFS is hard to recommend. However, if you do passthrough of physical disks to your PBS VM, ZFS is a great...
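Roughly like this, just as a sketch (VMID 101, the disk IDs, pool and datastore names are made up, use your own):

    # on the PVE host: pass the physical disks through to the PBS VM
    qm set 101 -scsi1 /dev/disk/by-id/ata-DISK1-SERIAL
    qm set 101 -scsi2 /dev/disk/by-id/ata-DISK2-SERIAL

    # inside the PBS VM: build a ZFS mirror on them and put a datastore on it
    zpool create -o ashift=12 backup mirror /dev/sdb /dev/sdc
    zfs create backup/store1
    proxmox-backup-manager datastore create store1 /backup/store1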
That's good. I just don't follow what you want to back up and where. You have two clusters, one running Ceph, or both running Ceph? Do you back up VMs from both clusters or just one? Do you plan to have a physical PBS node, or will you only use PBS as a VM?
The only way I know is to add a vdev with 4 more disks if you want to keep your 4-disk raidz. Another way is to create a second pool with the two 10 TB disks as a mirror (not raidz) and zfs send the data over. Mirrors are sweet when it comes to adding and removing vdevs.
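Roughly like this (pool and disk names are just examples, double-check before running anything on real data):

    # option 1: grow the existing pool by adding a second 4-disk raidz vdev
    zpool add tank raidz /dev/sde /dev/sdf /dev/sdg /dev/sdh

    # option 2: new mirror pool on the two 10 TB disks, then copy the data over
    zpool create tank2 mirror /dev/sdi /dev/sdj
    zfs snapshot -r tank@move
    zfs send -R tank@move | zfs receive -F tank2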
Sorry, I meant the backup task log from the PVE box, not the PBS log. The size number in the Content tab on the PBS snapshot list is the size of the backed-up disk, not of the backup data. Just don't focus on that for the size of the backup data. I promise you, the backup data size is low. What do you mean by...