Hope the devs can work on it ASAP, as it seems to affect quite a few use cases and in mine has become a pain point (one that could have been avoided if this very PBS had had its special device usage monitored, which wasn't the case :confused:)
TL;DR:
Is it possible to increase the timeout in PVE when listing PBS backups? It seems to time out at ~25 seconds and there is no way to get the list of backups to do a restore, neither from the storage view nor from within the VM itself.
Is there a way to really tell ZFS to always keep metadata in...
Haven't been able to take a deep look at it yet... Unfortunately I don't have any full-NVMe PBS with a capacity similar to my HDD+special device ones to really compare and draw some conclusions.
You can leave the "last resort" PBS storage disabled in PVE under Datacenter > Storage, so it won't show up in PVE nor be queried.
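If you prefer the CLI, toggling that from a PVE node should look roughly like this (the storage ID "pbs-lastresort" is just a placeholder for yours):

pvesm set pbs-lastresort --disable 1
pvesm set pbs-lastresort --disable 0    # re-enable it when you actually need it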
IMHO, that storage should not be configured in PVE unless absolutely necessary: in case of ransomware/a directed attack/an APT, an attacker would gain knowledge of that PBS and try...
In one of my PBS instances I use ZFS RAID10 with 8 HDDs + special device and I see a similar speed during restore, without any significant IO load on the drives themselves. I think there's some bottleneck somewhere else. Or, better said, somewhere else too. In my case, doing parallel restores reaches around 230 MBytes/sec...
Upper right corner, unfold the menu by clicking on the username and go to My Settings; there's a button to reset the layout there, which forces the browser to reload the SPA that runs in the browser to manage PVE.
To use replication among cluster nodes you need ZFS [1]. You will need a QDevice to keep quorum if one host fails/is down [2], especially if you use HA, to avoid node fencing [3].
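For reference, a rough sketch of adding a QDevice (the IP is a placeholder for your external host; that host needs corosync-qnetd installed, while the cluster nodes need corosync-qdevice):

apt install corosync-qdevice      # on every PVE node
apt install corosync-qnetd        # on the external QDevice host
pvecm qdevice setup 192.168.1.50  # run once from any cluster node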
DRBD isn't officially supported by Proxmox, although Linbit seems to support it. Also, PVE 7.4 is EOL and you should...
PVE has to push backups to PBS; there's no way PBS can create a "pull backup" from PVE. Your best option is to keep PBS in the same network, use a host firewall on PBS, and restrict which devices can reach your PBS host. And of course use proper permissions for the user used in PVE to...
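As a minimal sketch of such a host firewall on the PBS side with plain iptables, assuming 192.168.1.10 is the only PVE node that should reach the PBS API/GUI port 8007:

iptables -A INPUT -p tcp -s 192.168.1.10 --dport 8007 -j ACCEPT
iptables -A INPUT -p tcp --dport 8007 -j DROP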
"Garbage" and "Blasphemous" definitions aside, this is going to be extremely difficult to diagnose or even give some advice without every little detail of the settings for PVE, the VMs and Ceph. And some logs too, there should be some trace about why VM_A got killed.... Even if anyone takes...
IMHO, it makes no sense at all to use RAIDz with four drives; use RAID10: you get the same amount of accessible storage and up to twice the performance, among other things. Yes, even with the benefit of RAIDz2 supporting the loss of any two drives without data loss vs RAID10 losing 2 drives of...
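For illustration, a 4-drive RAID10 (striped mirrors) pool would be created roughly like this (pool name and disk IDs are placeholders; use the real /dev/disk/by-id paths of your drives):

zpool create tank mirror /dev/disk/by-id/ata-DISK1 /dev/disk/by-id/ata-DISK2 mirror /dev/disk/by-id/ata-DISK3 /dev/disk/by-id/ata-DISK4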
In PBS 3.3 there's a checkbox Reuse existing datastore which is meant to do just that: re-add an existing datastore in a given path. Previously you could only do it from the CLI or by editing the datastore.cfg file as you did.
Could not find a mention of the new GUI option in the docs, though...
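For reference, a minimal hand-written entry in /etc/proxmox-backup/datastore.cfg looks roughly like this (name and path are just examples):

datastore: mystore
	comment re-added existing datastore
	path /mnt/datastore/mystore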
LVM does not support having two VGs with the same name, which is exactly your issue here. You may try to add filters to lvm.conf in each of the PVE installations excluding the other one's disk, so it won't be scanned on boot and you have only one "pve" VG active in each.
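As a sketch, in the devices section of /etc/lvm/lvm.conf on each installation you could reject the other installation's disk (the device path is just an example):

global_filter = [ "r|/dev/sdb.*|", "a|.*|" ]

and then run update-initramfs -u so the filter is also applied during early boot.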
You can only choose "local" as the template source directory. The LXC itself can be stored on either of the other two ZFS storages.
Because "backup" type requires a file storage and ZFS is a block storage. You may manually create a zfs filesystem on that zpool, add it as "directory" type of...
Check /etc/hosts in both servers and correct the entry for their hostnames using the proper IP for your setup. To apply the changes, either reboot both hosts or run:
systemctl restart pve-cluster
systemctl restart pveproxy
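For reference, each entry in /etc/hosts should look something like this (IP and names are placeholders for your setup):

192.168.1.10 pve1.example.com pve1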
You can move the disks from the webUI: in the Hardware section of the VM, select "Move disk". You can do it with the VM running too, but you will lose thin provisioning in the process (although you can recover it afterwards).
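To get the thin provisioning back afterwards, one common approach is to enable discard on the moved disk and trim from inside the guest (VMID, disk and storage names are placeholders):

qm set 100 --scsi0 big_pool:vm-100-disk-0,discard=on
fstrim -av    # run inside the guest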
Of course you need enough space in rpool to do so... Yes, you can remove...
From the web UI you can choose the storage to restore the VM to, but only for all the disks at once. I suggest restoring the backup to big_pool and, once restored, moving the root disk to rpool if you want them on different storages.
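From the CLI, that sequence would look roughly like this (VMID, backup volume and disk name are placeholders):

qmrestore "big_pbs:backup/vm/100/2024-01-01T00:00:00Z" 100 --storage big_pool
qm move-disk 100 scsi0 rpool --delete 1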
Using an intermediate namespace is how I do similar tasks... sometimes, but not always: typically the amount of space used by those "expendable" snapshots is so low that it is not worth the hassle and I simply use adequate "Last" and "Daily" numbers in the prune policy.
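As an illustration of those keep numbers, a manual prune (here as a dry run) against a backup group would look roughly like this with proxmox-backup-client; the repository and group are placeholders:

proxmox-backup-client prune vm/100 --repository root@pam@192.168.1.20:datastore1 --keep-last 3 --keep-daily 7 --dry-run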
Not sure if I understand this right, but it makes no sense to have hourly backups if your prune policy only keeps one daily backup. Would you mind elaborating with a detailed example?
Ideally you should have two independent prune policies so deletions on the main datastore do not propagate automatically to the other(s). Think about human error, a bug, malware, etc. removing backups from the primary datastore.