Search results

  1. VictorSTS

    Increase PVE timeout when listing PBS storage contents

Hope devs can work on it asap, as it seems to affect quite a few use cases and in mine it has become a pain point (one that could have been avoided if this very PBS had had its special device usage monitored, which wasn't the case :confused:)
  2. VictorSTS

    Increase PVE timeout when listing PBS storage contents

TL;DR: Is it possible to increase the timeout in PVE when listing PBS backups? It seems to time out at ~25 seconds and there is no way to get the list of backups to do a restore, neither from the storage view nor from within the VM itself. Is there a way to really tell ZFS to always keep metadata in...
  3. VictorSTS

    Optimal HDD ZFS configuration for PBS

Haven't been able to take a deep look at it yet... Unfortunately I don't have any full-NVMe PBS with a capacity similar to my HDD+special device ones, so I can't really compare and draw conclusions.
  4. VictorSTS

    Side effects of disabled PBS account to access datastore on PVE

You can leave the "last resort" PBS storage disabled in PVE (Datacenter > Storage) so it won't show in PVE nor be queried. IMHO, that storage should not be configured in PVE unless absolutely necessary: in case of ransomware/a directed attack/APT, an attacker would gain knowledge of that PBS and try...
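
    As a minimal CLI sketch (the storage ID "pbs-lastresort" is hypothetical), the same disable flag can be toggled with pvesm:

        pvesm set pbs-lastresort --disable 1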
  5. VictorSTS

    Optimal HDD ZFS configuration for PBS

In one of my PBS I use ZFS RAID10 with 8 HDDs + special device and I see a similar speed during restore, without any significant IO load on the drives themselves. I think there's some bottleneck somewhere else. Or, better said, somewhere else too. In my case, parallel restores reach around 230 MBytes/sec...
  6. VictorSTS

    strange issue-no node visible but VMS all working

In the upper right corner, unfold the menu by clicking on the username and go to My Settings; there's a button there to reset the layout, which forces the browser to reload the SPA that runs in the browser to manage PVE.
  7. VictorSTS

    DRBD in PROXMOX 7.4

To use replication among cluster nodes you need ZFS [1]. You will need a QDevice to keep quorum if one host fails/is down [2], especially if you use HA, to avoid node fencing [3]. DRBD isn't officially supported by Proxmox, although Linbit seems to support it. Also, PVE 7.4 is EOL and you should...
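
    A minimal sketch of the QDevice setup described in the PVE docs (the IP is hypothetical):

        # on the external QDevice host (a small Debian machine outside the cluster)
        apt install corosync-qnetd
        # on every cluster node
        apt install corosync-qdevice
        # then, from one cluster node
        pvecm qdevice setup 192.0.2.10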
  8. VictorSTS

    Off network PBS best practice?

PVE has to push backups to PBS; there's no way PBS can create a "pull backup" from PVE. Your best option is to keep PBS in the same network, use a host firewall on PBS, and restrict which devices can reach your PBS host. And of course use proper permissions for the user used in PVE to...
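
    A host-firewall sketch, assuming nftables with an existing inet/filter/input chain (the subnet is hypothetical); PBS serves its API and backup traffic on TCP port 8007:

        nft add rule inet filter input ip saddr 192.0.2.0/24 tcp dport 8007 accept
        nft add rule inet filter input tcp dport 8007 drop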
  9. VictorSTS

    CEPH - one pool crashing can bring down other pools and derail whole cluster.

    "Garbage" and "Blasphemous" definitions aside, this is going to be extremely difficult to diagnose or even give some advice without every little detail of the settings for PVE, the VMs and Ceph. And some logs too, there should be some trace about why VM_A got killed.... Even if anyone takes...
  10. VictorSTS

    A New (first) Proxmox Backup Server Setup Questions

IMHO, it makes no sense at all to use RAIDz with four drives; use RAID10: you get the same amount of accessible storage and up to twice the performance, among other things. Yes, even with the benefit of RAIDz2 surviving the loss of any two drives without data loss vs RAID10 losing 2 drives of...
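
    For reference, a four-drive ZFS RAID10 is two striped mirrors (device names are hypothetical):

        zpool create tank mirror /dev/sda /dev/sdb mirror /dev/sdc /dev/sdd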
  11. VictorSTS

    How can I attach a Datastore Backup from a USB Hard Drive to Copy to New Datastore?

In PBS 3.3 there's a checkbox Reuse existing datastore which is meant to do just that: re-add an existing datastore at a given path. Previously you could only do it from the CLI or by editing the datastore.cfg file as you did. Could not find a mention in the docs for the new GUI option, though...
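
    For the manual route, a sketch of the entry format in /etc/proxmox-backup/datastore.cfg (name and path are hypothetical):

        datastore: usb-restore
            path /mnt/usb/datastore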
  12. VictorSTS

    Restoring a VM broke my Local-LVM

LVM does not support having two VGs with the same name, which is exactly your issue here. You may try to add filters in lvm.conf in each of the PVE installations, excluding the other one so it won't be scanned by the kernel on boot and you have only one "pve" VG active on each.
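
    A minimal lvm.conf sketch (the device name is hypothetical); note the filter may also need to be baked into the initramfs (update-initramfs -u) to take effect at boot:

        devices {
            # reject the disk carrying the other installation's "pve" VG,
            # accept everything else
            global_filter = [ "r|^/dev/sdb|", "a|.*|" ]
        }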
  13. VictorSTS

    LXC storage other than "local"

You can only choose "local" as the template source directory. The LXC itself will be stored in any of the other two ZFS storages. That's because the "backup" type requires a file storage and ZFS is a block storage. You may manually create a ZFS filesystem on that zpool, add it as "directory" type of...
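
    A sketch of that approach (pool, dataset and storage names are hypothetical):

        zfs create tank/dump
        pvesm add dir tank-dump --path /tank/dump --content backup,vztmpl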
  14. VictorSTS

    how to change the interface used for cluster communication

Check /etc/hosts in both servers and correct the entry for their host names using the proper IP for your setup. To apply the changes, either reboot both hosts or run: systemctl restart pve-cluster && systemctl restart pveproxy
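
    For illustration, hypothetical /etc/hosts entries where each host name resolves to an IP on the intended cluster network:

        192.0.2.11  pve1.example.local pve1
        192.0.2.12  pve2.example.local pve2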
  15. VictorSTS

    Restoring VM with large pool

Remove the ISO from the CD in the VM settings. Get used to leaving VMs without an ISO connected to avoid this issue in the future.
  16. VictorSTS

    Restoring VM with large pool

You can move the disks from the webUI: in the hardware section of the VM, select "Move disk". You can do it with the VM running too, but you will lose thin provisioning in the process (although you can recover it afterwards). Of course you need enough space in rpool to do so... Yes, you can remove...
  17. VictorSTS

    Restoring VM with large pool

From the web UI you can choose the storage to restore the VM to, but it applies to all the disks at once. I suggest that you restore the backup to the big_pool and, once restored, move the root drive to rpool if you want them in different storages.
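
    A CLI sketch of the same flow, assuming qmrestore accepts the PBS volume ID (VMID, snapshot volid and storage names are all hypothetical):

        # restore the whole VM to big_pool...
        qmrestore pbs-store:backup/vm/100/2024-01-01T00:00:00Z 100 --storage big_pool
        # ...then move only the root disk to the rpool-backed storage
        qm disk move 100 scsi0 local-zfs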
  18. VictorSTS

    Does a 3 nodes cluster + a Qdevice, allows a single PVE host to continue running VMs?

Using an intermediate namespace is how I do similar tasks... sometimes, but not always: typically the amount of space used by those "expendable" snapshots is so low that it's not worth the hassle, and I simply use adequate "Last" and "Daily" numbers in the prune policy.
  19. VictorSTS

    Does a 3 nodes cluster + a Qdevice, allows a single PVE host to continue running VMs?

Not sure if I understand this right, but it makes no sense to have hourly backups if your prune policy only keeps one daily backup. Would you mind elaborating with a detailed example?
  20. VictorSTS

    Does a 3 nodes cluster + a Qdevice, allows a single PVE host to continue running VMs?

Ideally you should have two independent prune policies, so deletes on the main datastore do not propagate automatically to the other(s). Think about human error, a bug, malware, etc. removing backups on the primary datastore.
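
    A sketch using PBS prune jobs (job IDs, store names and keep counts are hypothetical; assumes the prune-job CLI available in recent PBS releases):

        # short history on the primary, a longer independent history on the copy
        proxmox-backup-manager prune-job create prune-main --store main --schedule daily --keep-last 3 --keep-daily 7
        proxmox-backup-manager prune-job create prune-copy --store offsite --schedule daily --keep-daily 30 --keep-weekly 8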
