Search results

  1. VictorSTS

    [SOLVED] ONE OPTION: the chronic permissions flail on mounted drives / mount points and 97 threads and options

    IMHO all are misuses of an unprivileged LXC: it exists to isolate the LXC from the host as much as possible and that's why it is hard to "un-isolate" the LXC from the host. As simple as that. In this very case of a PBS, an unprivileged LXC adds nothing but headaches (but you already noticed...
  2. VictorSTS

    [SOLVED] ONE OPTION: the chronic permissions flail on mounted drives / mount points and 97 threads and options

    Try to get PBS on a completely different machine to ease the recovery when the PVE host(s) fail, at least the one on the "main" site. That would also completely work around the permissions issue. Remember that you can also use a Virtual Machine and use USB or full disk pass through to get the VM...
  3. VictorSTS

    HA with different zfs pools

    It's impossible to give accurate instructions/recommendations without accurate questions/configurations. Please post the exact config of the hosts regarding disk/storage so we can help you out instead of pulling out a crystal ball to guess your settings :)
  4. VictorSTS

    HA with different zfs pools

    IIUC, you have one host with some big pool where you host your fileserver VM. Let's call this pool "filepool"; it shows as storage "filepool" in PVE, restricted to the one host that runs the fileserver VM. You would just need to add disk(s) to another host, configure a ZFS with pool name...
  5. VictorSTS

    [SOLVED] ONE OPTION: the chronic permissions flail on mounted drives / mount points and 97 threads and options

    IMHO, using an unprivileged LXC for PBS doesn't make sense. The isolation that an unprivileged container provides is of no use in this use case. I mean, I don't expect PBS processes to misbehave or become malware that may try to get out of the LXC via the host's kernel and wreak havoc in the...
  6. VictorSTS

    [SOLVED] High io delay after losing a node

    This effectively renders the cluster useless: once you lose any OSD, there will be no I/O on the PGs stored in that OSD until they are recovered from the single copy still in the cluster. You should always use at least size=3, min_size=2 unless you can tolerate such downtime. Not to mention the... (a short sketch of this replica arithmetic follows after these results)
  7. VictorSTS

    Quota NameSpace

    AFAIK, there's no way to do that in a sensible way. The main reason is that all namespaces in the same datastore share the same set of chunks. That means that data is deduplicated among all namespaces of a datastore. If there were a way to set quotas on a namespace, which one would you...
  8. VictorSTS

    [Suggestion] Rights granted for accessing "Datacenter >> Backup" are too high

    Is this expected to be released with PVE9? I need to implement a solution for a use case where backup selection using tags would fit perfectly. Thanks!
  9. VictorSTS

    Prune Job: Keep first backup of the day instead of last?

    If you install qemu-guest-agent on the VM, the backup should be as consistent as with stop mode, unless you have some strange application there. Using two namespaces will require PVE to use two PBS storages, thus losing the dirty bitmap when doing backups to the other, although the stop mode backup...
  10. VictorSTS

    ZFS 2.3.0 has been released, how long until its available?

    Given what's mentioned a few posts above [1], it could be released with PVE9, which will be based on Debian Trixie. Trixie has not reached full freeze yet [2], but expecting that to happen soon, we could expect a PVE9 release around Q3 this year (I think I've seen some info posted by...
  11. VictorSTS

    Redundant storage connectivity options with 2 network switches

    If you just want NFS, then maybe for some use cases multipath is better. If you want to cover other usages, like normal VM traffic, where multipath doesn't apply/exist, bonding is an easy way to aggregate links.
  12. VictorSTS

    Redundant storage connectivity options with 2 network switches

    Create a bond at the PVE level using two physical NICs. Then add a bridge over that bond. Your host will deal with load balancing and redundancy of the network connection. Which bonding mode to use depends on the switches and their features (i.e. you need stacking or MLAG to use LACP 802.3ad...
  13. VictorSTS

    Constant CPU usage

    Unrelated to the CPU usage, which is indeed caused by proxmox-backup-api serving PVE storage status requests plus your little CPU, just a heads up: using RAID6 on BTRFS isn't a good idea, as it is not stable [1], and even a badly timed power outage can corrupt metadata and make you lose data. [1]...
  14. VictorSTS

    iothread-vq-mapping support

    That same post, at the third comment, has a link to the Proxmox Bugzilla where they are discussing the matter... [1] [1] https://bugzilla.proxmox.com/show_bug.cgi?id=6350
  15. VictorSTS

    iothread-vq-mapping support

    AFAIK it's in the works, as a forum search shows [1] [1] https://forum.proxmox.com/threads/feature-request-proxmox-9-0-iothread-vq-mapping.166919/
  16. VictorSTS

    Many Errors on Proxmox Hypervisor

    If you want to hide such messages, disable PCIe Advanced Error Reporting (PCIe AER) in your BIOS. Whatever hardware causes them will still cause them, but you won't see them in your logs. The downside is that uncorrectable errors, the bad ones, won't show up in your logs either... If you really...
  17. VictorSTS

    ProLiant DL360 Gen11 sas

    That makes no sense: 2 drives are the perfect RAID1 setup to install any OS. That would be the first mobo/controller in history that doesn't allow a RAID1 with two drives :)
  18. VictorSTS

    Shut down a VM when a different hosts shuts down

    This doesn't feel logical IMHO. If you end up starting the VM again on any of your surviving hosts, it will eventually use the same amount of memory it had before the shutdown, risking the OOM killer on the host it is running on. Maybe a simpler option could be to use memory ballooning for that...
  19. VictorSTS

    Ceph does not recover on second node failure after 10 minutes

    TL;DR: mon_osd_min_in_ratio is your friend [1]. Long story: by default it is 0.75, meaning that Ceph will not mark out a down OSD if ~25% of the OSDs are already marked out. That is, a minimum of 75% of the OSDs will remain in even if they are down, hence no recovery will happen. In your... (the arithmetic behind this is sketched after these results)
  20. VictorSTS

    Disable fs-freeze on snapshot backups

    Just a note: that option has been exposed in the webUI at least since March 2023, with the release of PVE7.4 (check the release notes [1]). [1] https://pve.proxmox.com/wiki/Roadmap#Proxmox_VE_7.4
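
A note on the size=3/min_size=2 advice in result 6: a placement group keeps serving I/O only while it still has at least min_size replicas available, which is why a size=2/min_size=2 pool blocks I/O as soon as a single OSD fails, while size=3/min_size=2 tolerates one failure. The snippet below is only a minimal sketch of that rule, not how Ceph itself evaluates PG state; the function name and the example numbers are illustrative.

    def pg_serves_io(size: int, min_size: int, failed_replicas: int) -> bool:
        # Simplified model: a PG keeps serving I/O while the surviving
        # replica count is still at least min_size (recovery state ignored).
        return (size - failed_replicas) >= min_size

    print(pg_serves_io(2, 2, 1))  # False: one lost OSD already blocks I/O on its PGs
    print(pg_serves_io(3, 2, 1))  # True: one failure is tolerated without blocking I/O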
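Similarly, the mon_osd_min_in_ratio behaviour described in result 19 comes down to simple arithmetic: with the default ratio of 0.75, Ceph stops automatically marking down OSDs out once roughly 25% of them are already out, so no further recovery is triggered for the remaining ones. The sketch below only illustrates that arithmetic and assumes straightforward rounding; Ceph's exact boundary handling may differ.

    import math

    def max_auto_marked_out(total_osds: int, min_in_ratio: float = 0.75) -> int:
        # At least ceil(total * ratio) OSDs must stay 'in', so the remainder is
        # the most that the monitors will ever mark 'out' automatically.
        return total_osds - math.ceil(total_osds * min_in_ratio)

    print(max_auto_marked_out(12))  # 3: after that, down OSDs stay 'in' and don't recover
    print(max_auto_marked_out(10))  # 2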