Search results

  1. Procedure for PBS system disk recovery? (intact datastores)

    Busy getting some documentation/preparations in place, and the question popped up: what is needed to recover a PBS server with intact datastores, but where the rpool/root disk(s) failed or got corrupted? (Case in point: a server with a single NVMe, or two similar SSDs that fail together, or other operator...
  2. Free Space Management

    Remember that PBS makes use of the atime (and assumes relatime is set on the filesystem and that noatime is NOT set) to "touch" the used/referenced chunks during the GC cycle; it then goes and finds all the chunks that have an atime of more than a day + 5 minutes, and those only get removed after...
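A minimal shell sketch of that mark-and-sweep mechanic (illustrative only - PBS's real GC is built into the daemon; the temp dir stands in for a datastore's .chunks directory, and GNU touch/find are assumed):

```shell
# demo in a throwaway dir; a real datastore keeps chunks under <datastore>/.chunks
store=$(mktemp -d)
touch "$store/chunk-used" "$store/chunk-orphan"
touch -d '2 days ago' "$store/chunk-used" "$store/chunk-orphan"  # backdate atime+mtime
touch -a "$store/chunk-used"               # marking phase: referenced chunks get "touched"
find "$store" -type f -amin +1445 -delete  # sweep: a day + 5 minutes = 1445 minutes
ls "$store"                                # -> chunk-used
```

The orphan chunk, last "accessed" two days ago, falls past the grace window and is removed; the freshly touched chunk survives.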
  3. [SOLVED] Blinking Cursor after Upgrade or fresh install & reboot (Hetzner AX51 Server)

    I had a similar "challenge" on an old (circa 2013) SuperMicro with EFI (before UEFI, I believe...) where both the PVE 7.1 and the PBS 2.1 installation ISOs (downloaded as at Wednesday 12 Jan '22) somehow installed GRUB too, instead of sticking to EFI... I eventually did a debug installation...
  4. PBS setup on server with 4 x HD - recommended ZFS setup option

    Just note that a 4-way RAID1 (i.e. 4 mirrored copies of the same data) is different from a 4-disk RAID10 (i.e. a stripe of 2x 2-disk mirrors). The 4-way RAID1 is capable of surviving 3 disk failures, while the 4-disk RAID10 can survive 2 disk failures, just not the two making up a specific vdev...
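For illustration, the two layouts would be created roughly like this (pool and disk names are placeholders; a sketch of the vdev shapes, not a sizing recommendation):

```shell
# 4-way RAID1: a single mirror vdev holding 4 copies of the data
# (survives any 3 disk failures; usable capacity of 1 disk)
zpool create tank mirror sda sdb sdc sdd

# 4-disk RAID10: a stripe of two 2-disk mirrors
# (survives 2 failures only if they land in different mirrors; capacity of 2 disks)
zpool create tank mirror sda sdb mirror sdc sdd
```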
  5. Planning new PBS - recommendations are welcome

    You are using 6x units; perhaps consider dRAID, though the "magical" better number is rather 7+, which then gives you the option of a hot spare to replace the failed unit immediately and do the physical swap in the next available maintenance slot. I'd then go for a 5-disk RAID-Z1 + 1 hot spare...
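As a sketch of the 7-unit option, assuming the dRAID vdev syntax from `zpool-create(8)` (`draid<parity>:<data>d:<children>c:<spares>s`; pool and disk names are hypothetical):

```shell
# 7 disks as dRAID1: groups of 5 data + 1 parity, with 1 distributed hot spare
zpool create tank draid1:5d:7c:1s sda sdb sdc sdd sde sdf sdg

# the classic alternative mentioned above: 5-disk RAID-Z1 plus a hot spare
zpool create tank raidz1 sda sdb sdc sdd sde spare sdf
```

The distributed spare is the dRAID selling point: rebuilds onto it run across all disks at once, instead of hammering a single replacement disk.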
  6. ZFS Datastore - Fragmentation with SSD?

    In the context of SSDs, all writes are "fragmented" by design by the storage controllers on/inside the NVMe/SSDs. In the context of ZFS, it's ZFS that was not able to find a contiguous block of size Y and had to split it into smaller chunks (ashift size) and spread them all over the storage...
  7. Cifs mounted Datastore: Do or don't?

    I'd test that portion, as PBS relies on a POSIX filesystem with last access time (atime) for "touching" chunks to check which chunks can be "expired". Rather set up a 2nd off-site/provider PBS with sync jobs pulling the backups for a 2nd *copy*
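One way to "test that portion" is to probe whether the mount actually refreshes atime on read. A small sketch (the mktemp file is only a local stand-in; point `f` at a file on the CIFS-mounted datastore):

```shell
# probe whether a mount refreshes atime on read
f=$(mktemp)                      # stand-in; use a file on the CIFS mount instead
touch -a -d '2 days ago' "$f"    # backdate the access time only
before=$(stat -c %X "$f")
cat "$f" > /dev/null             # reading should refresh atime (relatime semantics)
after=$(stat -c %X "$f")
[ "$after" -gt "$before" ] && echo "atime updates on read" || echo "atime NOT updated"
```

If it prints "atime NOT updated" on the CIFS mount, GC would treat every chunk as expired - exactly the failure mode described above.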
  8. PBS setup on server with 4 x HD - recommended ZFS setup option

    Perhaps the installer GUI enforces that (which also has another issue, like forcing same-size disks instead of using the smallest... but that is a different issue). A RAID-Z3 with 4 disks is like having a 4-way mirror, as in both cases you can lose 3 disks and it should still function... just the RAID-Z3...
  9. (Mass) Import/converting lxc-tar & qemu-vma into PBS storage?

    Yeah, I was (opportunistically) wondering/asking, but after a month, as my limited space on the NFS for the vzdumps only really allowed for 3 dumps (a daily, a weekly and a monthly), the PBS storage was already more "complete" compared to the vzdumps - it just would've been nice to "pre-seed" it with...
  10. InfluxDB metrics for PBS?

    Don't yet see anything on the roadmap, and was wondering if there is an InfluxDB metrics feed like for PVE?
  11. [SOLVED] Traffic control method used - sync pull (side) jobs not limited?

    Ah! That explains it, thank you! Missed that :facepalm:
  12. [SOLVED] Traffic control method used - sync pull (side) jobs not limited?

    I'm busy syncing a massive number of snapshots to a new PBS, and the destination (which initiated the sync job) doesn't seem to apply the filter to the inbound traffic; but when I add it to the source, it does seem to apply and limit the outbound traffic. I initially thought it was...
  13. Moving snapshots between local PBS datastores?

    Thank you. Looking at the 2.1 group filters, the docs aren't clear on whether I can have multiple group filters per sync job, i.e. I want to sync vm/1, ct/23, vm/456, ct/573 and ct/123456 only, so I guess I have to create 5 sync jobs, each with a specific group filter? (unless I'm into creating a...
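If separate jobs do turn out to be needed, they can at least be scripted. A sketch assuming the `proxmox-backup-manager sync-job create` CLI with its `--group-filter` option (check the man page on your version; job IDs, remote and datastore names here are made up):

```shell
# one sync job per backup group; "group:<type>/<id>" is the filter syntax
for group in vm/1 ct/23 vm/456 ct/573 ct/123456; do
    proxmox-backup-manager sync-job create "sync-${group//\//-}" \
        --remote my-remote --remote-store source-ds --store target-ds \
        --group-filter "group:${group}"
done
```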
  14. Move image

    If you are on ZFS volumes (not .raw), and you have FSTRIM'd the data inside the VM, then zrep/syncoid is also an option to sync only the differences. (In theory the whole node's ZFS storage could be cloned that way - just not the rpool ;) )
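With syncoid (from the sanoid package), such a differential transfer could look like this (dataset and host names are made-up examples):

```shell
# sends only the snapshots/deltas missing on the target; safe to re-run
syncoid rpool/data/vm-100-disk-0 root@othernode:rpool/data/vm-100-disk-0
```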
  15. Very slow restoring

    NFS is not the fastest of the fast, and I don't suppose you are using PBS (Proxmox Backup Server) on NFS - or are you? If so, then you are totally using the wrong setup. If you are using Proxmox with NFS as the storage that you did the vzdump to: well, *my* experience in a big cloud provider's NFS backup...
  16. Proxmox Backup Server Metrics

    Does PBS have an equivalent of vzdump hooks somewhere?
  17. Moving snapshots between local PBS datastores?

    I know this would be technically possible, as it's basically a selective sync job, followed by a prune, followed by a GC 24:05 later. The question is whether there is a tool or procedure/method implementing it yet, and if not, any pointers? Reason: I'm wanting to move historical backup snapshots to a...
  18. [SOLVED] Q: are sync jobs able to merge snapshots?

    Seems this is working as I described wanting to do above, with an added sync job (from both) to a remote PBS
  19. How to set up a remote PBS?

    Put in a proxy that will filter the URLs, or only allow access from specific remote hosts?
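The host-allow-list variant can be as simple as a firewall rule in front of the PBS API port (8007); the address below is a placeholder from the documentation range:

```shell
# permit only the trusted remote PBS/PVE host on 8007, drop everyone else
iptables -A INPUT -p tcp --dport 8007 -s 203.0.113.10 -j ACCEPT
iptables -A INPUT -p tcp --dport 8007 -j DROP
```

URL-level filtering would instead need a TLS-terminating reverse proxy in front of PBS.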
  20. Zpool atime turned off effect on Garbage Collection

    Let me just understand something w.r.t. this: *IF* I turned off atime (i.e. atime=off), then my GC "should" delete all chunks older than 24h, as no atime would be updated during the GC marking phase; thus my backups older than 24 hours that I wanted to keep will be corrupted, correct?
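The properties GC depends on can be checked and (re)enabled per dataset; property names as in `zfs(8)`, pool/dataset name is a placeholder:

```shell
# GC relies on atime updates: atime=on with relatime=on, never atime=off
zfs get atime,relatime tank/pbs-datastore
zfs set atime=on tank/pbs-datastore
zfs set relatime=on tank/pbs-datastore
```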