Search results

  1. PBS setup on server with 4 x HD - recommended ZFS setup option

    Just note that a 4-way RAID1 (i.e. 4 mirrored copies of the same data) is different from a 4-disk RAID10 (i.e. a stripe of 2x 2-disk mirrors). The 4-way RAID1 is capable of surviving 3 disk failures, while the 4-disk RAID10 is capable of surviving 2 disk failures, just not the two making up a specific vdev...
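
    For reference, a minimal sketch of the two layouts as `zpool create` commands (the pool name `tank` and the device names are placeholders):

    ```bash
    # 4-way RAID1: a single vdev of four mirrored disks (survives 3 failures)
    zpool create tank mirror sda sdb sdc sdd

    # 4-disk RAID10: a stripe of two 2-disk mirrors (survives 2 failures,
    # as long as both disks of one mirror vdev are not lost together)
    zpool create tank mirror sda sdb mirror sdc sdd
    ```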
  2. Planning new PBS - recommendations are welcome

    You are using 6x units; perhaps consider dRAID, though the "magical" better number is rather 7+, which then gives you the option of a hot spare to replace the failed unit immediately, with the physical replacement done in the next available maintenance slot. I'd then go for a 5-disk RAID-Z1 + 1 hot spare...
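
    A rough sketch of both options (pool and device names are placeholders; the dRAID layout string follows the `draid<parity>:<data>d:<children>c:<spares>s` convention and is worth double-checking against the OpenZFS docs):

    ```bash
    # 5-disk RAID-Z1 plus one hot spare (6 disks total)
    zpool create tank raidz1 sda sdb sdc sdd sde spare sdf

    # 7-disk dRAID1 with one distributed spare (rebuild starts immediately)
    zpool create tank draid1:5d:7c:1s sda sdb sdc sdd sde sdf sdg
    ```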
  3. ZFS Datastore - Fragmentation with SSD?

    In the context of SSDs, all writes are "fragmented" by design by the storage controllers on/inside the NVMe/SSDs. In the context of ZFS, it's ZFS that was not able to find a contiguous block of size Y and had to split it into smaller chunks (ashift size) and spread those all over the storage...
  4. CIFS-mounted Datastore: Do or don't?

    I'd test that portion, as PBS relies on a POSIX filesystem with last-access-time support for "touching" chunks to check which chunks can be "expired". Rather set up a 2nd offsite provider/site with sync jobs pulling the backups for a 2nd *copy*
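
    A quick way to test that portion, assuming the share is mounted at a hypothetical /mnt/cifs-datastore:

    ```bash
    # Create a file, read it, and check whether the access time moves forward.
    cd /mnt/cifs-datastore
    echo probe > atime-probe
    stat -c 'atime: %x' atime-probe
    sleep 2
    cat atime-probe > /dev/null
    stat -c 'atime: %x' atime-probe   # a newer atime here suggests the share tracks access times
    ```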
  5. PBS setup on server with 4 x HD - recommended ZFS setup option

    Perhaps the installer GUI enforces that (which also has another issue, like forcing same-size disks instead of using the smallest... but that is a different issue). A RAIDZ3 with 4 disks is like having a 4-way mirror, as in both cases you can lose 3 disks and it should still function... just the RAIDZ3...
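
    As a sketch, the two equivalent-redundancy layouts in `zpool create` terms (names are placeholders):

    ```bash
    zpool create tank raidz3 sda sdb sdc sdd    # RAIDZ3 across 4 disks
    zpool create tank mirror sda sdb sdc sdd    # 4-way mirror
    ```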
  6. (Mass) Import/converting lxc-tar & qemu-vma into PBS storage?

    Yeah, I was (opportunistically) wondering/asking, but after a month, as my limited space on the NFS for the vzdumps only really allowed for 3 dumps (a daily, a weekly and a monthly), the PBS storage was already more "complete" compared to the vzdumps - it just would've been nice to "pre-seed" it with...
  7. InfluxDB metrics for PBS?

    I don't yet see anything on the roadmap, and was wondering if there is an InfluxDB metrics feed like the one for PVE?
  8. [SOLVED] Traffic control method used - sync pull (side) jobs not limited?

    Ah! That explains it, thank you! Missed that :facepalm:
  9. [SOLVED] Traffic control method used - sync pull (side) jobs not limited?

    I'm busy syncing a massive number of snapshots to a new PBS, and the destination (which initiated the sync job) doesn't seem to apply the filter to the inbound traffic; but when I add it to the source, it does apply and limit the outbound traffic. I initially thought it was...
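
    For anyone searching later, a hedged sketch of defining the rate limit on the *source* PBS (the rule name and network are examples):

    ```bash
    # Limit traffic exchanged with clients in 192.0.2.0/24
    proxmox-backup-manager traffic-control create limit-sync \
        --network 192.0.2.0/24 \
        --rate-in 50MB --rate-out 50MB
    ```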
  10. Moving snapshots between local PBS datastores?

    Thank you. Looking at the 2.1 group filters, the docs aren't clear on whether I can have multiple group filters per sync job, i.e. I want to sync vm/1, ct/23, vm/456, ct/573 and ct/123456 only, so I guess I have to create 5 sync jobs, each with a specific group filter? (unless I'm into creating a...
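
    If the CLI accepts the repeated-option array syntax I believe it does (worth verifying against the 2.1 docs), multiple filters on a single job would look roughly like this (job name "pull-selected" is hypothetical):

    ```bash
    # One sync job, several group filters (filters are OR-ed together)
    proxmox-backup-manager sync-job update pull-selected \
        --group-filter group:vm/1 \
        --group-filter group:ct/23 \
        --group-filter group:vm/456
    ```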
  11. Move image

    If you are on ZFS volumes (not .raw), and you have FSTRIM'd the data inside the VM, then zrep/syncoid is also an option to sync only the differences. (In theory the whole node's ZFS storage could be cloned that way - just not the rpool ;) )
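
    A minimal syncoid sketch, assuming SSH access between the nodes and example dataset names:

    ```bash
    # Sends only the incremental differences since the last common snapshot
    syncoid rpool/data/vm-100-disk-0 root@other-node:tank/vm-100-disk-0
    ```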
  12. Very slow restoring

    NFS is not the fastest of the fast, and I guess you are not using PBS (Proxmox Backup Server) on NFS - or are you? If so, then you are totally using the wrong setup. If you are using Proxmox with NFS as the storage that you did the vzdump to: well, *my* experience with a big cloud provider's NFS backup...
  13. Proxmox Backup Server Metrics

    Does PBS have an equivalent of vzdump hooks somewhere?
  14. Moving snapshots between local PBS datastores?

    I know this would be technically possible, as it's basically a selective sync job, followed by a prune, followed by a GC 24:05 later. The question is whether there is a tool or procedure/method implementing it yet, and if not, any pointers? Reason: I want to move historical backup snapshots to a...
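
    A sketch of the manual procedure, under the assumption that the local PBS is registered against itself as a "remote" (datastore names, job names and credentials are placeholders):

    ```bash
    # 1. Point a "remote" at this very host so a sync job can copy
    #    between the local datastores "fast" and "archive".
    proxmox-backup-manager remote create local-self \
        --host 127.0.0.1 --auth-id 'sync@pbs' --password 'xxx'

    # 2. Pull only the wanted group(s) into the archive datastore.
    proxmox-backup-manager sync-job create move-historic \
        --remote local-self --remote-store fast --store archive \
        --group-filter group:vm/100

    # 3. After pruning the originals on "fast", reclaim the space.
    proxmox-backup-manager garbage-collection start fast
    ```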
  15. [SOLVED] Q: are Sync jobs able to merge snapshots?

    Seems this is working the way I described above, with the added sync job (from both) to a remote PBS
  16. How to set up a remote PBS?

    Put in a proxy that will filter the URLs, or only allow access from specific remote hosts?
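
    The host-restriction half could be as simple as a firewall rule in front of the PBS API/web port (8007); the network below is a placeholder:

    ```bash
    # Allow the PBS web/API port only from one trusted remote network
    iptables -A INPUT -p tcp --dport 8007 -s 203.0.113.0/24 -j ACCEPT
    iptables -A INPUT -p tcp --dport 8007 -j DROP
    ```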
  17. Zpool atime turned off: effect on Garbage Collection

    Let me just understand something w.r.t. this: *IF* I turned off atime (i.e. atime=off), then my GC "should" delete all chunks older than 24h, as no atime would be updated during the GC marking phase; thus my backups older than 24 hours that I wanted to keep will be corrupted, correct?
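
    To check what the datastore's dataset is actually doing (the dataset name is a placeholder):

    ```bash
    zfs get atime,relatime tank/pbs-datastore

    # relatime updates atime at most ~once per 24h, which still fits
    # inside GC's 24h05m grace window:
    zfs set atime=on tank/pbs-datastore
    zfs set relatime=on tank/pbs-datastore
    ```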
  18. Selective snapshot/VMID sync

    I'm wondering if I'm the only person looking for such a feature or not. The requirement is that some critical clients have longer data retention and extra offsite backup requirements, so I was hoping to find a quick fix to set up a remote PBS that will sync only those selective VMID groups from...
  19. Installation Options

    you did check the `.chunks` directory?
  20. [SOLVED] Why ZFS as datastore for PBS?

    You mean other than a failed disk:
    - ZFS proper redundancy: replace disk, datastore "intact"
    - All other *filesystems*: restore datastore from elsewhere
    - LVM/mdraid: okay, replace disk, filesystems on top OK, but now you need a separate filesystem on top, and you have two sets of...
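
    The ZFS case from that list, sketched (pool and device names are examples):

    ```bash
    zpool status tank            # identify the failed device
    zpool replace tank sdc sdx   # resilver onto the new disk; the
                                 # datastore stays online throughout
    ```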