Recent content by RolandK

  1. RolandK

    verify after sync/replication ?

ah, thanks for the pointer, i searched bugzilla but missed this ticket. but out of curiosity - when newly pulled data chunks are already verified, why do they appear unverified at the sync target when they had not been verified at the source before? shouldn't pbs set them as verified then?
  2. RolandK

    verify after sync/replication ?

hi, as it seems syncing a datastore to another pbs will "inherit" the verify state of the source snapshot(s). is there a way to unset that verified flag on/after transfer so verify can be run AFTER transfer/sync, so we can verify freshly transferred snapshots and can use re-verify-after-30d...
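    For illustration, a rough Python sketch of the per-snapshot decision a verify job with "ignore verified" / "re-verify after N days" settings would have to make - this is not PBS code, all names and exact semantics here are assumptions for the sake of the example:

```python
# toy decision logic for "should this snapshot be verified (again)?"
# not PBS code - names and exact semantics are assumptions for illustration
from datetime import datetime, timedelta, timezone

def needs_verify(last_ok_verify, ignore_verified, outdated_after_days, now=None):
    """last_ok_verify: datetime of the last successful verification, or None."""
    now = now or datetime.now(timezone.utc)
    if last_ok_verify is None:
        return True                    # never verified (or last verify failed)
    if not ignore_verified:
        return True                    # verify everything on every run
    if outdated_after_days is None:
        return False                   # verified once, never re-checked
    # re-verify only when the last good verification is older than the window
    return now - last_ok_verify > timedelta(days=outdated_after_days)

# a freshly synced snapshot that "inherited" a recent verified state would be
# skipped here, which is exactly the behaviour the post above asks to avoid:
print(needs_verify(datetime.now(timezone.utc), ignore_verified=True,
                   outdated_after_days=30))    # -> False
```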
  3. RolandK

    single unencrypted file in encrypted repo - where does it belong to ?

    ah, ok, that makes sense. thanks for explaining. i removed the atime safety check mark and on the next GC the file got removed.
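    As background to the atime safety check mentioned above, a very rough Python sketch of a mark-and-sweep garbage collection that relies on atime - the paths, grace period and structure are made-up simplifications, not the actual PBS implementation:

```python
# rough mark-and-sweep sketch relying on atime - not the PBS implementation;
# the path and the grace period below are made up for illustration
import os, time

CHUNK_DIR = "/datastore/.chunks"            # hypothetical chunk store path
CUTOFF = time.time() - (24 * 3600 + 300)    # e.g. ~24h5m grace period

def mark(referenced_chunk_paths):
    # phase 1: touch the timestamps of every chunk still referenced by an index
    for path in referenced_chunk_paths:
        os.utime(path, None)

def sweep():
    # phase 2: anything whose atime is still older than the cutoff was not
    # marked, so it is considered unreferenced and can be removed
    removed = 0
    for root, _dirs, files in os.walk(CHUNK_DIR):
        for name in files:
            path = os.path.join(root, name)
            if os.stat(path).st_atime < CUTOFF:
                os.remove(path)
                removed += 1
    return removed
```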
  4. RolandK

    single unencrypted file in encrypted repo - where does it belong to ?

i have made a weird observation that i cannot explain to myself. i switched my repos to encrypted a while ago, and all unencrypted backups have been purged and removed by garbage collection. in the webui all backup snapshots show up as encrypted. to be sure that everything is encrypted, i...
  5. RolandK

    Recent PVE 9 kernel update - OOPSfest in mm/hugetlb

>i thought that's what i was doing here, reporting it, tbh yes, thanks, but for a bug to be reproduced (which helps a lot with resolving it), a "recipe" (i.e. a detailed description of the setup and what to do) is needed. so if you can provide one, i'm sure it's very welcome.
  6. RolandK

    Recent PVE 9 kernel update - OOPSfest in mm/hugetlb

there is a bug ticket on something hugepage related at https://bugzilla.proxmox.com/show_bug.cgi?id=7052 - if you have a reproducer for a hugepage issue, i'm sure it would be worth reporting it
  7. RolandK

    Proxmox Backup Server 4.1 released!

    ok. thanks for making it clear
  8. RolandK

    Proxmox Backup Server 4.1 released!

not sure if this is a result of the upgrade or how it looked before - but is it normal/intentional that the size and encryption status for client.log.blob is not shown and that both blob files are not downloadable? when unencrypted, they can be downloaded
  9. RolandK

    Poor GC/Verify performance on PBS

you may try the new verification reads and workers options, they considerably improve verification speed. from a first test i see 2-3x better performance - which is equivalent to the findings reported at https://bugzilla.proxmox.com/show_bug.cgi?id=5035#c2
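    To illustrate why extra read/worker threads help here, a toy Python sketch of parallel chunk verification (chunks are assumed to be stored under their SHA-256 digest, so verifying means re-hashing and comparing) - a simplified stand-in, not the actual PBS verifier; the path and worker count are made up:

```python
# toy parallel chunk verification - a simplified stand-in, not the PBS verifier;
# chunk files are assumed to be named after the SHA-256 digest of their content
import hashlib, os
from concurrent.futures import ThreadPoolExecutor

CHUNK_DIR = "/datastore/.chunks"   # hypothetical path
WORKERS = 8                        # more workers overlap disk reads and hashing

def verify_chunk(path):
    with open(path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    return digest == os.path.basename(path)

def verify_all():
    paths = [os.path.join(root, name)
             for root, _dirs, files in os.walk(CHUNK_DIR) for name in files]
    with ThreadPoolExecutor(max_workers=WORKERS) as pool:
        ok = sum(pool.map(verify_chunk, paths))
    return ok, len(paths)
```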
  10. RolandK

    Improve verification process - change CPU

you may try the new verification reads and workers options, they considerably improve verification speed. from a first test i see 2-3x better performance - which is equivalent to the findings reported at https://bugzilla.proxmox.com/show_bug.cgi?id=5035#c2
  11. RolandK

    MDRAID & O_DIRECT

yes, raid won't make backups obsolete. but from my admin perspective, it would be a logical consequence to regularly check/scrub 100% of your system disks and not only 99% (i.e. 100% minus boot-part minus efi-part), even if it would be possible to recreate them. you could have that included into your...
  12. RolandK

    MDRAID & O_DIRECT

>afaik, i/o errors typically happen on read, not on write. an additional note on this: https://www.enterprisestorageforum.com/hardware/drive-reliability-studies/ "The authors found that final read errors (read errors after multiple retries) are about two orders of magnitude more frequent in terms of...
  13. RolandK

    MDRAID & O_DIRECT

yes, you are right, chances are low, but i'm not sure if we should really call it "over-engineered" to check the boot env for disk issues and to have a "zfs|btrfs scrub" equivalent for the boot env. for all those who worry, here are some ideas how the check could be done proactively: 1. patrol read of all...
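    As a hedged sketch of idea 1 ("patrol read"), a small Python routine that sequentially reads a partition end to end, so the drive has to return - or fail on - every sector; the device path is only an example and needs to match your boot/EFI partition:

```python
# sketch of a "patrol read" over a partition - the device path is an example
# and must be adjusted; reading raw block devices requires root privileges
import sys

def patrol_read(device="/dev/sda1", block_size=4 * 1024 * 1024):
    errors = 0
    offset = 0
    with open(device, "rb", buffering=0) as dev:
        while True:
            try:
                block = dev.read(block_size)
            except OSError as err:               # a failing sector surfaces here
                print(f"read error at byte {offset}: {err}", file=sys.stderr)
                errors += 1
                offset += block_size
                dev.seek(offset)                 # skip past the bad region and go on
                continue
            if not block:                        # end of device reached
                break
            offset += len(block)
    return errors
```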
  14. RolandK

    MDRAID & O_DIRECT

yes, but while zfs is getting a regular scrub, silent bitrot could indeed happen on partition 1+2 on a system which is rarely touched/updated. such an issue will hit you when you don't expect it. chances are low indeed, but the sectors of partition 1+2 don't get a regular check and this is...