Recent content by molnart

  1.

    HBA - PCIe pass-through HP380g9 w/ P840 card for Xpenology "works" but pegs hosts processor.

    Over at Reddit someone managed to get it to work... by throwing out the P840 and replacing it with an LSI 9300. So far that is the closest thing to a solution I've seen, and I am planning to do the same once I decide whether I really need that HP machine in my homelab
  2.

    ESTALE: Stale file handle

    Looks like my problems are caused by high wait IO. PBS tries to update the access time on the files, but it times out due to the high wait IO. I did some tweaks to the ZFS caching and allocated some more RAM to the ZFS host, so my wait IO has decreased and garbage collection can finish. Still takes 4-7...
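    The "tweaks to the ZFS caching" mentioned above can be sketched as capping the ARC; the 8 GiB figure and file paths are illustrative assumptions, not values from the post:

    ```shell
    # Cap the ZFS ARC so it stops competing with PBS/NFS for memory.
    # 8 GiB is an example figure; zfs_arc_max is specified in bytes.
    ARC_MAX=$((8 * 1024 * 1024 * 1024))
    echo "options zfs zfs_arc_max=${ARC_MAX}" > /etc/modprobe.d/zfs.conf  # persist across reboots
    echo "${ARC_MAX}" > /sys/module/zfs/parameters/zfs_arc_max            # apply without a reboot
    ```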
  3.

    ESTALE: Stale file handle

    So according to my current investigation of the high wait IO:
    - it is currently caused by the Storj storage node; turning the storagenode off makes wait IO drop immediately from 40 to 2-3
    - the wait IO increase in April (when I was not running ZFS or the storagenode) was caused by moving the datastore...
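    A hedged sketch of how this kind of per-process wait-IO attribution is typically done (assumes the sysstat and iotop packages are installed; intervals and counts are arbitrary):

    ```shell
    iostat -x 5 3        # per-device utilization and await times
    iotop -obn 3         # per-process I/O in batch mode (spot the storagenode)
    zpool iostat -v 5 3  # per-vdev throughput on the ZFS pool
    ```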
  4.

    ESTALE: Stale file handle

    Something's definitely wrong here... I have created a new datastore; just creating it took 42 minutes. Then I started a garbage collection job on the newly created, empty datastore and it's been running for an hour already. The wait IO on the NFS server has been constantly high for months, but I don't...
  5.

    ESTALE: Stale file handle

    My ZFS config is posted here: https://forum.proxmox.com/threads/estale-stale-file-handle.120000/post-696875 - both options are enabled. Also, I really don't know why it takes so long... it increased in April from 10 minutes to 9 hours between two runs. Back then the datastore was on a single...
  6.

    ESTALE: Stale file handle

    I did try to unmount and remount several times, and I think basically the same happens during a reboot. My case is a bit different from @cpulove's: I understood he was unable to add the datastores and make any backups. For me backups work fine (there are 20+ of them running each day), it's the...
  7.

    ESTALE: Stale file handle

    The pool is alright, scrubbed regularly; coincidentally the last scrub just finished today. Both the NFS server and PBS have been rebooted before running the garbage collection (PBS turned off, NFS server rebooted, PBS started). What I am trying to figure out is what operation PBS is trying...
  8.

    ESTALE: Stale file handle

    This is driving me nuts: my PBS storage is slowly growing and I have not had a successful garbage collection for years. I find it mildly interesting that it's always a different file that is failing. Restarting the PBS server does not seem to help. Also, I have checked the file that came back...
  9.

    Problem: Disk read/write request responses are too high (read > 20 ms for 15m or write > 20 ms for 15m)

    What is the best practice for syncing long-term backups to rotating disks? Garbage collection on a RAID-Z pool via NFS takes 10+ hours and often fails. I could use an SSD as the primary backup target for the most recent copies, but I still need to sync them to larger-capacity storage and...
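    One common pattern for the SSD-to-HDD tiering described above is a PBS sync job that pulls from the fast datastore into the large one. A hedged CLI sketch; the datastore, remote, and job names here are made up, not from the post:

    ```shell
    # Register the PBS instance holding the SSD datastore as a remote,
    # then pull its contents into the local rotating-disk datastore nightly.
    proxmox-backup-manager remote create fast-pbs \
        --host pbs-ssd.example.lan --auth-id 'sync@pbs' --password 'secret'
    proxmox-backup-manager sync-job create nightly-tier \
        --store big-hdd --remote fast-pbs --remote-store ssd-store \
        --schedule 'daily'
    ```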
  10.

    ESTALE: Stale file handle

    I have a similar problem. I've been using PBS on a remote NFS share for years without problems, but recently I changed my storage setup, so the backup target is now a ZFS dataset, still mounted over NFS. Since moving to ZFS I am getting these errors during garbage collection: TASK ERROR: update...
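    The failing "update atime" step can be reproduced by hand to check whether explicit atime updates (which garbage collection relies on to mark chunks in use) succeed at all. A minimal sketch: DIR is a temporary stand-in here; in practice you would point it at the NFS-mounted datastore:

    ```shell
    # Reproduce the atime update PBS garbage collection performs on each chunk.
    DIR=$(mktemp -d)
    f="$DIR/chunk-atime-test"
    touch "$f"
    touch -a -d '2020-01-01 00:00:00' "$f"   # give the file a stale atime
    before=$(stat -c %X "$f")
    touch -a "$f"                            # bump atime, as GC does per chunk
    after=$(stat -c %X "$f")
    [ "$after" -gt "$before" ] && echo "atime update succeeded"
    rm -rf "$DIR"
    ```

    Explicitly setting the access time this way works even on `noatime` mounts; it is read-triggered implicit updates that `noatime` suppresses, so a failure here points at the NFS layer rather than mount options.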
  11.

    HBA - PCIe pass-through HP380g9 w/ P840 card for Xpenology "works" but pegs hosts processor.

    I have the same (or a similar) issue. Passing through the P840 with ROM-BAR enabled just gives me an endless "Configuring controller..." message. Disabling ROM-BAR seems to help, but after a while iLO throws a critical error on the controller and the fans spin up to 100%. My goal was to use ZFS on the...
  12.

    Issues restoring PVE via Clonezilla

    Surely I did. I have been running Proxmox for ~5 years and have migrated disks at least 4 times, and it always worked. I never used anything other than thin LVM, as the machine never supported more than one drive.
  13.

    Issues restoring PVE via Clonezilla

    I want to move my PVE host to a larger disk. I did this process in the past without any problems (actually I have even restored to a smaller drive, with some trial and error), but now it fails.
    - I have created a Clonezilla full-disk backup on my NAS
    - when trying to restore the Clonezilla image I...
  14.

    Trying to back up VM to a share hosted by it

    I have just installed PBS and am trying to figure out how it works. Mostly it's OK, but there is one thing I'd like to solve: my PBS datastore uses an NFS share mounted via fstab. The share is provided by a VM, and the actual data is stored on disks on a passed-through RAID controller. When I try to...
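    A minimal sketch of such an fstab-mounted NFS datastore; the host name and paths are placeholders, not the poster's actual configuration:

    ```shell
    # /etc/fstab entry — NFS share used as a PBS datastore; names are placeholders.
    # "hard" avoids silent data errors during outages; "_netdev" delays the mount
    # until the network (and here, the serving VM) is reachable.
    # nas.example.lan:/export/pbs  /mnt/pbs-datastore  nfs  hard,vers=4.2,_netdev  0  0
    ```

    The chicken-and-egg problem remains: if the share lives on a VM that PBS is backing up, the mount is unavailable whenever that VM is down.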