Search results

  1. rholighaus

    [SOLVED] - ZFS disk usage too high?

    Try using zfs list -o space for more details.
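    A minimal sketch of the command mentioned above; the pool name rpool is an assumption taken from later posts in this list:

      # 'zfs list -o space' is shorthand for a fixed column set that breaks USED
      # down into snapshot, dataset, refreservation and child usage:
      # NAME  AVAIL  USED  USEDSNAP  USEDDS  USEDREFRESERV  USEDCHILD
      zfs list -o space rpool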
  2. rholighaus

    Oh no! "Error: checksum verification failed. Entering rescue mode ..."

    Hi jnannow, check the thread in the article below, in particular the comments. I have used the procedure to move /boot to both USB sticks and SD cards on various PVE servers and they run very well, both with PVE 5.x and 6.x...
  3. rholighaus

    znapzend backup generator script

    I have znapzend running on PVE 5.2 but see no reason why it shouldn't run on PVE 6, as it uses standard ZFS features and commands.
  4. rholighaus

    ZFS replication via migration network - sporadic problems finding target IP address

    Hi Fabian, I seem to get regular temporary failures on one node:
      Sep 27 05:51:39 carrier-3 pmxcfs[3697]: [status] notice: received log
      Sep 27 05:51:53 carrier-3 pmxcfs[3697]: [status] notice: received log
      Sep 27 05:52:45 carrier-3 corosync[3887]: error [TOTEM ] FAILED TO RECEIVE
      Sep 27...
  5. rholighaus

    ZFS replication via migration network - sporadic problems finding target IP address

    Hi Fabian, thanks for that tip. I think I found the culprit:
      Thu Sep 26 11:22:08 CEST 2019 ip: '10.1.2.4'
      Thu Sep 26 11:22:39 CEST 2019 no quorum
      Thu Sep 26 11:23:10 CEST 2019 ip: '10.1.2.4'
    Here's my config:
      root@carrier-3:/var/log# pveversion --verbose
      proxmox-ve: 5.4-2 (running...
  6. rholighaus

    ZFS replication via migration network - sporadic problems finding target IP address

    Thank you Fabian! Is there any way to make PVE log these actions so I can start debugging? I had 5 incidents since midnight. Is there a timeout that this resolving could be running into? The only affected target node is a (slow) HP Microserver with 4 spinning WD Red disks...
  7. rholighaus

    ZFS replication via migration network - sporadic problems finding target IP address

    We have a cluster of 2 production nodes and 2 PVE nodes that serve only as ZFS replication destinations. As recommended, we recently created a separate migration network and entered it in /etc/pve/datacenter.cfg:
      # use dedicated migration network
      migration: secure,network=10.1.2.0/24...
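    For context, a sketch of how that stanza is structured (values as quoted above):

      # /etc/pve/datacenter.cfg, format: migration: <secure|insecure>[,network=<CIDR>]
      # 'secure' tunnels the migration traffic through SSH; the CIDR selects
      # which interface the migration uses.
      migration: secure,network=10.1.2.0/24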
  8. rholighaus

    Proxmox VE 6.0 released!

    Hi Fabian, Maybe I don't understand this fully, but the servers we run are not UEFI-capable. Is there an alternative to GRUB in that case?
  9. rholighaus

    Proxmox VE 6.0 released!

    Hi Fabian, Thank you for your response. Unfortunately, all of our servers are HP ProLiant DL360 Gen8 or HP Microserver Gen8 and do not support UEFI boot. I worked around it by installing a separate small boot SSD in the system that would no longer boot, but I am concerned about upgrading one of...
  10. rholighaus

    Proxmox VE 6.0 released!

    Just upgraded my server running 5.4, ZFS and a four-disk setup with two striped mirrors, booting from the first mirror. Hardware is an HP Microserver Gen8 with the disk controller in AHCI SATA mode. So far, every 5.4 kernel update had worked and the system would be able to reboot. Now, after...
  11. rholighaus

    Feature request: VM migration (restart mode)

    Done: https://bugzilla.proxmox.com/show_bug.cgi?id=2259
  12. rholighaus

    Feature request: VM migration (restart mode)

    In a replicated ZFS environment, containers can easily be migrated from one node to another, as the only supported migration mode for a running container is restart mode. It works quickly and reliably if replication happens e.g. hourly or more often. For VMs, the system does not allow selecting...
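    For containers, restart-mode migration can be triggered from the CLI; a minimal sketch, assuming container 104 and target node carrier-3 from other posts in this list:

      # Shut the container down, migrate it (reusing the replicated ZFS state),
      # then start it on the target; --timeout bounds the shutdown wait.
      pct migrate 104 carrier-3 --restart --timeout 180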
  13. rholighaus

    Running hooks when incoming migration is finished

    I now tried to check for the "lock: migrate" line in the configuration, but it looks like it is only there AFTER the post-stop hook has been executed and is already removed BEFORE the pre-start hook is executed. On the migration source node:
      /var/lib/vz/snippets/pve-hook called with pve_number=119 and...
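    A minimal sketch of the kind of check described above (script path and message are illustrative; as the post notes, the lock line may not be visible at these phases):

      #!/bin/bash
      # PVE calls hook scripts with two arguments: the guest ID and the phase
      # (pre-start, post-start, pre-stop, post-stop).
      vmid="$1"; phase="$2"
      conf="/etc/pve/lxc/${vmid}.conf"
      if grep -q '^lock: migrate' "$conf" 2>/dev/null; then
          echo "guest $vmid is locked for migration during phase $phase"
      fi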
  14. rholighaus

    Running hooks when incoming migration is finished

    Thanks - found that last night after some googling :-)
  15. rholighaus

    Running hooks when incoming migration is finished

    Could you kindly provide us with documentation on how to configure e.g. a post-start hook for a VM? I cannot find anything in the documentation apart from the announcement that this is supported since 5.4. Thank you!
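    For reference, a hook script is attached per guest rather than per event; a sketch, assuming the script from the posts above was uploaded as a snippet on the local storage:

      # Attach the script to the guest; every phase (pre-start, post-start,
      # pre-stop, post-stop) then invokes the same script.
      qm set 119 --hookscript local:snippets/pve-hook   # VM
      pct set 119 --hookscript local:snippets/pve-hook  # container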
  16. rholighaus

    Running hooks when incoming migration is finished

    It looks like pre-stop is not supported for LXC at the moment. post-stop doesn't seem to see the migration lock in the config; it's already gone. Neither does a pre-start on the machine it migrated to.
  17. rholighaus

    Running hooks when incoming migration is finished

    Hi Dominik, can you give me a hint on how to detect the migrate lock on LXC in such a case?
  18. rholighaus

    Running hooks when incoming migration is finished

    Sorry for being ignorant: Are there any hooks for LXC containers?
  19. rholighaus

    ZFS Replication running away?

    Looks like here is one explanation - gotta dig deeper into why there is such a huge compressratio (the subvolume holds a Zimbra file system which uses OpenLDAP, which makes use of sparse files, so that could be one reason):
      rpool/data/subvol-104-disk-2 refcompressratio 27.51x...
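    To inspect the ratios mentioned above, a minimal sketch (dataset name taken from the post):

      # refcompressratio covers only the data the dataset itself references;
      # compressratio covers its used space, including snapshots.
      zfs get refcompressratio,compressratio rpool/data/subvol-104-disk-2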
  20. rholighaus

    ZFS Replication running away?

    By now we have reached 770GB transfer:
      2019-05-20 17:01:31 104-1: 17:01:31 771G rpool/data/subvol-104-disk-2@__replicate_104-1_1558357140__
      2019-05-20 17:01:32 104-1: 17:01:32 771G rpool/data/subvol-104-disk-2@__replicate_104-1_1558357140__
      2019-05-20 17:01:33 104-1: 17:01:33 771G...