Search results

  1. mir

    LVM on iscsi

    This is true. As soon as you have attached a disk to a VM, the content type of the storage shifts from 'none' to 'images', in which case the storage will show as available for 'images'
  2. mir

    zpool trim in Proxmox 6.1

    It is this feature: https://www.illumos.org/issues/1701 which can be found in this commit: https://github.com/openzfs/zfs/commit/1b939560be5c51deecf875af9dada9d094633bf7 But as the documentation states, this feature is only available for SSDs, so if the pool does not contain SSDs this...
  3. mir

    Opinions | ZFS On Proxmox

    Another thing to take into consideration when choosing between (striped) mirrors or (striped) raidz[x] is the overhead of calculating parity for raidz[x], especially when resilvering the pool. Calculating parity tends to require a CPU with higher clock speeds since parity...
  4. mir

    Kernel panic after migration from Intel <> AMD

    There is a reason why running different CPU models and generations is unsupported and uncertified on VMware as well!
  5. mir

    proxmox HA on shared SAN storage

    See this thread: https://forum.proxmox.com/threads/zfs-over-iscsi-network-ha.66137/
  6. mir

    ZFS over iSCSI Network HA

    Use stackable switches and create an LACP bond with connections to more than one switch (obviously the storage box should likewise have connections to more than one switch) and you should be failure-proof.
  7. mir

    ZFS over iSCSI Network HA

    Why not LACP? If one NIC fails, the take-over is instantaneous.
  8. mir

    ZFS over iSCSI Network HA

    Yes, it uses the same migration features as any other supported storage in Proxmox. Replication is handled on the storage server, not by Proxmox. What do you mean by 'shared storage controller fails over to another controller'?
  9. mir

    ZFS over iSCSI Network HA

    I wrote the code ;-)
  10. mir

    Previous upgrade seems to have broken online and offline migration with lvm

    I cannot replicate it now. It was most likely a combination of a VM started with different versions of the kernel and various packages which, for some reason, was not able to start again ;-(
  11. mir

    Previous upgrade seems to have broken online and offline migration with lvm

    Last working kernel: pve-kernel-5.3.13-3-pve:amd64. I will try installing this kernel and see if it fixes the problem.
  12. mir

    ZFS over iSCSI Network HA

    Since LACP is a pure network mechanism and completely unrelated to iSCSI, it works 100%.
  13. mir

    Previous upgrade seems to have broken online and offline migration with lvm

    No, same error: lvchange -ay qnap/vm-153-disk-0 device-mapper: create ioctl on qnap-vm--153--disk--0 LVM-RCevIXI8i5huDYro1QZ0fdlcZWWqYxm7DIe1JfkZcJK5iskK4TCa7rX8g5Kvwi3c failed: Device or resource busy
  14. mir

    Previous upgrade seems to have broken online and offline migration with lvm

    This specific setup has been working unchanged since Proxmox 1.9. Did you test with an LVM marked 'Shared'? esx1:~# lvs |grep qnap vm-153-disk-0 qnap -wi------- 8.00g esx2:~# lvs |grep qnap vm-153-disk-0 qnap -wi-ao---- 8.00g esx1:~# lvdisplay qnap --- Logical volume --- LV Path...
  15. mir

    Previous upgrade seems to have broken online and offline migration with lvm

    Hi all, the previous upgrade seems to have broken online and offline migration with shared LVM storage on iSCSI. This is the upgrade: Start-Date: 2020-02-11 01:24:26 Commandline: apt upgrade Requested-By: mir (1000) Install: pve-kernel-5.3.18-1-pve:amd64 (5.3.18-1, automatic) Upgrade...
  16. mir

    proxmox 5.4 to 6.1

    I was following these instructions and it was also these instructions I was referring to.
  17. mir

    proxmox 5.4 to 6.1

    Hi all, today I took the plunge and upgraded my cluster from 5.4 to 6.1 using apt update, apt dist-upgrade. The upgrade itself was painless; however, there was one small annoyance when upgrading the corosync qdevice package as explained here...
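
The multi-switch LACP setup recommended in the ZFS over iSCSI Network HA replies (results 6, 7 and 12) can be sketched as an ifupdown config on a Proxmox host. This is a minimal sketch, not a definitive recipe: the NIC names (eno1, eno2), bridge name (vmbr0) and IP address are placeholder assumptions, and your switches must be stacked and configured with a matching LACP port-channel.

```
# /etc/network/interfaces — sketch; eno1/eno2 are assumed NIC names,
# each cabled to a different member of the switch stack.
auto bond0
iface bond0 inet manual
    bond-slaves eno1 eno2
    bond-mode 802.3ad              # LACP
    bond-miimon 100                # link monitoring interval (ms)
    bond-xmit-hash-policy layer3+4 # spread iSCSI flows across links

auto vmbr0
iface vmbr0 inet static
    address 192.168.1.10/24        # placeholder address
    bridge-ports bond0
    bridge-stp off
    bridge-fd 0
```

With this layout, the failure of one NIC, cable or switch keeps the bond up on the surviving link, which is what makes the take-over effectively instantaneous from the iSCSI initiator's point of view.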