Search results

  1. ifdownup2 breaks network on nodes.

    Mmm, nothing obvious that would prevent ifupdown2 from handling it. If you still have them around, you should check the logs from the time the network wouldn't come up to see what went wrong.
  2. ifdownup2 breaks network on nodes.

    Can you share your /etc/network/interfaces? I've switched to ifupdown2 on a 5-node cluster, with very few issues (and a lot of new possibilities). An interfaces sketch follows these results.
  3. ifdownup2 breaks network on nodes.

    Would you happen to have OVS bridges/bonds? ifupdown2 does not support OVS (yet).
  4. two nodes cluster with differents iscsi portals

    Well, multipath should help here. Configure a shared storage with all the portals. Each node will only be able to reach it via its own path (or paths), but globally, every node can access the same storage. A multipath sketch follows these results.
  5. How to disable anonymous relay to the Intranet

    That's not an open relay. That's how an MX is supposed to work.
  6. isci multipath vs bonding 802.3ad

    LACP will only use a single link for one TCP connection, no matter how many links you bond together. With lots of connections the load will be spread. But if you have a single iSCSI LUN (with LVM on top), then there's a single connection from PVE to your SAN, so it will only be able to use the...
  7. ZFS, iSCSI, and multipathd

    First, it would need to be done, as PVE for now uses the QEMU stack (which is probably a non-trivial amount of work). Then, there would be issues with LVM filtering, which you should always adjust to prevent the host from scanning guest volumes. Then also all of the advantages listed here...
  8. ZFS, iSCSI, and multipathd

    Well, there are two ways to add MPIO support. The first would be to switch to the host iSCSI stack, instead of using the one from QEMU, but this would have a few drawbacks. The second is to add MPIO support to the QEMU stack, and this requires either libiscsi or QEMU to implement it. I'm not aware of...
  9. ZFS, iSCSI, and multipathd

    Don't know, but not very relevant, as they are using a completely different stack.
  10. ZFS, iSCSI, and multipathd

    With ZFS over iSCSI, QEMU is managing the iSCSI connection, without the host OS being involved at all. Multipath support would require either libiscsi or QEMU itself to handle MPIO (just as multipathd does for the kernel). Looks like libiscsi will not add it, as it would make it less portable...
  11. ZFS, iSCSI, and multipathd

    AFAIK no, because native iSCSI support in QEMU lacks multipath (a limitation of QEMU, not Proxmox).
  12. Building ProxMox on Devuan

    I don't think that's possible, as Proxmox uses several systemd features.
  13. [SOLVED] Failure to Load LVM2/LVM-Thin after reboot

    Looks like a corrupted LVM thin pool. You'll have to boot into rescue mode and try to fix it with lvconvert --repair (hoping it'll be able to fix it, which is not always the case). A repair sketch follows these results.
  14. Moving disk from NFS to Ceph hangs

    I still think there's a bug. A slow Ceph cluster should still be usable and not hang like this.
  15. HA cluster....what`s the role

    A "clean" reboot is not something unexpected. You can migrate resources before rebooting the node, so it won't affect HA at all. An unexpected reboot, a crash, a non-responding node is something completely different, and this is where the Proxmox HA stack will come into play and recover resources...
  16. HA cluster....what`s the role

    When you cleanly reboot one node, the default behavior is to freeze its resources (this can be configured, see man datacenter.cfg; a sketch follows these results). HA in Proxmox is made to recover from unexpected node issues. Try unplugging the network cable of a node, and you should see the node being self-fenced, and its HA...
  17. Permanently Disable Cluster Quorum Requirement

    I don't think there's a supported way to do this. For this use case, it's recommended to run independent (non-clustered) Proxmox nodes. There's just too much risk of split brain and corruption if you operate on a non-quorate node. This is not specific to Proxmox, though.
  18. [SOLVED] snapshot stopping VM

    The equivalent on EL systems is /etc/sysconfig/qemu-ga
  19. Permanently Disable Cluster Quorum Requirement

    You still need to be quorate to prevent split brain on the /etc/pve filesystem.
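
For the ifupdown2 and bonding threads (results 2 and 6), a minimal /etc/network/interfaces sketch in the ifupdown2 style. The NIC names (eno1/eno2) and the addresses are placeholders, not values taken from the threads:

    auto lo
    iface lo inet loopback

    auto eno1
    iface eno1 inet manual

    auto eno2
    iface eno2 inet manual

    # LACP bond: a single TCP connection still hashes onto one member link
    auto bond0
    iface bond0 inet manual
        bond-slaves eno1 eno2
        bond-mode 802.3ad
        bond-xmit-hash-policy layer3+4

    # Proxmox bridge for guests, on top of the bond
    auto vmbr0
    iface vmbr0 inet static
        address 192.168.1.10/24
        gateway 192.168.1.1
        bridge-ports bond0
        bridge-stp off
        bridge-fd 0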
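
For the two-portal iSCSI thread (result 4), a rough sketch of the usual sequence, assuming open-iscsi and multipath-tools are installed on each node; the portal IPs are placeholders:

    # log in to every portal this node can actually reach
    iscsiadm -m discovery -t sendtargets -p 10.0.1.1
    iscsiadm -m discovery -t sendtargets -p 10.0.2.1
    iscsiadm -m node --login

    # minimal /etc/multipath.conf
    defaults {
        user_friendly_names yes
        find_multipaths     yes
    }

    # the LUN should then show one path per reachable portal
    multipath -ll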
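
For the LVM-thin failure (result 13), the repair attempt mentioned there, sketched with the default Proxmox pool name pve/data (adjust if the pool is named differently) and run from rescue mode:

    # deactivate the thin pool, then let lvconvert rebuild its metadata
    lvchange -an pve/data
    lvconvert --repair pve/data
    lvchange -ay pve/data

    # verify the pool and thin volumes come back
    lvs -a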
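
For the HA reboot behaviour (result 16), the setting referenced via man datacenter.cfg; migrate is shown as one possible policy, freeze being the behaviour described in the post:

    # /etc/pve/datacenter.cfg
    # shutdown_policy can be freeze, failover, conditional, or migrate
    ha: shutdown_policy=migrate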