Search results

  1. Using keepalived to access the cluster WI over a single IP

    Thanks for the guide, I had the same idea. Yes, a native setting at the Datacenter level would be very nice; everything required is almost already there.
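As a rough illustration of the keepalived approach discussed in that thread, a minimal VRRP sketch for floating a single IP across cluster nodes; the interface name, router ID, password, and address are all assumptions, not taken from the thread:

```
# /etc/keepalived/keepalived.conf (sketch; values are placeholders)
vrrp_instance PVE_WUI {
    state BACKUP            # let priority decide the initial MASTER
    interface vmbr0         # assumption: bridge carrying management traffic
    virtual_router_id 51
    priority 100            # use a different priority on each node
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass changeme
    }
    virtual_ipaddress {
        192.0.2.10/24       # the single floating IP for the web UI
    }
}
```

Each node runs keepalived with the same `virtual_router_id`; the node holding MASTER answers on the floating IP, so the web UI stays reachable at one address.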
  2. LDAP Sync with nested Groups

    This is reported as this bug: https://bugzilla.proxmox.com/show_bug.cgi?id=2738
  3. [TUTORIAL] PVE 7.x Cluster Setup of shared LVM/LV with MSA2040 SAS [partial howto]

    There is this tutorial: https://forum.proxmox.com/threads/poc-2-node-ha-cluster-with-shared-iscsi-gfs2.160177/ which I have used to set up a 2-node cluster in our lab: FC SAN (all-flash storage), GFS2 directly on the multipath device (a simple setup). From a features perspective everything seems to be...
  4. LDAP Sync with nested Groups

    Somehow I managed to get group membership sync working, but permissions are not working correctly. If I put permissions on the parent group, only direct members of that group have working permissions; members of a subgroup are missing permissions for that object.
  5. [TUTORIAL] PoC 2 Node HA Cluster with Shared iSCSI GFS2

    Please note the "2" at the end of the line - it should be 0, to disable the systemd fsck of the GFS2 filesystem. GFS2 fsck must be run only when GFS2 is unmounted on all nodes.
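For illustration, an /etc/fstab line with the fsck pass disabled as that post recommends; the device path, mount point, and options are assumptions, only the final field is the point being made:

```
# last field (fs_passno) = 0: never let systemd fsck a mounted GFS2 at boot
/dev/mapper/gfs2-vol  /mnt/gfs2  gfs2  defaults,noatime  0 0
```

A non-zero pass number would let systemd run fsck.gfs2 while other nodes may still have the filesystem mounted, which GFS2 does not tolerate.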
  6. [TUTORIAL] PoC 2 Node HA Cluster with Shared iSCSI GFS2

    I am doing more testing with two_node, and it seems that when one node is unreachable the other keeps running, but it is not possible to power on virtual machines from the GFS2 volume - probably due to dlm locking? Can you confirm this behavior? EDIT: Strange, but I am not able to replicate it any more. Seems it...
  7. [TUTORIAL] PoC 2 Node HA Cluster with Shared iSCSI GFS2

    Yes, this can help in the case of iSCSI. Actually I am using an FC SAN, so I will try that external quorum device on an external VM. Anyway, thanks for the guide; first tests show GFS2 is performing well. I will keep validating this design.
  8. [TUTORIAL] PoC 2 Node HA Cluster with Shared iSCSI GFS2

    No, not yet. This will probably prevent the node from rebooting? But the question is, what will happen when a node loses network connectivity? Will it try to start virtual machines on both nodes? How does GFS2 behave in this situation? Do we get data corruption when a VM tries to power on on both nodes? Or does...
  9. [TUTORIAL] PoC 2 Node HA Cluster with Shared iSCSI GFS2

    Now the question is how to solve the quorum issue when one node goes down. The remaining node will reboot, since quorum is lost and nothing is working.
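The external quorum device mentioned later in this thread addresses exactly this. A rough sketch of the setup, assuming a small VM outside the cluster acts as the QDevice host; the IP address is a placeholder:

```shell
# On the external quorum host (outside the 2-node cluster):
apt install corosync-qnetd

# On each of the two cluster nodes:
apt install corosync-qdevice

# On one cluster node, register the external host as the QDevice:
pvecm qdevice setup 192.0.2.50

# Check that expected votes now include the QDevice:
pvecm status
```

With a third vote provided externally, the surviving node keeps quorum when its peer goes down instead of self-fencing.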
  10. LDAP Sync with nested Groups

    Do I need some patch, or just use filters? I am using 1.2.840.113556.1.4.1941 in the filters... Anyway, this is not an answer to my question: what does the users.cfg groups line look like...
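For reference, that OID is Active Directory's LDAP_MATCHING_RULE_IN_CHAIN, which matches transitive (nested) group membership. A filter using it might look like this; the group DN is an assumption:

```
(memberOf:1.2.840.113556.1.4.1941:=CN=LinuxTeam,OU=Groups,DC=example,DC=com)
```

Such a filter returns users who are members of LinuxTeam directly or via any nested subgroup, which is why the sync can flatten nested membership into plain user lists.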
  11. LDAP Sync with nested Groups

    When an AD group, for example LinuxTeam, contains other group(s), for example OtherGroup, does /etc/pve/user.cfg contain groups on the group line? I see only group lines like this: group:LinuxTeam:user1@domain,user2@domain without any groups - should it look like this...
  12. Can not create Ceph OSD due to keyring error

    Hi, I was facing the same problem. Here is the solution, /etc/multipath.conf:

        defaults {
            polling_interval 10
        }
        devices {
            device {
                vendor "PURE"
                product "FlashArray"
                path_selector "service-time 0"
                ...
  13. [SOLVED] The current guest configuration does not support taking new snapshots

    Facing the same problem. GFS2 seems to be working on an FC SAN, but again no snapshots - this time for Windows VMs, due to TPM state disks.
  14. Fibre Channel SAN with Live Snapshot

    Any benchmarks from this GFS2-directly-on-LUN setup?
  15. How to? Create or edit file with qm guest agent

    But the agent itself should handle this: https://qemu-project.gitlab.io/qemu/interop/qemu-ga-ref.html#qapidoc-42 - but there is a 48 MB limit.
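A minimal sketch of building the guest-agent commands that reference describes (guest-file-open / guest-file-write / guest-file-close); guest-file-write expects its payload base64-encoded in buf-b64. The socket path in the comment is an assumption for illustration:

```python
import base64
import json

def qga_cmd(name, **args):
    """Serialize one QEMU guest agent command as a JSON line."""
    cmd = {"execute": name}
    if args:
        cmd["arguments"] = args
    return json.dumps(cmd)

def file_write_cmd(handle, data):
    """guest-file-write takes a file handle and base64-encoded bytes."""
    b64 = base64.b64encode(data).decode("ascii")
    return qga_cmd("guest-file-write", handle=handle, **{"buf-b64": b64})

# Typical sequence, sent line-by-line over the agent channel
# (e.g. a hypothetical /var/run/qemu-server/<vmid>.qga socket):
#   qga_cmd("guest-file-open", path="/tmp/hello.txt", mode="w")
#   file_write_cmd(returned_handle, b"hello world\n")
#   qga_cmd("guest-file-close", handle=returned_handle)
```

The base64 encoding is also why payload size matters: the encoded buffer inflates the data by roughly a third before it hits the agent's size limit.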
  16. Configure or disallow concurrent backups

    Found this old thread :( No solution yet.
  17. Running VM cloning question

    Doing a backup instead of a clone is probably the solution to my problem.
  18. Running VM cloning question

    Sorry, but your answer does not contain the root information: are the cloned disks from the same point in time when I clone a running VM with multiple disks? I have tried cloning from a snapshot, but this does not seem to work for ZFS; it gets error 500 (not supported for current disk). I have a mysql database...
  19. Running VM cloning question

    I would like to ask how cloning a running VM with multiple disks works. Is it like in VMware, where when you start a clone all disks are snapshotted so all disks are from the same point in time, or is each disk from a different time, as it was processed in the clone job? Running on ZFS, and from the job output...