Hello,
I have configured a two-node Proxmox VE 2.0 (3.1-114) Active/Active cluster used for OpenVZ CTs. The containers are located on GFS2 volumes on top of DRBD, mounted to local directories on both nodes. The rgmanager part of my cluster.conf is the following:
Code:
<rm>
  <failoverdomains>
    <failoverdomain name="only_node1" restricted="1">
      <failoverdomainnode name="node1"/>
    </failoverdomain>
    <failoverdomain name="only_node2" restricted="1">
      <failoverdomainnode name="node2"/>
    </failoverdomain>
    <failoverdomain name="primary_node1" restricted="1" ordered="1" nofailback="1">
      <failoverdomainnode name="node1" priority="1"/>
      <failoverdomainnode name="node2" priority="100"/>
    </failoverdomain>
    <failoverdomain name="primary_node2" restricted="1" ordered="1" nofailback="1">
      <failoverdomainnode name="node1" priority="100"/>
      <failoverdomainnode name="node2" priority="1"/>
    </failoverdomain>
  </failoverdomains>
  <resources>
    <drbd name="drbd_r0" resource="r0"/>
    <drbd name="drbd_r1" resource="r1"/>
    <drbd name="drbd_r2" resource="r2"/>
    <clusterfs name="gfs2_shared" mountpoint="/srv/gfs2/shared" device="/dev/drbd0" fstype="gfs2" options="relatime" force_unmount="1"/>
    <clusterfs name="gfs2_node1" mountpoint="/srv/gfs2/node1" device="/dev/drbd1" fstype="gfs2" options="relatime" force_unmount="1"/>
    <clusterfs name="gfs2_node2" mountpoint="/srv/gfs2/node2" device="/dev/drbd2" fstype="gfs2" options="relatime" force_unmount="1"/>
  </resources>
  <service name="storage_shared_on_node1" autostart="1" domain="only_node1" exclusive="0" recovery="restart">
    <drbd ref="drbd_r0">
      <clusterfs ref="gfs2_shared"/>
    </drbd>
  </service>
  <service name="storage_shared_on_node2" autostart="1" domain="only_node2" exclusive="0" recovery="restart">
    <drbd ref="drbd_r0">
      <clusterfs ref="gfs2_shared"/>
    </drbd>
  </service>
  <service name="storage_node1_on_node1" autostart="1" domain="only_node1" exclusive="0" recovery="restart">
    <drbd ref="drbd_r1">
      <clusterfs ref="gfs2_node1"/>
    </drbd>
  </service>
  <service name="storage_node1_on_node2" autostart="1" domain="only_node2" exclusive="0" recovery="restart">
    <drbd ref="drbd_r1">
      <clusterfs ref="gfs2_node1"/>
    </drbd>
  </service>
  <service name="storage_node2_on_node1" autostart="1" domain="only_node1" exclusive="0" recovery="restart">
    <drbd ref="drbd_r2">
      <clusterfs ref="gfs2_node2"/>
    </drbd>
  </service>
  <service name="storage_node2_on_node2" autostart="1" domain="only_node2" exclusive="0" recovery="restart">
    <drbd ref="drbd_r2">
      <clusterfs ref="gfs2_node2"/>
    </drbd>
  </service>
  <pvevm vmid="101" domain="primary_node1" autostart="1"/>
  <pvevm vmid="102" domain="primary_node2" autostart="1"/>
</rm>
As can be seen, there are three DRBD resources (r0, r1, r2) formatted as GFS2 volumes and mounted to /srv/gfs2/shared (VZ templates), /srv/gfs2/node1 (CTs running on node1), and /srv/gfs2/node2 (CTs running on node2).
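(For reference, each GFS2 volume was created with something like the command below; "mycluster" is only a placeholder and must match the <cluster name="..."> in cluster.conf, and -j 2 gives one journal per node.)
Code:
# Sketch only: format one of the DRBD devices as GFS2 for a two-node cluster.
# "mycluster" is a placeholder for the real cluster name from cluster.conf.
mkfs.gfs2 -p lock_dlm -t mycluster:gfs2_shared -j 2 /dev/drbd0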
My /etc/pve/storage.cfg
Code:
dir: local
        path /var/lib/vz
        content images,iso,vztmpl,rootdir
        maxfiles 0

dir: gfs2_shared
        path /srv/gfs2/shared
        shared
        content iso,vztmpl
        maxfiles 1
        nodes node1,node2

dir: gfs2_node1
        path /srv/gfs2/node1
        shared
        content images,rootdir
        maxfiles 1
        nodes node1,node2

dir: gfs2_node2
        path /srv/gfs2/node2
        shared
        content images,rootdir
        maxfiles 1
        nodes node1,node2
The last two pvevm resources are CT 101 and CT 102, added via the Proxmox VE web GUI.
I have a couple of questions on how to improve this configuration.
1. Proxmox does not know anything about storage availability, because the local directories /srv/gfs2/{shared,node1,node2} are always present. Thus one can try to start or create a CT in an unmounted local directory. I am thinking about implementing a custom rgmanager resource agent based on pvesm (PVE Storage Manager) to automatically enable the Proxmox storage after the GFS2 resource starts and disable it before the GFS2 resource stops.
Here is an example:
Code:
<resources>
  <drbd name="drbd_r0" resource="r0"/>
  <clusterfs name="gfs2_shared" mountpoint="/srv/gfs2/shared" device="/dev/drbd0" fstype="gfs2" options="relatime" force_unmount="1"/>
  <pvesm name="pves_shared" storage="shared"/>
</resources>
<service name="storage_shared_on_node1" autostart="1" domain="only_node1" exclusive="0" recovery="restart">
  <drbd ref="drbd_r0">
    <clusterfs ref="gfs2_shared">
      <pvesm ref="pves_shared"/>
    </clusterfs>
  </drbd>
</service>
<service name="storage_shared_on_node2" autostart="1" domain="only_node2" exclusive="0" recovery="restart">
  <drbd ref="drbd_r0">
    <clusterfs ref="gfs2_shared">
      <pvesm ref="pves_shared"/>
    </clusterfs>
  </drbd>
</service>
Using this not-yet-implemented pvesm resource agent, I could either enable/disable a PVE storage completely or just change its list of valid owner nodes.
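As a rough sketch (not implemented; the agent name and metadata handling are hypothetical, and the command syntax is assumed to match the pvesm CLI), the start/stop actions of such an agent could simply wrap pvesm set:
Code:
#!/bin/bash
# Hypothetical rgmanager resource agent, e.g. /usr/share/cluster/pvesm.sh (sketch only).
# OCF_RESKEY_storage comes from the storage="..." attribute of the <pvesm> resource.
. $(dirname $0)/ocf-shellfuncs

case "$1" in
start)
        # GFS2 is mounted at this point -> make the storage usable in PVE again
        pvesm set "$OCF_RESKEY_storage" --disable 0 || exit $OCF_ERR_GENERIC
        ;;
stop)
        # GFS2 is about to be unmounted -> hide the storage so nobody starts
        # or creates a CT on the bare local directory
        pvesm set "$OCF_RESKEY_storage" --disable 1 || exit $OCF_ERR_GENERIC
        ;;
status|monitor)
        # crude check: treat "listed by pvesm status" as "running"
        pvesm status | grep -q "^$OCF_RESKEY_storage " || exit $OCF_NOT_RUNNING
        ;;
meta-data)
        # a real agent would print its resource metadata XML here
        ;;
*)
        exit $OCF_ERR_UNIMPLEMENTED
        ;;
esac
exit 0
Changing the list of owners instead of disabling the storage would just be a "pvesm set <storage> --nodes ..." call from the same hooks.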
Is it a good idea or is there a better solution?
2. Currently the pvevm resources do not depend on any DRBD or GFS2 resources. Thus rgmanager does not stop the CTs before the clusterfs resources and tries to start CTs while the storage service is down/disabled. I tried an Active/Passive configuration:
Code:
<service name="storage_node1_on_node1" autostart="1" domain="primary_node1" exclusive="0" recovery="restart">
  <drbd ref="drbd_r1">
    <clusterfs ref="gfs2_node1">
      <pvevm vmid="101"/>
    </clusterfs>
  </drbd>
</service>
<service name="storage_node2_on_node2" autostart="1" domain="primary_node2" exclusive="0" recovery="restart">
  <drbd ref="drbd_r2">
    <clusterfs ref="gfs2_node2">
      <pvevm vmid="102"/>
    </clusterfs>
  </drbd>
</service>
This works as expected on the rgmanager side, but Proxmox shows "Managed by HA: No" and the "Migration" button does not work correctly, because the storage is actually mounted on one node only.
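(For completeness, in this Active/Passive variant the storage definition itself would presumably also have to follow the service, e.g. via something like the pvesm hook idea from question 1; syntax assumed from the pvesm CLI:)
Code:
# per-node storage follows the service to the node that currently holds the mount
pvesm set gfs2_node1 --nodes node1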
How can I make the pvevm resources depend on GFS2?