just in case someone stumbles across this... the ability to create loop devices inside openvz containers was disabled due to a security flaw. doesn't have anything to do with proxmox.
so i think that's over complicating things.
i think what i want to do is use rgmanager to fence an NFS export that is actually backed by drbd.
node1 and node2
/mount/data -> /dev/drbd0
/mount/data/export -> NFS Export on virtual IP (shared on both hosts)
rgmanager (/etc/pve/cluster.conf) to fence...
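roughly what i have in mind for the service section of /etc/pve/cluster.conf (completely untested, the virtual ip, resource names and fstype are just placeholders, and i'm assuming the drbd.sh resource agent takes a name and a resource parameter):

<rm>
  <service autostart="1" name="nfs-data" recovery="relocate">
    <drbd name="drbd-data" resource="r0">
      <fs name="data" device="/dev/drbd0" mountpoint="/mount/data" fstype="ext3" force_unmount="1">
        <nfsexport name="exports">
          <nfsclient name="world" target="*" options="rw,no_root_squash"/>
        </nfsexport>
      </fs>
    </drbd>
    <ip address="192.168.0.100" monitor_link="1"/>
  </service>
</rm>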
thanks for the reply!
i'm going to check those both out tonight. i did see instructions on the drbd site for gfs but they made it sound like you need all the other redhat/centos cluster goodies. maybe i'll give that a try too and see what i come up with.
this is annoying. i was going to make the drbd device an nfs export and add it, but that would be tied to just one of the nodes' IP addresses and defeats HA in the event that node goes down. i don't get the point of adding an lvm group if you can only use it for kvm. kvm is too slow. i think what i'm finding out...
thank you for your reply. i'm confused about how to move or create containers on it then. i think because it only lists images as content, the drbd storage doesn't show up as an option when i create an openvz container or kvm guest. thoughts?
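i'm guessing the storage needs rootdir in its content list before containers show up? so a directory storage on top of the mounted drbd filesystem would look something like this in /etc/pve/storage.cfg (the id and path are just made up for the example, and i'm assuming the drbd device is mounted as a normal filesystem on the node):

dir: drbd-data
        path /mount/data
        content images,rootdir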
yea, the host2 was a typo. sorry.
i might have created this thread too soon. i added post-up echo 1 > /proc/sys/net/ipv4/conf/all/proxy_arp to the interfaces file and it seems to be working so far. :D i was confused because our hosting provider insisted we add vmbr0:1, vmbr0:2... for each ip. i...
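for reference, the relevant part of my /etc/network/interfaces now looks roughly like this (the addresses are just placeholders, not my real ones):

auto vmbr0
iface vmbr0 inet static
        address 10.0.0.2
        netmask 255.255.255.0
        gateway 10.0.0.1
        bridge_ports eth0
        bridge_stp off
        bridge_fd 0
        post-up echo 1 > /proc/sys/net/ipv4/conf/all/proxy_arp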
the more research i do, the more i think this has to do with the arp cache. if one vm goes down in an HA cluster, does anything refresh the arp cache, like arping?
sorry, networking isn't my strongest area.
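what i was picturing is sending a gratuitous arp from the new node after failover so the neighbours update their caches, something like this with iputils arping (the interface name and ip are just placeholders):

arping -c 3 -U -I vmbr0 192.168.0.100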
i have an extra /29 and our provider says we have to bind the IPs to virtual nics to make them portable between the two hosts. is it required to have portable IPs for HA to work?
they want this:
vmbr1:0
address 192.1.233.112
netmask...
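so the full stanza they want would presumably look something like this (the address is the one from their example; the rest is my guess, with 255.255.255.248 being the netmask for a /29):

auto vmbr1:0
iface vmbr1:0 inet static
        address 192.1.233.112
        netmask 255.255.255.248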
any updates on this? i can't get drbd to install no matter what i try. i even tried to use the following to make my own deb, but then i get a "Can not load the drbd module" error.
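to check whether it really is the kernel module that's missing or failing, i'm going to try something like:

modprobe drbd
echo $?
lsmod | grep drbd
dmesg | tail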
i get the same error as above (trying to overwrite '/usr/share/cluster/drbd.sh', which is also in package...
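in case it helps anyone else, the workaround i'm going to try is either removing the other package that ships that file first, or telling dpkg to overwrite it, something like this (the .deb filename is just a placeholder):

dpkg -i --force-overwrite the-package.deb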
would it be useful to have a cron job that runs once per minute with 'date > /var/log/date' (use logrotate or something so it doesn't get huge)? that way, in the situation above, you'd have a date stamp on both systems and know which one was up last.
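something like this in root's crontab is what i had in mind (the path is just an example; with '>' the file is overwritten every minute so it never grows, it's only with '>>' that you'd need logrotate):

* * * * * date > /var/log/date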
thanks for your reply! very cool. i'm going to try that in a few.
looks like i was close on the first try. i just didn't know how to make a new volume group, so on my second/fourth attempt i was trying to create sda3, which seems to be too much work.
i posted too soon lol, i figured it out. if anyone else feels like trying this, the following worked:
umount /dev/mapper/pve-data
e2fsck -f /dev/mapper/pve-data
resize2fs /dev/mapper/pve-data 5000M
lvresize -L 5G /dev/mapper/pve-data
e2fsck -f /dev/mapper/pve-data
resize2fs /dev/mapper/pve-data...
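after the resize i'm planning to carve the freed space into a new logical volume to use as the drbd backing device, something like this (the lv name is just what i picked, "pve" is the default volume group on a stock install):

lvcreate -n drbd0 -l 100%FREE pve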
i don't want to get a separate disk for this so i was thinking i can resize the default partitions, create a new one and then sync.
i was able to resize the default data partition using the following:
umount /dev/mapper/pve-data
e2fsck -f /dev/mapper/pve-data
resize2fs /dev/mapper/pve-data...
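for the sync part, the drbd resource definition i have in mind looks roughly like this (the hostnames, disk path and ips are placeholders; the "on" names have to match each node's uname -n):

resource r0 {
        protocol C;
        on node1 {
                device    /dev/drbd0;
                disk      /dev/pve/drbd0;
                address   10.0.0.1:7788;
                meta-disk internal;
        }
        on node2 {
                device    /dev/drbd0;
                disk      /dev/pve/drbd0;
                address   10.0.0.2:7788;
                meta-disk internal;
        }
}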