partition fun with drbd

testbot

New Member
Jan 25, 2012
i don't want to get a separate disk for this, so i was thinking i could resize the default partitions, create a new one, and then sync.

i was able to resize the default data partition using the following:
umount /dev/mapper/pve-data
e2fsck -f /dev/mapper/pve-data
resize2fs /dev/mapper/pve-data 5000M   # shrink the filesystem to below the target size first
lvresize -L 5G /dev/mapper/pve-data    # then shrink the logical volume to 5G
e2fsck -f /dev/mapper/pve-data
resize2fs /dev/mapper/pve-data         # grow the filesystem back out to fill the 5G volume exactly

the next thing i would like to do is create a new partition, mount it at /drbd, and add it to the pve group. that's where i'm having issues. does anyone have any tips on how i can create a partition that's 150G, part of pve, and mounted at /drbd?
 
i posted too soon lol, i figured it out. if anyone else feels like trying this, the following worked:

umount /dev/mapper/pve-data
e2fsck -f /dev/mapper/pve-data
resize2fs /dev/mapper/pve-data 5000M
lvresize -L 5G /dev/mapper/pve-data
e2fsck -f /dev/mapper/pve-data
resize2fs /dev/mapper/pve-data
lvcreate -L 150G pve -n drbd   # new 150G logical volume named "drbd" in the pve volume group
mke2fs -j /dev/pve/drbd        # ext3 filesystem (-j adds the journal)
mkdir -p /drbd                 # the mount point has to exist before mount -a
echo "/dev/pve/drbd /drbd ext3 defaults 0 1" >> /etc/fstab
mount -a
df -kh
 
if anyone has a good way to shrink sda2 and create sda3 so i can have a separate pve-drbd group, that would be cool.
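for what it's worth, here is a rough (and untested) sketch of how shrinking sda2 and adding an sda3 for a separate group would normally go. the 100G figure and the pve-drbd name are just examples, and you'd want a full backup and ideally a rescue/live cd first:

pvresize --setphysicalvolumesize 100G /dev/sda2   # shrink the PV inside sda2 first; needs enough free extents at its end
# then shrink the sda2 partition itself with fdisk/parted (same start, smaller end, not below the new PV size),
# re-read the partition table (partprobe or a reboot), and create sda3 in the freed space
pvcreate /dev/sda3
vgcreate pve-drbd /dev/sda3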
 
i've made it like this:
umount /dev/mapper/pve-data
lvreduce -L <new smaller size> /dev/mapper/pve-data
mkfs.ext2 /dev/mapper/pve-data   (new filesystem on pve-data, the old one is thrown away)
now i have enough free space in the pve group.
so
lvcreate -L <size> -n lvdrbd pve
then i get a

/dev/pve/lvdrbd

and that is the source for /dev/drbd0

now you can create a new lvm-group on the drbd0-device.

then you have a

lvm (/dev/pve/lvdrbd) -> drbd (source "/dev/pve/lvdrbd", target "/dev/drbd0") -> lvm (source "/dev/drbd0", target "/dev/your_new_vg/your_new_lv") config.
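to make that concrete, it would look roughly like this. untested sketch; the resource name r0, the node names, the IPs and the drbdvg name are all just examples:

# /etc/drbd.d/r0.res, identical on both nodes
resource r0 {
    protocol C;
    on node1 {
        device    /dev/drbd0;
        disk      /dev/pve/lvdrbd;
        address   192.168.0.1:7788;
        meta-disk internal;
    }
    on node2 {
        device    /dev/drbd0;
        disk      /dev/pve/lvdrbd;
        address   192.168.0.2:7788;
        meta-disk internal;
    }
}

drbdadm create-md r0   # on both nodes
drbdadm up r0          # on both nodes
drbdadm -- --overwrite-data-of-peer primary r0   # on one node only, starts the initial sync
pvcreate /dev/drbd0    # once drbd0 is primary, build the new LVM group on top of it
vgcreate drbdvg /dev/drbd0

with allow-two-primaries in the drbd config (as in the proxmox drbd wiki) the new drbdvg group can then be added on both nodes as shared LVM storage.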
 
thanks for your reply! very cool. i'm going to try that in a few.

looks like i was close on the first try. i just didn't know how to make a new group, so on my second/fourth attempt i was trying to create sda3, which seems like too much work.
 
when you add the lvm group, does it only allow images? "images" is the only option showing under content.
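for reference, the entry such an lvm storage gets in /etc/pve/storage.cfg looks something like this (the storage id and vgname are just examples), and "images" is indeed the only content type it offers:

lvm: drbd-lvm
        vgname drbdvg
        content images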
 
thank you for your reply. i'm confused about how to move or create containers on it then. i think because it says images, when i create an openvz or kvm machine it doesn't show the drbd storage as an option. thoughts?
 
this is annoying. i was going to make drbd an nfs share and add it, but that would be tied to just one of the nodes' IP addresses, which defeats HA if that node goes down. i don't get the point of adding an lvm group if you can only use it for kvm. kvm is too slow. i think what i'm finding out after 2-3 weeks of testing proxmox is that if you want 2-node high availability, it will only work with kvm. if you have several openvz machines you should not waste any time on this until you have 3 or more servers. 2-node openvz HA in proxmox is a myth. is that correct?
 
Containers only work on a filesystem, so you need a shared filesystem of some sort.

Maybe you could set up GFS on top of DRBD and put the containers on the GFS filesystem.
I have no idea how to do this, or whether you can run containers on top of GFS, but maybe it would work.
You can find out how to set this up in the DRBD Manual on the DRBD website.
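For what it's worth, the filesystem side of that usually looks something like this (gfs2 rather than the old gfs; the cluster name, filesystem name and mount point are just examples, and it assumes the cluster stack (cman/dlm) is running and DRBD is in dual-primary mode, untested):

mkfs.gfs2 -p lock_dlm -t yourcluster:containers -j 2 /dev/drbd0   # one journal per node
mkdir -p /var/lib/vz/gfs
mount -t gfs2 /dev/drbd0 /var/lib/vz/gfs   # mount on both nodes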

Another idea is to use gluster.
gluster does not perform very well, and again I have no idea if you can run containers on a gluster filesystem.
Here is a thread that covers installing gluster and mounting gluster as if it is an NFS filesystem:
http://forum.proxmox.com/threads/7355-NFS-How-to-use-version-3

This thread seems to indicate using NFS for container storage would work OK, so I assume gluster mounted via NFS would work:
http://forum.proxmox.com/threads/8270-Openvz-migration-with-nfs-fileserver
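In case it helps, mounting a gluster volume through its built-in NFS server is just a plain NFSv3 mount; the server name, volume name and mount point below are examples:

mount -t nfs -o vers=3,tcp node1:/testvol /mnt/gluster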
 

thanks for the reply!

i'm going to check both of those out tonight. i did see instructions on the drbd site for gfs, but they made it sound like you need all the other redhat/centos cluster goodies. maybe i'll give that a try too and see what i come up with.
 
so i think that's overcomplicating things.

i think what i want to do is use rgmanager to fail over an NFS export that is actually drbd underneath.

node1 and node2
/mount/data -> /dev/drbd0
/mount/data/export -> NFS Export on virtual IP (shared on both hosts)
rgmanager (/etc/pve/cluster.conf) to fail over the NFS export / virtual IP

add virtual ip/share to proxmox for virtual machines and containers.

i think i know how to create the nfs export on top of drbd, but i'm not sure about configuring rgmanager to fail over the nfs exports together with the virtual IP. can anyone give me any help with that part? also, does anyone see any flaws with this theory?
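a very rough, untested sketch of the rgmanager side, for whoever tries this: it goes inside the <rm> section of cluster.conf (bump config_version when you edit it), the IP, paths and names are just examples, and it assumes drbd0 is already primary on whichever node the service lands on:

<rm>
  <resources>
    <ip address="192.168.0.100" monitor_link="1"/>
    <fs name="drbddata" device="/dev/drbd0" mountpoint="/mount/data" fstype="ext3" force_unmount="1"/>
    <nfsexport name="exports"/>
    <nfsclient name="all" target="*" options="rw,no_root_squash"/>
  </resources>
  <service autostart="1" name="nfs-drbd" recovery="relocate">
    <!-- floating IP, drbd-backed filesystem and nfs export move together as one service -->
    <fs ref="drbddata">
      <nfsexport ref="exports">
        <nfsclient ref="all"/>
      </nfsexport>
    </fs>
    <ip ref="192.168.0.100"/>
  </service>
</rm>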
 
i found some examples but they say to remove /etc/exports... i think proxmox uses that file to find the nfs shares, correct?
 
