drbd with only 2 disks

mrshark

I'm new to this community, hi to all :p

I'm trying PVE and it's a breeze to set up. Now I'm trying clustering and I need some help.

For my tests I have 2 identical machines, each with a Core 2 Duo, 4 GB RAM, a 300 GB SATA HD and 2 gigabit NICs. One of the NICs is linked directly to the corresponding one on the other node, and I'd like to use this NIC for clustering and DRBD sync.

First: I created the cluster on node1 and joined it from node2. Is it ok that they do clustering on the public interface and not on their private one (the one with the direct link, as explained above)? Using the public interface makes them depend on the external switch, and I'd like them to maintain state and stay in sync even if the switch goes down... how do I establish the cluster on the private eth1 NIC?
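To make the question concrete, this is roughly what I was thinking of trying with pveca; the 10.0.7.x addresses on eth1 are just an example, and I have not verified that the cluster traffic would then really stay on that link:

Code:
# on node1 (private IP 10.0.7.1 on eth1, example only)
pveca -c
# on node2 (private IP 10.0.7.2 on eth1), join using the master's private address
pveca -a -h 10.0.7.1
# check the cluster state on either node
pveca -l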

Second: partitioning is automatic during setup. I read that I can pass some command line arguments to the installer to make the root and swap partitions a little smaller, but the remaining space is all assigned to the logical volume /dev/pve/data, and I'd like to have this on DRBD... how do I modify the partitioning so that this space ends up on DRBD, or could I install DRBD on top of a logical volume? Which is best, and how do I do it?
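For reference, this is the kind of boot option I read about; I'm not sure of the exact parameter names for this installer version, so treat the names and sizes below purely as an example:

Code:
# at the installer boot prompt, something along these lines (names unverified)
linux maxroot=20 swapsize=4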

Thanks a lot for any answer, and best regards... ;)
 
Please, for version 2.0, consider a custom partitioning option: one that leaves most of the disk unused, plus a little utility that sets up either DRBD or a standard LVM (as now) on that part of the disk, but AFTER installation...

That way, with 2 basic servers with HW RAID 1, everybody could build a 2-node cluster with DRBD, having RAID 1 on every node and a (similar, I know...) mirror between the nodes thanks to DRBD...

Without having to use one disk for DRBD and one disk for the system, leaving the latter almost empty, on every node...

With HA in version 2.0, PVE will ROCK (even more...), so... :)
thanks
 
As I have a 300 GB disk in every machine, I need to use it fully, which the standard partitioning doesn't allow...

In fact, the standard partitioning leaves me with a / partition of 80 GB, mostly unused, and a /var/lib/vz logical volume of 220 GB...

I'm trying to do what I said before, so:

Code:
tar cvzfp pve-data.tgz /var/lib/vz
umount /var/lib/vz
lvremove /dev/pve/data
pvresize --setphysicalvolumesize 80G /dev/sda2

Now I have:

Code:
PV Name  /dev/sda2
VG Name  pve
PV Size  80.00 GB

But fdisk STILL sees /dev/sda2 as taking up nearly the whole disk, with a /dev/dm-1 of 80 GB... I need to shrink the sda2 partition itself to 80 GB, leaving the rest of the disk empty so I can use it for DRBD...

Is there a way to do this without losing the data inside it, and without adding a second disk to the LVM, moving the volumes there, detaching the first disk, repartitioning it, moving the data back, etc. etc.?
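The only approach I can think of, and I have NOT tested it yet, so take it as a sketch that can destroy the volume group if the start sector changes: after the pvresize above, delete and recreate sda2 in fdisk with the same start but a smaller size, re-read the partition table, then create sda3 in the freed space:

Code:
# DANGER: untested sketch; the new sda2 must start at exactly the same sector,
# and a backup is mandatory
fdisk /dev/sda       # d, 2  then  n, p, 2, <same start>, ~80 GB end, w
partprobe /dev/sda   # or reboot so the kernel re-reads the table
fdisk /dev/sda       # n, p, 3, <rest of the disk>, w  -> future DRBD partition
pvs                  # verify the pve VG is still intact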

It would be so much simpler if partitioning were not automatic... if I have 2 disks of 300 GB, I prefer to put them in RAID 1 instead of using one of them for the system and one for DRBD...
 

If you think manual partitioning is easier in your case, just install a standard Lenny and Proxmox VE on top.

see http://pve.proxmox.com/wiki/Install_Proxmox_VE_on_Debian_Lenny
 
I went the Lenny way: installed the standard version (16 GB ext3 /, 4 GB swap, no LVM, remaining disk left available), then added the PVE repository and followed the instructions...

Only one quirk: after the PVE installation and the following reboot, the ethernet cards were renamed as follows (info from "dmesg | grep -i eth"):
node1: eth0 --> eth3, eth1 --> eth4
node2: eth0 --> eth4, eth1 --> eth3
I don't know why, but networking was gone until I corrected /etc/network/interfaces.

Other than this, I followed the DRBD guide on this site and used that drbd.conf; now I need to continue with the first full sync and go on...

Is that drbd.conf optimal, or "enhanceable" in any way? I mean: if one node goes down, do I only need to go to the other node and restart the VMs there?

Is it NEEDED to promote the surviving node to master with pveca -m?

If the surviving node is rebooted, does it start normally (not finding the other node, especially its DRBD peer), or does it hang or wait for a long time?
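Related to the reboot question: from the DRBD documentation I understand the wait at boot is controlled by the wfc-timeout / degr-wfc-timeout options in the startup section of drbd.conf, so something like this (values picked just as an example) should keep a lone node from waiting forever:

Code:
startup {
    wfc-timeout      120;   # wait at most 120s for the peer on a normal boot
    degr-wfc-timeout  60;   # shorter wait if the cluster was already degraded before the reboot
}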

When I rebuild the "dead" node, how do I re-add it to the cluster? This is what I know (my rough guess for step 4 is sketched below):
1) the surviving node becomes master: pveca -m
2) the "reborn" node joins the cluster
3) I recreate the DRBD on the reborn node
4) NOW, HOW do I copy the data from the surviving node to the reborn one???
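My rough, untested guess for step 4, based on the DRBD docs (resource name r0 taken from the wiki drbd.conf): once the rebuilt node has fresh, empty metadata and connects, DRBD should start a full sync from the node that still holds the data:

Code:
# on the rebuilt node
drbdadm create-md r0
drbdadm up r0
# on the surviving node, if it is not already trying to reconnect
drbdadm connect r0
# watch the full resync progress on either node
cat /proc/drbd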

Thanks for your time, and sorry if this info is already available elsewhere...

P.S.: another thing: the PVE ISO installer does not allow netmasks other than /24... at work we have 10.0.0.0 with netmask 255.0.0.0, but the installer changed my netmask to 255.255.255.0 as I was typing... I had to modify /etc/network/interfaces manually after the installation...
 
Hi,
sorry, but I'm no expert in DRBD, so I can't give useful hints (I have only made basic tests with it).
But for your network devices (it's only cosmetic) you can look in
Code:
/etc/udev/rules.d/70-persistent-net.rules
Perhaps your first installation used a different network driver, so Proxmox added two new lines (for eth3+4). In this case, delete the lines (eth0-4) and after a reboot you should have eth0 and eth1 back.
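The entries in that file look roughly like this, one line per detected card (the MAC address and driver comment below are invented for illustration); removing them lets udev regenerate clean eth0/eth1 entries on the next boot:

Code:
# PCI device 0x8086:0x10d3 (e1000e)
SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="00:11:22:33:44:55", ATTR{dev_id}=="0x0", ATTR{type}=="1", KERNEL=="eth*", NAME="eth3"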

Udo
 
I think the documentation needs a few small additions:

http://pve.proxmox.com/wiki/Install_Proxmox_VE_on_Debian_Lenny#Install_Proxmox_VE_packages

Nowhere does it say to install LVM2, and the suggested standard Lenny install does not include it... SSH access is surely also useful, so add:

Code:
apt-get install openssh-server lvm2

http://pve.proxmox.com/wiki/DRBD#LVM_configuration

LVM, as it resides ON DRBD, must start AFTER it! So:

Code:
update-rc.d -f lvm2 remove
update-rc.d lvm2 start 80 2 3 4 5 . stop 20 0 1 6 .
 
Just to make it clear: this is not a supported way of installation and it is for expert use only, so it makes no sense to explain how to install SSH or LVM.

But anyway, feel free to add it to the wiki, it's open for everyone, just register. I will review it if you add something, thanks.
 
done ;)
 

I reviewed your changes in the wiki and made some edits, thanks for your contribution.

But I reverted the changes on the DRBD page, as what you describe is not needed.

Just a guess, did you get the right lvm.conf filter rule in your tests?
 
I've seen your edit on the Install_Proxmox_VE_on_Debian_Lenny page, but... does the ssh package also install the server part? I thought ssh was only the client and openssh-server the server, or at least that's how it is in Ubuntu, and I don't think Debian is different...
But I reverted the changes on the DRBD page, as what you describe is not needed.
Using the standard start scripts, I had this situation in /etc/rc3.d, with LVM starting at step 20 and DRBD at 70:
Code:
lrwxrwxrwx 1 root root  14 20 nov 18:45 S70drbd -> ../init.d/drbd
lrwxrwxrwx 1 root root  14 23 nov 12:08 S20lvm2 -> ../init.d/lvm2
with my mods, it becomes (lvm at step 80):
Code:
lrwxrwxrwx 1 root root  14 20 nov 18:45 S70drbd -> ../init.d/drbd
lrwxrwxrwx 1 root root  14 23 nov 12:08 S80lvm2 -> ../init.d/lvm2
So how is this not needed, if LVM starts and cannot find the DRBD device that contains it? I'm confused: does LVM rescan devices even after it has started, without the user forcing it with a reload/restart? :confused:
Just a guess, did you get the right lvm.conf filter rule in your tests?
I commented out the standard filter and added my own, as described on the wiki page (my DRBD partition is sda3):
Code:
    # By default we accept every block device:
    # filter = [ "a/.*/" ]
    filter = [ "r|/dev/sda3|", "r|/dev/disk/|", "r|/dev/block/|", "a/.*/" ]
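After changing the filter I would also double-check that LVM now only sees the PV through the DRBD device and no longer through sda3; a minimal check, assuming the DRBD device is /dev/drbd0 as in the wiki config:

Code:
vgscan          # re-reads all block devices honouring the new filter
pvscan          # the PV of the DRBD volume group should show up as /dev/drbd0, not /dev/sda3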
 
So how is this not needed, if LVM starts and cannot find the DRBD device that contains it? I'm confused: does LVM rescan devices even after it has started, without the user forcing it with a reload/restart?

Yes. Do you have a problem with that?
 
Thanks for the ssh hint, I should have searched on my own before asking... :rolleyes:

And no, if it works even with the standard startup sequence, even better... :D

thanks a lot...
 
I'm doing the following:
I have a 300 GB HD, partitioned as:
sda1 16 GB / ext3 (excessive even, but it's a test machine, so...)
sda2 4 GB swap
sda3 DRBD, all the remaining free space
and I'm migrating a laptop with a 55 GB HD, so I don't have enough space on the / filesystem to allocate a qcow2 file and convert it afterwards to raw format on the LVM partition...

So I created a temporary logical volume (larger than the original HD), created an ext3 filesystem on it, mounted it on /mnt, created a temporary qcow2 file on it that is a little smaller (to be sure not to saturate my temp LV...), then exported it via qemu-nbd...

Now I'm migrating my test machine via SelfImage, and after that I'll convert it onto a raw disk already created (via the standard Proxmox web interface) on LVM on the DRBD shared storage, with the last command below... after the conversion, I'll destroy my temp LV, leaving me with exactly what I need: a converted raw disk... I hope I'm not missing anything, I'll update this at the end...

Code:
lvcreate -L 64G -n templv drbdvg
mkfs.ext3 /dev/mapper/drbdvg-templv
mount /dev/mapper/drbdvg-templv /mnt
qemu-img create -f qcow2 /mnt/temp.qcow2 63G
qemu-nbd -t /mnt/temp.qcow2
qemu-img convert /mnt/temp.qcow2 -O raw > /dev/mapper/drbdvg-vm--102--disk--1
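And for completeness, the cleanup I plan to run once the conversion has finished (same names as in the commands above):

Code:
# stop the nbd export (or just Ctrl+C the qemu-nbd process), then drop the temp filesystem and LV
killall qemu-nbd
umount /mnt
lvremove drbdvg/templv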
 
i'm doing as following:

Code:
...
qemu-img convert /mnt/temp.qcow2 -O raw > /dev/mapper/drbdvg-vm--102--disk--1

Hi,
I think this won't work, qemu-img needs a filesystem to write a flat file to.
Code:
qemu-img convert /mnt/temp.qcow2 -O raw /backup/temp.raw; dd if=/backup/temp.raw of=/dev/mapper/drbdvg-vm--102--disk--1 bs=1024k

Udo
 
