DRBD on LVM - resize PV

rbertolini

New Member
Jan 13, 2015
Hi,
I have created a cluster with two Proxmox nodes that use mirrored storage built with DRBD (I used this guide, http://pve.proxmox.com/wiki/DRBD, to create the shared storage).
My DRBD config file r0.res is this:
resource r0 {
    protocol C;
    startup {
        wfc-timeout 10;
        degr-wfc-timeout 10;
        outdated-wfc-timeout 10;
        become-primary-on both;
    }
    net {
        max-buffers 8192;
        max-epoch-size 8192;
        allow-two-primaries;
        after-sb-0pri discard-zero-changes;
        after-sb-1pri discard-secondary;
        after-sb-2pri disconnect;
        unplug-watermark 128;
    }
    disk {
        on-io-error detach;
        no-disk-barrier;
        no-disk-flushes;
        no-md-flushes;
    }
    syncer {
        al-extents 3389;
        rate 110M;
    }
    on virtual1 {
        device /dev/drbd0;
        disk /dev/mapper/pve-drbd;
        address 10.0.0.1:7788;
        meta-disk internal;
    }
    on virtual2 {
        device /dev/drbd0;
        disk /dev/mapper/pve-drbd;
        address 10.0.0.2:7788;
        meta-disk internal;
    }
}
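For reference, the storage layering from that wiki guide is roughly this (only a sketch, using the resource and VG names that appear in this thread; the size is just a placeholder):
## on both nodes: create the backing LV for DRBD inside the local VG "pve"
lvcreate -L 500G -n drbd pve
drbdadm create-md r0 ## initialise the DRBD metadata
drbdadm up r0
## on one node only, to start the initial full sync:
drbdadm -- --overwrite-data-of-peer primary r0
## once in sync, the DRBD device itself becomes an LVM PV for the VM disks:
pvcreate /dev/drbd0
vgcreate drbdvg /dev/drbd0 ## this VG is then added as LVM storage in the Proxmox GUI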
Now I want to expand the DRBD disk without shutting down the VMs that are running in the cluster.
I have seen that I can use two methods:
1- add disks to the physical servers, set up a new DRBD device on both machines, create a new resource (for example r1) and add the new disk (LVM) with the Proxmox web GUI. With this solution everything works fine.
2- add disks to the physical servers and expand the /dev/pve/drbd volume. With this solution I have some problems.


Assuming the new disk is /dev/sdb and its partition is /dev/sdb1, the procedure I have used is this:
pvcreate /dev/sdb1 -> create a new physical volume
vgextend pve /dev/sdb1 -> extend VG pve
lvextend -L+100G /dev/pve/data -> extend LV /dev/pve/data
## Set one DRBD node to secondary state
drbdadm secondary r0
## Run the resize command on the other node (the node in primary state)
drbdadm resize r0 -> resize the primary DRBD resource
resize2fs /dev/drbd0 -> resize the filesystem


The problem is that when I run the resize2fs /dev/drbd0 command I get this error message and the size of the DRBD disk does not change:


resize2fs: Bad magic number in super-block while trying to open /dev/drbd0
Couldn't find valid filesystem superblock.
Or this error:
resize2fs: Device or resource busy while trying to open /dev/drbd0 Couldn't find valid filesystem superblock.
Can you give me some tips to resolve this problem?


I have seen that if I shut down all my VMs and, instead of the resize2fs command, run pvresize /dev/drbd0 on one node, the size of the DRBD disk is expanded correctly, but this way I must power off all my VMs.
Is there another way to expand the size of my DRBD disk without shutting down the VMs in the cluster?


Thank you in advance.
Best regards
 
Hi,
resize2fs works with ext-X filesystems, but your DRBD volume is a block device like an HDD, not a filesystem!
You have to resize the DRBD device as described here:
http://drbd.linbit.com/users-guide-8.3/s-resizing.html
(if you have an underlying RAID controller which can resize your DRBD disk).
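For reference, a quick way to see why resize2fs complains is to check what is actually on the DRBD device; since /dev/drbd0 is used as an LVM physical volume here, blkid should report LVM2_member rather than an ext filesystem (a minimal check, assuming the resource is primary on the node where you run it):
blkid /dev/drbd0 -> should show TYPE="LVM2_member", not an ext3/ext4 filesystem
pvs /dev/drbd0 -> confirms the device is an LVM PV and shows the size LVM currently records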

Udo

Edit: Safe way - if you have enough other storage - move all VM disks to another storage, deactivate the DRBD VGs with "vgchange -a n ....", remove the DRBD PVs, recreate the DRBD volumes (bigger), and create the LVM storage again.
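Roughly, that offline rebuild could look like this; it is only a sketch using the resource and VG names from this thread (r0, /dev/pve/drbd, drbdvg), it destroys everything on the DRBD storage, and each step should be checked against the DRBD and LVM documentation before use:
## on one node, after all VM disks have been moved away:
vgchange -a n drbdvg ## deactivate the VG on top of DRBD (also on the other node)
vgremove drbdvg
pvremove /dev/drbd0
## on both nodes:
drbdadm down r0 ## stop the resource
lvextend -L+100G /dev/pve/drbd ## grow (or recreate bigger) the backing LV
drbdadm create-md r0 ## recreate the DRBD metadata
drbdadm up r0
## on one node only:
drbdadm -- --overwrite-data-of-peer primary r0 ## initial full sync
pvcreate /dev/drbd0 ## recreate the PV and VG, then re-add the LVM storage in the GUI
vgcreate drbdvg /dev/drbd0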
 
Hi Udo,
I can't move all the VM disks to another storage because I have only one storage in the cluster.
I have resized the DRBD device following this procedure, http://drbd.linbit.com/users-guide-8.3/s-resizing.html, but after the command "drbdadm resize r0" and the related synchronization, the PV (/dev/drbd0) and VG (drbdvg) do not change size.
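For reference, one way to see that mismatch (just a quick check; the pvs output fields are taken from current LVM2 versions) is to compare the size of the DRBD device with the size LVM still records for the PV:
blockdev --getsize64 /dev/drbd0 -> size of the DRBD device after drbdadm resize
pvs -o pv_name,pv_size,dev_size /dev/drbd0 -> PSize keeps the old value until pvresize is run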

The list of commands I used to resize the DRBD disk is this (/dev/sdb is the newly added disk):
pvcreate /dev/sdb1 -> create a new physical volume (on both nodes)
vgextend pve /dev/sdb1 -> extend VG pve (on both nodes)
lvextend -L+100G /dev/pve/data -> extend LV /dev/pve/data (on both nodes)
## Set one DRBD node to secondary state
drbdadm secondary r0
## Run the resize command on the other node (the node in primary state)
drbdadm resize r0 -> resize the primary DRBD resource; this started the synchronization

The only way I have found to update the PV and VG size after the drbdadm resize r0 command is to run pvresize /dev/drbd0, but with this method I must shut down all VMs in the cluster.
Can you give me some other tips to resolve my problem?

Thanks,
Robert
 
I don't use DRBD (I have a Dell MD3200 that's SAS-attached to the nodes), but it should mostly be the same. Remember that it's a dangerous procedure. Always keep offline backups!!!

First, you have to deactivate the VG to be expanded ('test' in the example), which stops all the LVs it hosts. So you can't do it online with VMs running.

# vgchange -a n test
0 logical volume(s) in volume group "test" now active

Rescan all LUNs, just to be sure:

# for T in /sys/class/scsi_device/*/device/rescan; do echo "1" > $T; done

Then I need to tell multipath that the disk is bigger, but you can probably skip this:

multipath -r 36d4ae...21381c
reload: 36d4ae...21381c undef DELL,MD32xx
size=20G features=...
...

Now parted can see that the disk is bigger and that the backup GPT is not at the end as it should be. Extend the partition:

# parted /dev/mapper/36d4ae...21381c
(parted) p
Error: the backup GPT table is not at the end of the disk, as it should be. ...
...
Fix/Ignore/Cancel? f
Warning: not all the space available to /dev/mapper/36d4ae...21381c appears to be used, ...
Fix/Ignore? f
Model: Linux device-mapper (multipath) (dm)
Disk /dev/mapper/36d4ae...21381c: 21.5GB
...
(parted) rm 1
(parted) mkpart test 1 100%
(parted) set 1 lvm on

NEVER move the partition start or you'll lose all your data! Too bad that, the last time I tried, parted didn't allow resizing partitions of unknown content.

# pvresize /dev/mapper/36d4ae...21381c
1 physical volume(s) resized / 0 physical volume(s) not resized
# vgdisplay test
...
VG Size 20GiB
...

And now that the PV is resized you can add more LVs (VMs) or extend the existing ones.
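For example (the LV name below is made up), extending an existing VM disk once the VG has free space again; the guest still has to grow its own partition and filesystem afterwards:

# lvextend -L+10G /dev/test/vm-100-disk-1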

Hope it helps.
 
Hi NdK73,
Thank you so much for the reply. I will now do some tests with your tips.

I have seen that if I use pvresize /dev/pve/drbd after the command drbdadm resize r0, the PV and LV on DRBD are expanded correctly.
I can run pvresize /dev/pve/drbd online, i.e. with the VMs active in the cluster.
Is using this method a good solution, or can I lose data from the DRBD disk? In my tests the VMs haven't had problems after this procedure, but maybe I am missing some details, because I don't know the DRBD environment very well.

So, the procedure I use is:
pvcreate /dev/sdb1 -> create a new physical volume (on both nodes)
vgextend pve /dev/sdb1 -> extend VG pve (on both nodes)
lvextend -L+100G /dev/pve/data -> extend LV /dev/pve/data (on both nodes)
## Set one DRBD node to secondary state
drbdadm secondary r0
## Run the resize command on the other node (the node in primary state)
drbdadm resize r0 -> resize the primary DRBD resource; this started the synchronization
pvresize /dev/pve/drbd -> resize the PV

/dev/pve/drbd is used as the disk in the DRBD config file (r0.res). You can see the r0.res config in my first post.
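To check the result afterwards (only a minimal verification, using the names from this thread; /proc/drbd is what DRBD 8.3 exposes for the resource state):
cat /proc/drbd -> should show Connected and UpToDate/UpToDate once the resync has finished
pvs -> check the new PV size
vgs drbdvg -> check the new VG size before creating or extending VM disks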

Robert
 
