Hi,
I have created a cluster with two Proxmox nodes that use mirrored storage built with DRBD (I followed this guide: http://pve.proxmox.com/wiki/DRBD to create the shared storage).
My DRBD config file r0.res is this:
resource r0 {
    protocol C;
    startup {
        wfc-timeout 10;
        degr-wfc-timeout 10;
        outdated-wfc-timeout 10;
        become-primary-on both;
    }
    net {
        max-buffers 8192;
        max-epoch-size 8192;
        allow-two-primaries;
        after-sb-0pri discard-zero-changes;
        after-sb-1pri discard-secondary;
        after-sb-2pri disconnect;
        unplug-watermark 128;
    }
    disk {
        on-io-error detach;
        no-disk-barrier;
        no-disk-flushes;
        no-md-flushes;
    }
    syncer {
        al-extents 3389;
        rate 110M;
    }
    on virtual1 {
        device /dev/drbd0;
        disk /dev/mapper/pve-drbd;
        address 10.0.0.1:7788;
        meta-disk internal;
    }
    on virtual2 {
        device /dev/drbd0;
        disk /dev/mapper/pve-drbd;
        address 10.0.0.2:7788;
        meta-disk internal;
    }
}
Now I want to expand the DRBD disk without shutting down the VMs that are running in the cluster.
I have seen that I can use two methods:
1- add disks to the physical servers, set up a new DRBD device on both machines, create a new resource (for example r1) and add the new disk (LVM) with the Proxmox web GUI (a rough sketch is shown after this list). With this solution everything works fine.
2- add disks to the physical servers and expand the /dev/pve/drbd volume. With this solution I have some problems.
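For reference, this is roughly what I do for method 1 (only a sketch: the backing LV pve-drbd1, the port 7789 and the VG name drbd1vg are example names, not my real values):
resource r1 {
    protocol C;
    startup { become-primary-on both; }
    net { allow-two-primaries; }
    on virtual1 {
        device /dev/drbd1;
        disk /dev/mapper/pve-drbd1;
        address 10.0.0.1:7789;
        meta-disk internal;
    }
    on virtual2 {
        device /dev/drbd1;
        disk /dev/mapper/pve-drbd1;
        address 10.0.0.2:7789;
        meta-disk internal;
    }
}
Then on both nodes:
drbdadm create-md r1 -> create the DRBD metadata on the new backing LV
drbdadm up r1 -> attach and connect the new resource
On one node only:
drbdadm -- --overwrite-data-of-peer primary r1 -> start the initial sync
pvcreate /dev/drbd1 -> create the LVM physical volume on the new DRBD device
vgcreate drbd1vg /dev/drbd1 -> create the VG that I then add as LVM storage in the Proxmox web GUI
(After the sync I run drbdadm primary r1 on the second node so both sides are primary.)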
For method 2, assuming the new disk is /dev/sdb and the corresponding partition is /dev/sdb1, the procedure I used is this:
pvcreate /dev/sdb1 -> create a new physical volume
vgextend pve /dev/sdb1 -> extend VG pve
lvextend -L+100G /dev/pve/drbd -> extend the LV that backs r0
## Set one DRBD node to secondary state
drbdadm secondary r0
## Run the resize command on the other node (the node in primary state)
drbdadm resize r0 -> resize the DRBD resource
resize2fs /dev/drbd0 -> resize the filesystem
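(To check whether the DRBD device itself really grew after drbdadm resize, before touching what sits on top of it, I can run the following; blockdev is from util-linux:)
blockdev --getsize64 /dev/drbd0 -> size of the DRBD device in bytes
cat /proc/drbd -> connection and sync state of the resource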
The problem is that when I run the resize2fs /dev/drbd0 command I get this error message and the size of the DRBD disk does not change:
resize2fs: Bad magic number in super-block while trying to open /dev/drbd0
Couldn't find valid filesystem superblock.
Or this error:
resize2fs: Device or resource busy while trying to open /dev/drbd0 Couldn't find valid filesystem superblock.
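If it helps, these are the commands I can run to check what /dev/drbd0 actually contains (blkid is from util-linux, pvs from LVM2):
blkid /dev/drbd0 -> show the signature/type that is on the device
pvs /dev/drbd0 -> show whether the device is an LVM physical volume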
Can you give me some tips to resolve this problem?
I have seen that if I shut down all my VMs and, instead of the resize2fs command, I run pvresize /dev/drbd0 on one node, the size of the DRBD disk is expanded correctly, but this way I must power off all my VMs.
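For clarity, this is roughly the sequence that works for me offline (the VM ID 100 is only an example; every VM that uses this storage must be stopped):
qm shutdown 100 -> shut down a VM from the command line
pvresize /dev/drbd0 -> grow the LVM physical volume that sits on drbd0
pvs /dev/drbd0 -> verify the new PV size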
Is there another way to expand the size of my DRBD disk without shutting down the VMs in the cluster?
Thank you in advance.
Best regards