resize windows kvm vm (LVM)

chrisalavoine

Renowned Member
Sep 30, 2009
Hi all,

I need to resize (enlarge) one of my VMs.

I have a 2-node cluster with DRBD; all my VMs are running on LVM volumes.

I've read a few threads about converting qcow2 to raw and then resizing, but how would I do this with an LVM volume?

Any help much appreciated.
Regards,
Chris.
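For reference, the usual approach with an LVM-backed KVM disk is to grow the logical volume on the host and then grow the partition and filesystem inside the guest. A minimal sketch, with assumed volume group and LV names (vg1, vm-101-disk-1 are placeholders, not from this thread):

```shell
# Host side: grow the LV by 10 GiB while the VM is stopped.
# (vg1 and vm-101-disk-1 are assumed names; substitute your own.)
lvextend -L +10G /dev/vg1/vm-101-disk-1

# Guest side, after booting the VM: grow the partition with fdisk/parted,
# then grow the filesystem to fill it, e.g. for ext3/ext4:
#   resize2fs /dev/sda1
```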
 
Hopefully none.

I guess that is my question really. Will it cause me any headaches? This is a high profile production VM and I don't want to kill it.

c:)
 
I had to resize a disk image stored on an FC LVM volume (3-machine cluster), so I ran lvresize on the master. The volume was resized fine, but the web interface still showed the old size. I then migrated this KVM container to another cluster node, after which the disk size was shown correctly. A few minutes later the machine crashed. I pressed the reset button in the web interface, but the container would not boot: it saw the old (smaller) disk size and complained that the root partition was bigger than the disk. After migrating the container back to the master node, it started fine and saw the post-resize disk size. Did I do something wrong?
 
Is that error reproducible?

Yes, running lvextend on the master (the container was stopped) does not update the disk size in the web interface. If I run this command on the master while the container runs on a different node, the logical volume size is not changed there.
 
Yes, running lvextend on the master (the container was stopped) does not update the disk size in the web interface.

Yes, that is because the pve tools can't know if you manually resize something.

If I run this command on the master while the container runs on a different node, the logical volume size is not changed there.

In PVE 2.0 you can use clvm, which solves this problem.
 
Yes, that is because the pve tools can't know if you manually resize something.



In PVE 2.0 you can use clvm, which solves this problem.

CLVM does not support snapshots, which are very useful for backups. Do you have any workaround for this? The only solution I found is to disable clustering on the LVM volume group before taking the snapshot and re-enable it afterwards, but I'm not sure this is safe when several CLVM cluster members do it at the same time: one could create a snapshot in the area where another cluster member has just created one. If Proxmox were to do CLVM snapshots by disabling the cluster flag, it would need to do so cluster-wide, so that no other node modifies volumes while the flag is disabled.
 
Ah, good to know. So maybe it is best to run clvm, but not to set the cluster flag.

I don't think that would work. AFAIK, if an LVM volume group is going to be shared, its cluster flag must be set; LVM will then treat it as shared and synchronize metadata across the cluster. The trick of disabling the flag works because clustering is only required to keep LVM metadata in sync: if you disable the cluster flag, make some metadata changes on one node only (such as creating a snapshot), and then revert those changes (remove the snapshot after the backup), the metadata remains in sync on all cluster nodes.
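The workaround described above can be sketched as a shell sequence. The VG and LV names are assumptions, and this is only safe if no other node touches LVM metadata while the flag is cleared:

```shell
VG=drbdvg                      # assumed volume group name
LV=vm-101-disk-1               # assumed logical volume name

vgchange -cn $VG               # clear the cluster flag on the VG
lvcreate -s -L 1G -n backup-snap /dev/$VG/$LV   # create the snapshot
# ... run the backup against /dev/$VG/backup-snap ...
lvremove -f /dev/$VG/backup-snap                # remove the snapshot again
vgchange -cy $VG               # restore the cluster flag
```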
 
... and revert those changes (remove the snapshot after the backup), the metadata remains in sync on all cluster nodes.

What happens when you try to make a snapshot when the cluster flag is set? Does that trigger an error - or is it just not recommended (because clvm does not support distributed snapshots)?
 
I am trying to resize a KVM disk using Proxmox 2.x. I used

lvresize -L +10G /dev/mapper/vg1-vm--105--disk--1

and it worked, as lvs shows the newly resized disk. However, when I try to use resize2fs I get an error:

resize2fs /dev/mapper/vg1-vm--105--disk--1
resize2fs 1.41.12 (17-May-2010)
resize2fs: Bad magic number in super-block while trying to open /dev/mapper/vg1-vm--105--disk--1

Of course, VM 105 does not see the extra space. I am reading here about clvm but cannot seem to make it work; it's mostly for clusters, I think, and I only have one server right now, so all VMs are on the local host for now.

Can someone here please help me make my KVM virtual hard disk larger? Hoping this will be automated within version 2.x once the backup and restore features are available.
 
resize2fs /dev/mapper/vg1-vm--105--disk--1
resize2fs 1.41.12 (17-May-2010)
resize2fs: Bad magic number in super-block while trying to open /dev/mapper/vg1-vm--105--disk--1

I assume you partitioned the volume inside the VM? So you need to resize the fs on the correct partition instead.
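In other words, if the guest disk has a partition table, resize2fs must be pointed at a partition, not the raw LV. A sketch from inside the guest (device names assumed):

```shell
# Inside the VM, after the host-side lvresize and a reboot:
fdisk /dev/sda        # delete /dev/sda1 and recreate it with the SAME start
                      # sector but a larger end, write the table, then reboot
resize2fs /dev/sda1   # grow the ext3/ext4 filesystem into the new space
```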
 
Yes, of course I tried resizing the hard disk inside the virtual machine. Here is a summary of what I tried.
On the Proxmox physical machine
Show the original size
# lvs
vm-105-disk-1 vg1 -wi-ao 5.00g

Do the resize
# lvresize -L +10G /dev/mapper/vg1-vm--105--disk--1

Show the new size
# lvs
vm-105-disk-1 vg1 -wi-ao 15.00g

Try to resize the file system on the lvm
# resize2fs /dev/mapper/vg1-vm--105--disk--1
resize2fs 1.41.12 (17-May-2010)
resize2fs: Bad magic number in super-block while trying to open /dev/mapper/vg1-vm--105--disk--1
Couldn't find valid filesystem superblock.

Try to resize the filesystem on the LVM a different way; sometimes Linux is picky
# resize2fs /dev/vg1/vm-105-disk-1
resize2fs 1.41.12 (17-May-2010)
resize2fs: Bad magic number in super-block while trying to open /dev/vg1/vm-105-disk-1
Couldn't find valid filesystem superblock.

****** OK, since the above is not working, try to resize the filesystem in the virtual machine ******

Show the original virtual hard drive size
# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/sda1 4.7G 4.5G 8.8M 100% /

Try to resize the virtual harddisk
# resize2fs /dev/sda1
resize2fs 1.41.11 (14-Mar-2010)
The filesystem is already 1239808 blocks long. Nothing to do!
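"Nothing to do!" here suggests the filesystem already fills its partition: the LV grew, but the partition table inside it still describes the old ~5G partition. One way to check and fix this from the host, with the VM stopped, is via kpartx (the partition-mapping device names below are assumptions and can vary):

```shell
# Show the partition table stored inside the LV; the partition will still
# report the old size even though the LV is now 15G.
fdisk -l /dev/vg1/vm-105-disk-1

# Map the LV's partitions on the host so they can be checked and resized,
# e.g. as /dev/mapper/vg1-vm--105--disk--1p1 (exact name may vary).
kpartx -av /dev/vg1/vm-105-disk-1

# After enlarging the partition entry with fdisk (same start sector,
# larger end), run fsck and grow the filesystem:
e2fsck -f /dev/mapper/vg1-vm--105--disk--1p1
resize2fs /dev/mapper/vg1-vm--105--disk--1p1

# Remove the mappings before starting the VM again.
kpartx -dv /dev/vg1/vm-105-disk-1
```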