Shrink RAW Disk LVM

daftlink

Member
Mar 14, 2021
Hello,
I'm trying to reduce a virtual disk defined on my local-lvm storage. I managed to shrink the partition properly inside the guest.
I want to set vm-101-disk-0 to 10G.

Code:
lvs -a
  LV              VG  Attr       LSize    Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  base-105-disk-0 pve Vri---tz-k   32.00g data
  data            pve twi-aotz-- <794.79g             59.92  3.06
  [data_tdata]    pve Twi-ao---- <794.79g
  [data_tmeta]    pve ewi-ao----    8.11g
  [lvol0_pmspare] pve ewi-------    8.11g
  root            pve -wi-ao----   96.00g
  swap            pve -wi-ao----    8.00g
  vm-100-disk-0   pve Vwi-aotz--   10.00g data        85.40
  vm-100-disk-1   pve Vwi-aotz--  650.00g data        56.24
  vm-100-disk-2   pve Vwi-aotz--   11.00g data        73.17
  vm-100-disk-3   pve Vwi-aotz--    3.00g data        48.06
  vm-101-disk-0   pve Vwi-aotz--   15.00g data        61.54
  vm-102-disk-0   pve Vwi-aotz--   10.00g data        41.48
  vm-102-disk-1   pve Vwi-aotz--  100.00g data        45.04
  vm-104-disk-0   pve Vwi-aotz--   32.00g data        64.27

After successfully reducing the partition size inside the VM via a rescue CD, I ran the following command on Proxmox:

Code:
root@nuc:~# e2fsck -fy /dev/pve/vm-101-disk-0
e2fsck 1.44.5 (15-Dec-2018)
ext2fs_open2: Bad magic number in super-block
e2fsck: Superblock invalid, trying backup blocks...
e2fsck: Bad magic number in super-block while trying to open /dev/pve/vm-101-disk-0

The superblock could not be read or does not describe a valid ext2/ext3/ext4
filesystem.  If the device is valid and it really contains an ext2/ext3/ext4
filesystem (and not swap or ufs or something else), then the superblock
is corrupt, and you might try running e2fsck with an alternate superblock:
    e2fsck -b 8193 <device>
 or
    e2fsck -b 32768 <device>

Found a gpt partition table in /dev/pve/vm-101-disk-0

Here I don't understand why it found a GPT partition table, since the filesystem is Linux ext4.

To be sure not to erase any data, I first tried a larger size of 12G:

Code:
root@nuc:~# lvresize -L 12G /dev/pve/vm-101-disk-0
  WARNING: Reducing active and open logical volume to 12.00 GiB.
  THIS MAY DESTROY YOUR DATA (filesystem etc.)
Do you really want to reduce pve/vm-101-disk-0? [y/n]: n
  Logical volume pve/vm-101-disk-0 NOT reduced.

root@nuc:~# qm rescan

After this step my disk was corrupted, so I had to start again.

Here is some information from my VM:
Code:
$ lsblk
NAME                      MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda                         8:0    0   15G  0 disk
├─sda1                      8:1    0    1M  0 part
├─sda2                      8:2    0    1G  0 part /boot
└─sda3                      8:3    0   10G  0 part
  └─ubuntu--vg-ubuntu--lv 253:0    0   10G  0 lvm  /

$ df -h
Filesystem                         Size  Used Avail Use% Mounted on
udev                               1,9G     0  1,9G   0% /dev
tmpfs                              394M  740K  393M   1% /run
/dev/mapper/ubuntu--vg-ubuntu--lv  9,8G  6,1G  3,3G  66% /
tmpfs                              2,0G     0  2,0G   0% /dev/shm
tmpfs                              5,0M     0  5,0M   0% /run/lock
tmpfs                              2,0G     0  2,0G   0% /sys/fs/cgroup
/dev/sda2                          976M  299M  610M  33% /boot
tmpfs                              394M     0  394M   0% /run/user/1000

$ fdisk -l
Disk /dev/sda: 15 GiB, 16106127360 bytes, 31457280 sectors
Disk model: QEMU HARDDISK
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: BE688D1E-F78C-4BFB-9E58-C28B100E3637

Device       Start      End  Sectors Size Type
/dev/sda1     2048     4095     2048   1M BIOS boot
/dev/sda2     4096  2101247  2097152   1G Linux filesystem
/dev/sda3  2101248 23066623 20965376  10G Linux filesystem


Disk /dev/mapper/ubuntu--vg-ubuntu--lv: 9,102 GiB, 10733223936 bytes, 20963328 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

Any ideas?
 
Ext4 is the filesystem of your partition. GPT is the partition table that makes it possible to create partitions in the first place, so you always need either GPT or MBR.
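You can confirm this from the host: the LV carries a whole disk image, not a bare filesystem, which is exactly why e2fsck complains (a sketch; the precise blkid output varies by util-linux version):

```shell
# Low-level probe of the LV: it reports a GPT partition table,
# not an ext4 superblock, so e2fsck on the whole LV cannot work.
blkid -p /dev/pve/vm-101-disk-0
# expected to show PTTYPE="gpt" rather than TYPE="ext4"
```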

And lvresize warned you that the logical volume you want to resize is active and open. If you want to resize that LV, you need to do it while it is not in use, otherwise you will corrupt your data. So make sure your VM isn't running.
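In command form, the safe order is roughly this (a sketch using the LV name from this thread; run on the Proxmox host with the VM powered off):

```shell
lv=/dev/pve/vm-101-disk-0   # LV name taken from the lvs output above

lvchange -an "$lv"          # deactivate the LV so nothing holds it open
lvreduce -L 12G "$lv"       # shrink the LV (this is the destructive step)
lvchange -ay "$lv"          # reactivate it before starting the VM again
```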
 
The VM was already shut down, but I forgot to deactivate the volume with lvchange, which explains the warning message. I also detached the disk before the operation. That didn't change the result: the VM is corrupted and displays the message "volume group not found".

The structure inside the VM is:
Code:
$ sudo parted /dev/sda print free
Model: QEMU QEMU HARDDISK (scsi)
Disk /dev/sda: 16,1GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:

Number  Start   End     Size    File system  Name  Flags
        17,4kB  1049kB  1031kB  Free Space
 1      1049kB  2097kB  1049kB                     bios_grub
 2      2097kB  1076MB  1074MB  ext4
 3      1076MB  11,8GB  10,7GB
        11,8GB  16,1GB  4296MB  Free Space

I reduced the LV from 15.00 GiB to 13.00 GiB with the following command:
Code:
root@nuc:~# lvreduce -L 13G /dev/pve/vm-101-disk-0
  Size of logical volume pve/vm-101-disk-0 changed from 15.00 GiB (3840 extents) to 13.00 GiB (3328 extents).
  Logical volume pve/vm-101-disk-0 successfully resized.
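As a sanity check, my fdisk output above gives the minimum safe size: the last partition ends at sector 23066623, and a GPT disk also needs room for the 33-sector backup table at the very end. A quick calculation with the numbers from this thread:

```shell
END_SECTOR=23066623   # last sector of /dev/sda3 (from fdisk -l in the guest)
SECTOR_SIZE=512
GPT_BACKUP=33         # secondary GPT header + table at the end of the disk

MIN_MIB=$(( (END_SECTOR + 1 + GPT_BACKUP) * SECTOR_SIZE / 1024 / 1024 ))
echo "minimum: ${MIN_MIB} MiB"   # 11263 MiB, so 13G (13312 MiB) leaves a margin
```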
 
As I see it, the physical disk (or one of its partitions) holds an LVM volume, and that LVM volume contains your VM disk. The VM disk in turn has its own partitions and LVM.
But you reduced the VM disk size without first reducing the partitions inside the VM disk, didn't you?

That means the partition inside the VM disk is corrupt, and the guest LVM is corrupt as well, because it was the last link in the chain.

It is not easy to save the data from this disk.
First, make a backup of this LVM volume with dd:
Bash:
dd if=/dev/pve/vm-101-disk-0 of=/tmp/backup_vm_101_0.raw
After this step, create a new VM whose local disk is located on a directory storage.
Copy the raw file to /storage_path/image/{vm_id}/vm-{vm_id}....
Increase this disk with the Proxmox GUI to the original size.
Say a prayer and start the VM.
You can see the result on the console.
You can experiment with this VM, for example by attaching a recovery CD.

Next time, don't forget to create a backup before reducing a disk.
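On current Proxmox VE versions you can also use qm importdisk instead of copying the file into the storage path by hand (a sketch; the VM ID 999 and the storage name local-lvm are example values, not from this thread):

```shell
# Back up the raw LV contents, then import the image as an
# unused disk on an existing VM; attach it from the GUI afterwards.
dd if=/dev/pve/vm-101-disk-0 of=/tmp/backup_vm_101_0.raw bs=4M
qm importdisk 999 /tmp/backup_vm_101_0.raw local-lvm
```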
 
No worries, I already had a backup of my VM. I don't want to increase the VM disk but to shrink it.

I had already decreased the LVM partition inside the VM (see the last output in my first post).

The issue is that when I reduce the disk size in Proxmox, it corrupts the VM partition, which then reports that the volume group is not found. I think the problem is related to the fact that I cannot run e2fsck after reducing the disk size from Proxmox.

So far I have no idea how to handle this.
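Digging further, e2fsck fails on /dev/pve/vm-101-disk-0 because the LV holds a whole virtual disk (GPT label, partitions, and the guest's own LVM), not a bare ext4 filesystem. One way to reach the filesystem from the host would be (a sketch; assumes kpartx is installed, the VM is stopped, and the guest VG is named ubuntu-vg as in my lsblk output):

```shell
# Expose the partitions inside the LV as /dev/mapper entries.
kpartx -av /dev/pve/vm-101-disk-0
# creates e.g. /dev/mapper/pve-vm--101--disk--0p1 .. p3

# p3 is the guest's LVM physical volume; activate the guest VG.
vgchange -ay ubuntu-vg

# Now the guest root filesystem can be checked directly.
e2fsck -f /dev/ubuntu-vg/ubuntu-lv

# Deactivate and unmap again before touching the LV size.
vgchange -an ubuntu-vg
kpartx -dv /dev/pve/vm-101-disk-0
```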
 
Did you ever find a solution for this? I'm in the exact same boat. I resized my partition in gparted *inside the guest VM* to 18GB. I'd like to shrink the LV in Proxmox down to 20GB (just to be safe), from 35GB.

When I try to reduce the virtual disk in Proxmox with `lvreduce`, the VM ends up corrupted, and I fall into the initramfs command line when trying to boot it.

I also get the same output from e2fsck:

Code:
root@proxmox:/dev/pve# e2fsck -f /dev/pve/vm-102-disk-0
e2fsck 1.46.2 (28-Feb-2021)
ext2fs_open2: Bad magic number in super-block
e2fsck: Superblock invalid, trying backup blocks...
e2fsck: Bad magic number in super-block while trying to open /dev/pve/vm-102-disk-0

The superblock could not be read or does not describe a valid ext2/ext3/ext4
filesystem.  If the device is valid and it really contains an ext2/ext3/ext4
filesystem (and not swap or ufs or something else), then the superblock
is corrupt, and you might try running e2fsck with an alternate superblock:
    e2fsck -b 8193 <device>
 or
    e2fsck -b 32768 <device>

Found a gpt partition table in /dev/pve/vm-102-disk-0

Which led me to this forum post.
 
Hey @justynnuff ,

The only workaround I found was to convert my virtual disk to the qcow2 format and then follow the Proxmox documentation.
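In outline, the conversion looks like this (a sketch; the paths and the 12G target are example values, and --shrink is required when reducing a qcow2 image):

```shell
# Export the raw LV to a qcow2 image, then shrink the image.
# Only do this once the guest partitions already fit inside the new size.
qemu-img convert -f raw -O qcow2 /dev/pve/vm-101-disk-0 /tmp/vm-101-disk-0.qcow2
qemu-img resize --shrink /tmp/vm-101-disk-0.qcow2 12G
```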
 
