How to resize a VM's disk

lumox

Hi everyone,
I'd like to resize my Windows 10 VM's disk space.
It is now 80 GB and I want to reduce it to 30 GB.
I have already shrunk the volume in the Windows 10 guest with its own partition tool, and the volume is now only 26.68 GB.

My Windows 10 VM setup:

Code:
root@pve:~# qm config 101
agent: 1
balloon: 1024
bootdisk: scsi0
cores: 2
ide1: ISOs:iso/virtio-win-0.1.173.iso,media=cdrom,size=384670K
ide2: none,media=cdrom
memory: 3072
name: Windows10
net0: virtio=72:BC:46:2E:67:77,bridge=vmbr1
numa: 0
ostype: win10
scsi0: VMs:vm-101-disk-0,cache=writeback,size=80G
scsihw: virtio-scsi-pci
smbios1: uuid=3300f60c-c594-4b17-8381-8998b9952eef
sockets: 1
vmgenid: 18059270-8e19-46c7-848d-185a4668981c

Could you please help me figure it out?
Thanks
 
Expanding a disk can be done easily via the GUI. Reducing the size of a VM disk is a lot trickier and therefore not available via the GUI.

By shrinking the volume inside the VM, you have already taken the first step. Hopefully you now see a lot of unpartitioned space at the end of the disk within the VM.

The next step is to resize the storage on which the VM disk is stored. This depends highly on the type of storage.

What kind of storage is the "VMs" storage?
 
The next step is to resize the storage on which the VM disk is stored. This depends highly on the type of storage.

What kind of storage is the "VMs" storage?

What do you mean exactly? How can I see that?
Thanks
 
What do you mean exactly? How can I see that?
Is it ZFS, LVM or something else? If you go to Datacenter -> Storage you will see a column that shows the type of the storage.
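If you prefer the CLI, pvesm status prints the same information. A small sketch of reading its Type column; the layout below mirrors what pvesm status prints, but the storage names and numbers are illustrative, not taken from a real host:

```shell
# pvesm status lists every storage with its type (dir, lvm, zfspool, ...).
# Here we parse a captured sample of its output instead of a live host.
sample='Name             Type     Status           Total            Used       Available        %
local             dir     active        98559220        12761892        80747064   12.95%
VMs               lvm     active       487735296       104857600       382877696   21.50%'

# Print just the storage name and its type:
echo "$sample" | awk 'NR > 1 { print $1, $2 }'
```

On a real node you would simply run pvesm status as root and read the second column.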
 
Is it ZFS, LVM or something else? If you go to Datacenter -> Storage you will see a column that shows the type of the storage.

It is an LVM.
Meanwhile I ran:

lvdisplay

then:

lvresize --size -30GB /dev/VMs/vm-101-disk-0

and it worked.

Then again:

lvdisplay

Code:
--- Logical volume ---
  LV Path                /dev/VMs/vm-101-disk-0
  LV Name                vm-101-disk-0
  VG Name                VMs
  LV UUID                08jXgc-0SpA-SG3R-Pvvd-Cng0-9AAR-xEX3kR
  LV Write Access        read/write
  LV Creation host, time pve, 2020-11-20 11:28:04 +0100
  LV Status              available
  # open                 0
  LV Size                30.00 GiB
  Current LE             7680
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:6

It is now 30.00 GiB, but I still see 80G in its hardware setup:

[screenshot of the VM's Hardware tab in the Proxmox GUI]

How come?
Do I need to edit the 101.conf file in /etc/pve/nodes/pve/qemu-server?
Thanks
 
It is now 30.00 GiB but I still see 80 G it in the its hardware setup:
That value is not updated right away once you resize the volume. You can run qm rescan, which will search for unused volumes and update the disk sizes. For more information, check the man page with man qm on the CLI or the docs: https://pve.proxmox.com/pve-docs/qm.1.html
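In other words, qm rescan (optionally limited with qm rescan --vmid 101) re-reads the real volume sizes and rewrites the stale size= value in the VM config, so there is no need to edit 101.conf by hand. A sketch of the effect on the config line from this thread, simulated with sed on a copy of that line rather than the real file:

```shell
# The disk entry in /etc/pve/qemu-server/101.conf still carries the old size.
# qm rescan effectively replaces it with the actual LV size; simulated here:
line='scsi0: VMs:vm-101-disk-0,cache=writeback,size=80G'
echo "$line" | sed 's/size=80G/size=30G/'
```

After a real qm rescan the GUI's Hardware tab shows the updated size as well.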
 
Hello,

I tried this guide and it didn't work, so I must be doing something wrong.
I am just trying this in lab conditions, so there is no problem with losing data and starting all over.

What I did:
  • Created a 20 GB LVM disk for a Linux VM.
  • Formatted it as ext4 in the Linux VM, mounted it and added some test dirs (only using some KB).
  • Shut down the VM
  • Detached the LVM disk from the Linux VM
  • Assigned it to another VM, a GParted VM
  • In GParted, resized the filesystem to 1 GB and committed the changes. 1 GB partition, 19 GB unallocated, OK
  • Shut down, detached, attached to the Linux VM
  • Launched Linux, mounted this FS. The disk is still seen as 20 GB but the FS as 1 GB. My test files are there. OK
  • Shut down the VM, detached the LVM disk.
  • Via the Proxmox shell, shrunk the LV to 2 GB (just in case, bigger than the 1 GB filesystem). Command used:
  • Code:
    lvresize --size 2GB /dev/pve/vm-101-disk-1
  • Attached it to the GParted VM
  • Booted, and GParted said the filesystem is corrupted; it can only see a 2 GB unallocated partition
  • Tried to "rescue data" or some similar option in GParted, just to see what would happen, with no results. It couldn't find a filesystem
What am I missing here?
It seems the problem is the shrinking; I think I destroyed the FS by doing that.
How can I SAFELY shrink the LV and keep the FS intact?
Keep in mind I tried to shrink to 2 GB when the FS was only 1 GB, just to have some margin of safety, and it still didn't work!

thank you!
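A quick arithmetic check of the sizes in the steps above (assuming the usual 1 MiB partition alignment offset): the LV only has to stay at least as large as the end of the last partition, so a 2 GiB LV should comfortably hold a 1 GiB partition. If GParted still reports everything as unallocated, the damage likely happened elsewhere, e.g. in the partition table itself (a GPT disk keeps a backup copy of its partition table at the very end of the disk, which a shrink cuts off).

```shell
# Sanity check with illustrative numbers: the minimum LV size needed to keep
# the shrunken partition intact is the partition start offset + partition size.
part_start=$((1 * 1024 * 1024))           # typical 1 MiB alignment offset
part_size=$((1 * 1024 * 1024 * 1024))     # 1 GiB partition after the GParted shrink
needed=$((part_start + part_size))
lv_size=$((2 * 1024 * 1024 * 1024))       # LV after: lvresize --size 2GB
if [ "$lv_size" -ge "$needed" ]; then
    echo "LV covers the partition"
else
    echo "LV too small: data loss"
fi
```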
 
Developers of Proxmox: this seems to be a common bug now for users who choose the UEFI BIOS in Proxmox (instead of SeaBIOS). I have to choose UEFI because my virtual disks are bigger than 2 TB, and I hit the same issue when attempting to shrink a VM disk on plain LVM storage in Proxmox.
No issues with SeaBIOS, only UEFI. Is this going to be fixed?
 
Setup: Proxmox 7; disk model: QEMU HARDDISK; Ubuntu 20.04.3 LTS

Expand the QEMU HARDDISK via Proxmox as per the Proxmox VM disk resize steps (VM -> Hardware -> Hard Disk -> Disk Action -> Resize) and then, inside the VM:

Code:
$ sudo fdisk -l

See which partition holds the current Ubuntu setup - it should be obvious based on size (in my case it was sda2). Grow the partition:

Code:
$ sudo growpart /dev/sda 2
# NB: space between `partition` (/dev/sda) and `id` (2)!

Resize the filesystem:

Code:
$ sudo resize2fs /dev/sda2
# NB: *no space* between partition and id!
 
