Shrink ZFS disk

plokko

Jul 27, 2018
Hi.
I'm running Proxmox VE 5.1 on a single node; the filesystem is ZFS (RAID-Z1) and all the VM disks are on local ZFS pools.
While trying to expand the VM 100 disk from 80 to 160 GB I entered the size in MB instead of GB, so now I have an 80 TB drive instead of a 160 GB one (on a 240 GB physical disk... :eek: ).

I tried editing the config file directly (/etc/pve/local/qemu-server/100.conf), and the size now looks correct in the VM options, but the actual VM and the storage section still show 80 TB.
Code:
virtio0: local-zfs:vm-100-disk-1,size=120G

How can I fix it?

Thanks.
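(For reference, what Proxmox has recorded for the disk can be checked without editing files by hand, e.g. with qm config; the VM ID below is the one from this thread.)
Code:
$ qm config 100 | grep virtio0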
 
OK, this command should have fixed it:
Code:
$ zfs set volsize=120G rpool/data/vm-100-disk-1
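To confirm that the zvol itself took the new size, the property can be read back (same dataset name as above):
Code:
$ zfs get volsize rpool/data/vm-100-disk-1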
 
Correction:
Proxmox now shows 120 GB both in the storage and in the VM options, but the VM (Windows Server) still sees an 81 TB drive...

---UPDATE:---

Found an easy fix:
I changed the disk's cache mode in the Proxmox panel from "No cache" to "Direct sync", and now Windows sees the correct disk size.
I don't know if it's reliably repeatable, but it worked.
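(A possibly cleaner way to make Proxmox re-read the disk size from storage, instead of toggling the cache mode, might be qm rescan; this is only a sketch, not what was tested here, and the guest will usually still need the disk re-attached or the VM restarted before it notices.)
Code:
# re-read volume sizes from storage and update the VM config
$ qm rescan --vmid 100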
 
Basic guide (it may need improvements and it's not foolproof) to shrink ZFS disks:

1. Prepare your VM disk for shrinking (shrink the guest partitions first) and shut the VM down.
2. In the PVE shell, set the new size of the ZFS volume (zvol):
Code:
$ zfs set volsize=<new size>G rpool/data/vm-<vm id>-disk-<disk number>
3. Edit the VM config in /etc/pve/local/qemu-server/<vm id>.conf on the line:
Code:
virtio0: local-zfs:vm-<vm id>-disk-<disk number>,size=<new size>G
Obviously your line may not be identical; just edit the disk size.
4. In the PVE panel, change the cache mode to something else and then revert it; this should refresh the config (see the consolidated sketch right after this list).
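For reference, here are the steps consolidated into concrete commands for the VM from the first post (VM 100, virtio0 on rpool/data, new size 120G); treat it as an illustration of the procedure above, not a tested script:
Code:
# step 1: shrink the guest partitions first, then stop the VM
qm shutdown 100

# step 2: shrink the zvol backing the disk
# (anything beyond the new size is lost, so the guest partitions must fit)
zfs set volsize=120G rpool/data/vm-100-disk-1

# step 3: edit the size= value on the disk line of the VM config,
# e.g. virtio0: local-zfs:vm-100-disk-1,size=120G
nano /etc/pve/qemu-server/100.conf

# step 4 is done in the GUI (toggle the cache mode and revert it)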

Note: shrinking can leave the partition table inconsistent (the backup GPT lives at the old end of the disk, which no longer exists). To fix it on Linux, boot a recovery live image such as GParted Live and launch gdisk:
Code:
$ gdisk /dev/vda
and then press v (verify), x (expert menu), e (relocate the backup GPT to the end of the disk), w (write), and y (confirm).
This should do the trick.
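If you prefer a non-interactive equivalent, sgdisk (from the same gdisk package) can relocate the backup GPT in one step; the device name below is just the example from above:
Code:
# move the backup GPT data structures to the (new) end of the disk
$ sgdisk -e /dev/vda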
 
Hey, thank you :). It solved my problem, which was nearly the same: I wanted to resize the disk to 50 GB but forgot that the field expects only the amount to add.

One thing to mention:
It's better to edit the
Code:
/etc/pve/qemu-server/<vm id>.conf
and not the local one. In a clustered environment the files under local are synced with the cluster-wide ones, which can result in your local edit being overwritten, or in the other nodes not seeing the resize.
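(On any given node you can check where the two paths actually point, since on a standard install they are symlinks into the cluster filesystem; a quick check:)
Code:
$ readlink -f /etc/pve/qemu-server
$ readlink -f /etc/pve/local/qemu-server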
 
Old thread, but I'm adding some more info. This worked well for me on a Windows Server VM. I had issues getting Windows diskmgmt.msc to see the size reduction, but the process below worked fine. I didn't have to reboot the Windows server, nor did I have any data loss.

  • Initial disk size: 1500 GB
  • In the GUI I meant to add 500 GB, but I entered 2 TB (2000) as if it were the final size; since the field adds to the current size, it ended up as 3500 GB
  • I want a final size of 2000 GB
  • zfs get volsize vmpool/vm-100-disk-1
    NAME                  PROPERTY  VALUE  SOURCE
    vmpool/vm-100-disk-1  volsize   3.42T  local
  • zfs set volsize=1999G vmpool/vm-100-disk-1, then zfs get volsize vmpool/vm-100-disk-1 again:
    NAME                  PROPERTY  VALUE  SOURCE
    vmpool/vm-100-disk-1  volsize   1.95T  local
  • Edit the Proxmox qemu-server conf file so it shows the correct size
  • vim /etc/pve/qemu-server/100.conf
  • change
    • scsi1: VMpool:vm-100-disk-1,cache=writeback,discard=on,size=3500G
  • to:
    • scsi1: VMpool:vm-100-disk-1,cache=writeback,discard=on,size=1999G
  • Click somewhere else in the Proxmox GUI and come back to Hardware to see the change
  • Windows diskmgmt.msc still doesn't see the change; it shows 2 TB of available storage for the disk
  • In the Proxmox GUI, click Resize disk and add 1G (the CLI equivalent is sketched after this list)
  • Windows diskmgmt.msc now shows 500 GB unallocated (2 TB total)
  • Extend the volume
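As an aside, the "resize by 1G" nudge can also be done from the CLI; a small sketch using the VM ID and disk name from this post:
Code:
# grow the disk by 1G -- the same trick used above to make Windows notice the new size
$ qm resize 100 scsi1 +1G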
 
For a Windows VM, do I need to resize the guest partition before shrinking volsize?
I highly recommend you do so; also, take a backup :) If something goes wrong, you can restore it, and try a different method.
If you run into difficulties, I recommend downloading an Ubuntu ISO image and booting the Windows VM from it.
GParted has some nice functionality, including moving the Windows recovery partition (it is best to shrink the main partition from within Windows first, to leave room to move the recovery partition).
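Before actually shrinking the zvol, it can also help to confirm from the live environment that every partition ends below the intended new size; a quick check (the device name is an assumption and depends on the disk bus):
Code:
# print partition start/end in GiB to verify everything fits under the new volsize
$ parted /dev/vda unit GiB print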
 
A huge thank you for this!!!

I registered just to say that, but I'll stick around a bit, as Proxmox is awesome!!
 
