Shrink ZFS disk

plokko
Active Member
Jul 27, 2018
Hi.
I'm running Proxmox VE 5.1 on a single node; the filesystem is on ZFS (RAIDZ1) and all the VM disks are on local ZFS pools.
While trying to expand the VM 100 disk from 80 to 160 GB I entered the size in MB instead of GB, so now I have an 80 TB drive instead of 160 GB (on a 240 GB disk... :eek:).

I tried to edit the config file directly (/etc/pve/local/qemu-server/100.conf) and it now looks correct in the VM options, but the actual VM and the storage section still show 80 TB.
Code:
virtio0: local-zfs:vm-100-disk-1,size=120G

How can I fix it?

Thanks.
 
OK, this command should have fixed it:
Code:
$ zfs set volsize=120G rpool/data/vm-100-disk-1
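To verify the result, these standard zfs/qm commands show what ZFS and the VM config now record for the disk (same dataset name and VM ID as above):
Code:
# zvol size as ZFS sees it
zfs get volsize rpool/data/vm-100-disk-1
# disk size as recorded in the VM config
qm config 100 | grep virtio0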
 
Correction:
Proxmox now shows 120 GB in both the storage and the VM options, but the VM (Windows Server) still sees an 81 TB drive...

---UPDATE:---

Found an easy fix:
I changed the disk's cache mode in the Proxmox panel from "no cache" to "directsync", and now Windows shows the correct disk size.
I don't know if it's reliably repeatable, but it worked.
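An alternative that might achieve the same refresh without touching the cache mode (I haven't verified it in this exact situation) is forcing a disk rescan from inside the Windows guest:
Code:
rem inside the Windows guest, in an elevated command prompt
diskpart
DISKPART> rescan
DISKPART> exit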
 
Basic guide (may need improvements, and it is not foolproof) to shrink ZFS disks:

1. Prepare your VM disk for shrinking (shrink the guest filesystem/partitions first) and shut the VM down
2. In the PVE shell, set the new ZFS volume size
Code:
$ zfs set volsize=<new size>G rpool/data/vm-<vm id>-disk-<disk number>
3. Edit the VM config in /etc/pve/local/qemu-server/<vm id>.conf on the line
Code:
virtio0: local-zfs:vm-<vm id>-disk-<disk number>,size=<new size>G
Obviously the line may not look exactly the same; just edit the disk size.
4. In the PVE panel, change the cache mode to something else and then back; this should make the new size take effect

Note: you may corrupt your partition table. To fix it on Linux, boot a recovery live image such as GParted Live and launch gdisk
Code:
$ gdisk /dev/vda
and then press v (verify), x (expert menu), e (relocate the backup GPT data structures to the end of the disk), w (write) and y (confirm).
This should do the trick.
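Putting the steps together with example values (VM 100, virtio0, shrinking to 120G on local-zfs / rpool/data, as in my case); only standard qm/zfs commands:
Code:
# 0. shrink the partitions/filesystem inside the guest first, then stop the VM
qm shutdown 100

# 1. set the new zvol size (data beyond this point is lost!)
zfs set volsize=120G rpool/data/vm-100-disk-1

# 2. fix the size recorded in the VM config, e.g.
#    virtio0: local-zfs:vm-100-disk-1,size=120G
nano /etc/pve/local/qemu-server/100.conf

# 3. check what Proxmox now reports for the disk
qm config 100 | grep virtio0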
 
Hey, thank you :). It solved my problem, which was nearly the same. I wanted to resize the disk to 50 GB but forgot that you only enter the amount to add.

One thing to mention:
It's better to edit the
Code:
/etc/pve/qemu-server/<vm id>.conf
and not the local one. In a clustered environment the files in local are synced with the ones from pve, which can result in the local version being overwritten, or in the other nodes not seeing the resize.
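If in doubt about which file you are really editing, you can check where those paths resolve on your node (readlink is a standard tool; the exact targets may differ per setup):
Code:
readlink -f /etc/pve/qemu-server
readlink -f /etc/pve/local/qemu-server
# if both resolve to the same nodes/<node name>/qemu-server directory,
# either path ends up editing the same file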
 
Old thread, but I'm adding some more info. This worked well for me on a Windows Server VM. I had issues getting Windows diskmgmt.msc to see the disk size reduction; the process below worked fine. I didn't have to reboot the Windows server, nor did I have any data loss.

  • Initial disk size: 1500 GB
  • In the GUI I meant to add 500 GB, but instead entered a final size of 2 TB; since the resize dialog adds to the existing size, it ended up at 3500 GB
  • I want a final size of 2000 GB
  • zfs get volsize vmpool/vm-100-disk-1
NAME                  PROPERTY  VALUE  SOURCE
vmpool/vm-100-disk-1  volsize   3.42T  local
  • zfs set volsize=1999G vmpool/vm-100-disk-1
NAME                  PROPERTY  VALUE  SOURCE
vmpool/vm-100-disk-1  volsize   1.95T  local
  • Edit the Proxmox qemu conf file to the correct size
  • vim /etc/pve/qemu-server/100.conf
  • change
    • scsi1: VMpool:vm-100-disk-1,cache=writeback,discard=on,size=3500G
  • to:
    • scsi1: VMpool:vm-100-disk-1,cache=writeback,discard=on,size=1999G
  • click somewhere else in Proxmox and come back to Hardware to see the changes
  • Windows diskmgmt.msc still doesn't see the change; it shows 2 TB of available storage for the disk.
  • In the Proxmox GUI, click "Resize disk" and add 1G (see the CLI sketch after this list)
  • Windows diskmgmt.msc now shows 500 GB unallocated (2 TB total)
  • Extend the volume.
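For reference, the "+1G" bump can also be done from the shell; a sketch with my VM/disk names (qm resize adds to the current size when the value is prefixed with +):
Code:
# grow scsi1 of VM 100 by 1G so the guest notices the size change
qm resize 100 scsi1 +1G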
 
For a Windows VM, do I need to resize the guest partition before shrinking volsize?
I highly recommend you do so; also, take a backup :) If something goes wrong, you can restore it and try a different method.
If you run into difficulties, I recommend downloading an Ubuntu ISO image and booting the Windows VM from it.
GParted has some nice functionality, including moving the Windows recovery partition (it is best to shrink the main partition in Windows first, to leave room to move the recovery partition).
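For the "shrink the main partition in Windows first" part, a minimal diskpart sketch (volume letter and amount are placeholders; diskpart's shrink value is in MB):
Code:
rem inside the Windows guest, in an elevated command prompt
rem 40960 MB is roughly 40 GB (placeholder amount)
diskpart
DISKPART> list volume
DISKPART> select volume C
DISKPART> shrink desired=40960
DISKPART> exit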
 
(quoting the shrink guide posted above)
A huge thank you for this!!!

I registered just to say that, but I'll stick around a bit, as Proxmox is awesome!!
 
What did I do wrong?

Code:
# zfs set volsize=98304G zfs10-pool/subvol-103-disk-0
cannot set property for 'zfs10-pool/subvol-103-disk-0': 'volsize' does not apply to datasets of this type

Code:
# zfs get "all" zfs10-pool/subvol-103-disk-0
NAME                          PROPERTY              VALUE                          SOURCE
zfs10-pool/subvol-103-disk-0  type                  filesystem                     -
zfs10-pool/subvol-103-disk-0  creation              Mon Jul  5  2:19 2021          -
zfs10-pool/subvol-103-disk-0  used                  8.17T                          -
zfs10-pool/subvol-103-disk-0  available             5.03T                          -
zfs10-pool/subvol-103-disk-0  referenced            7.83T                          -
zfs10-pool/subvol-103-disk-0  compressratio         1.01x                          -
zfs10-pool/subvol-103-disk-0  mounted               yes                            -
zfs10-pool/subvol-103-disk-0  quota                 none                           default
zfs10-pool/subvol-103-disk-0  reservation           none                           default
zfs10-pool/subvol-103-disk-0  recordsize            128K                           default
zfs10-pool/subvol-103-disk-0  mountpoint            /zfs10-pool/subvol-103-disk-0  default
zfs10-pool/subvol-103-disk-0  sharenfs              off                            default
zfs10-pool/subvol-103-disk-0  checksum              on                             default
zfs10-pool/subvol-103-disk-0  compression           on                             inherited from zfs10-pool
zfs10-pool/subvol-103-disk-0  atime                 on                             default
zfs10-pool/subvol-103-disk-0  devices               on                             default
zfs10-pool/subvol-103-disk-0  exec                  on                             default
zfs10-pool/subvol-103-disk-0  setuid                on                             default
zfs10-pool/subvol-103-disk-0  readonly              off                            default
zfs10-pool/subvol-103-disk-0  zoned                 off                            default
zfs10-pool/subvol-103-disk-0  snapdir               hidden                         default
zfs10-pool/subvol-103-disk-0  aclmode               discard                        default
zfs10-pool/subvol-103-disk-0  aclinherit            restricted                     default
zfs10-pool/subvol-103-disk-0  createtxg             175224                         -
zfs10-pool/subvol-103-disk-0  canmount              on                             default
zfs10-pool/subvol-103-disk-0  xattr                 sa                             local
zfs10-pool/subvol-103-disk-0  copies                1                              default
zfs10-pool/subvol-103-disk-0  version               5                              -
zfs10-pool/subvol-103-disk-0  utf8only              off                            -
zfs10-pool/subvol-103-disk-0  normalization         none                           -
zfs10-pool/subvol-103-disk-0  casesensitivity       sensitive                      -
zfs10-pool/subvol-103-disk-0  vscan                 off                            default
zfs10-pool/subvol-103-disk-0  nbmand                off                            default
zfs10-pool/subvol-103-disk-0  sharesmb              off                            default
zfs10-pool/subvol-103-disk-0  refquota              17T                            local
zfs10-pool/subvol-103-disk-0  refreservation        none                           default
zfs10-pool/subvol-103-disk-0  guid                  11733771521295071318           -
zfs10-pool/subvol-103-disk-0  primarycache          all                            default
zfs10-pool/subvol-103-disk-0  secondarycache        all                            default
zfs10-pool/subvol-103-disk-0  usedbysnapshots       346G                           -
zfs10-pool/subvol-103-disk-0  usedbydataset         7.83T                          -
zfs10-pool/subvol-103-disk-0  usedbychildren        0B                             -
zfs10-pool/subvol-103-disk-0  usedbyrefreservation  0B                             -
zfs10-pool/subvol-103-disk-0  logbias               latency                        default
zfs10-pool/subvol-103-disk-0  objsetid              2361                           -
zfs10-pool/subvol-103-disk-0  dedup                 off                            default
zfs10-pool/subvol-103-disk-0  mlslabel              none                           default
zfs10-pool/subvol-103-disk-0  sync                  standard                       default
zfs10-pool/subvol-103-disk-0  dnodesize             legacy                         default
zfs10-pool/subvol-103-disk-0  refcompressratio      1.01x                          -
zfs10-pool/subvol-103-disk-0  written               2.25T                          -
zfs10-pool/subvol-103-disk-0  logicalused           8.26T                          -
zfs10-pool/subvol-103-disk-0  logicalreferenced     7.92T                          -
zfs10-pool/subvol-103-disk-0  volmode               default                        default
zfs10-pool/subvol-103-disk-0  filesystem_limit      none                           default
zfs10-pool/subvol-103-disk-0  snapshot_limit        none                           default
zfs10-pool/subvol-103-disk-0  filesystem_count      none                           default
zfs10-pool/subvol-103-disk-0  snapshot_count        none                           default
zfs10-pool/subvol-103-disk-0  snapdev               hidden                         default
zfs10-pool/subvol-103-disk-0  acltype               posix                          local
zfs10-pool/subvol-103-disk-0  context               none                           default
zfs10-pool/subvol-103-disk-0  fscontext             none                           default
zfs10-pool/subvol-103-disk-0  defcontext            none                           default
zfs10-pool/subvol-103-disk-0  rootcontext           none                           default
zfs10-pool/subvol-103-disk-0  relatime              on                             default
zfs10-pool/subvol-103-disk-0  redundant_metadata    all                            default
zfs10-pool/subvol-103-disk-0  overlay               on                             default
zfs10-pool/subvol-103-disk-0  encryption            off                            default
zfs10-pool/subvol-103-disk-0  keylocation           none                           default
zfs10-pool/subvol-103-disk-0  keyformat             none                           default
zfs10-pool/subvol-103-disk-0  pbkdf2iters           0                              default
zfs10-pool/subvol-103-disk-0  special_small_blocks  0                              default
zfs10-pool/subvol-103-disk-0  snapshots_changed     Thu Apr 25  2:58:21 2024       -
zfs10-pool/subvol-103-disk-0  prefetch              all                            default
 
volsize is used for a "ZVOL" = a block device only.

You have a "dataset" = a directly usable filesystem.

Try zfs set quota= instead. Note that there is also "refquota", with a slightly different meaning. Take a look into the man page or ask a search engine of your choice regarding the difference :cool:
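A quick way to check which kind of object you are dealing with before picking volsize vs quota/refquota (the type property is standard ZFS):
Code:
# "volume" = zvol (VM disk -> volsize), "filesystem" = dataset (container subvol -> quota/refquota)
zfs get type zfs10-pool/subvol-103-disk-0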
 
(quoting the reply above)
That works, sort of ...
Code:
# zfs set quota=10300G zfs10-pool/subvol-103-disk-0
# zfs get quota zfs10-pool/subvol-103-disk-0
NAME                          PROPERTY  VALUE  SOURCE
zfs10-pool/subvol-103-disk-0  quota     10.1T  local
I had to edit 103.conf and change the disk size manually
Code:
rootfs: zfs16Tr10:subvol-103-disk-0,size=10.1T
But pct rescan puts it back to the old size.
So the storage specification stays unchanged, and allocating new disks could potentially become a problem:
[screenshot: 103-disk.png]

So I have to change it to refquota:
Code:
# zfs set refquota=10300G zfs10-pool/subvol-103-disk-0
# zfs get refquota zfs10-pool/subvol-103-disk-0
NAME                          PROPERTY  VALUE  SOURCE
zfs10-pool/subvol-103-disk-0  refquota  10.1T  local
# pct rescan
And the storage definition is fixed.
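To double-check that the container config really picked up the new size after the rescan (pct config just prints the container configuration):
Code:
pct rescan
pct config 103 | grep rootfs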
Documentation says:
Code:
quota=size|none
       Limits the amount of space a dataset and its descendents can consume.  This property enforces a hard limit on the amount of space used.  This includes all space consumed by descendents, including file systems and snapshots.
       Setting a quota on a descendent of a dataset that already has a quota does not override the ancestor's quota, but rather imposes an additional limit.

       Quotas cannot be set on volumes, as the volsize property acts as an implicit quota.

refquota=size|none
       Limits the amount of space a dataset can consume.  This property enforces a hard limit on the amount of space used.  This hard limit does not include space used by descendents, including file systems and snapshots.

It's not clear to me why refquota is needed for this; it seems counter-intuitive. Is there an explanation as to why limiting the dataset and not its descendants is required, and could there be any side effects from my limiting the descendants?
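For what it's worth, comparing what each limit is checked against at least shows which space the two properties count (all standard zfs properties):
Code:
# quota is enforced against "used" (dataset + snapshots + children),
# refquota only against "referenced" (the live data of this dataset)
zfs get used,referenced,usedbysnapshots,quota,refquota zfs10-pool/subvol-103-disk-0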
 
