VM disk shrinking (.raw) using ZFS storage

Trigve

New Member
Oct 15, 2016
Hi,
I've got a problem shrinking a .raw disk stored on ZFS storage.

I've tried the following commands without success:
Code:
:/# qm resize 101 virtio0 10G
unable to skrink disk size
:/# qm resize 101 virtio0 -- -10GB
400 Parameter verification failed.
size: value does not match the regex pattern
qm resize <vmid> <disk> <size> [OPTIONS]
:/# qemu-img resize -f raw /dev/zvol/<removed>/vm-101-disk-2 -10G
Image resized.

The last command reports that it succeeded, but the disk size is still the same in the Proxmox GUI.

Am I doing something wrong?
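If I understand it correctly, with ZFS storage the virtual disk is actually a zvol, so the ZFS-level equivalent would be something like the sketch below (only a guess on my part); the size shown in the GUI seems to come from the VM config, which qemu-img does not update.
Code:
# DANGEROUS sketch: shrink the zvol itself -- everything past the new end is
# lost, so the partitions/filesystems inside the guest must be shrunk first
zfs set volsize=10G <removed>/vm-101-disk-2
# let Proxmox re-read the size into the VM config
qm rescan --vmid 101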

edit:
Code:
:/# pveversion -v
proxmox-ve: 4.3-66 (running kernel: 4.4.19-1-pve)
pve-manager: 4.3-1 (running version: 4.3-1/e7cdc165)
pve-kernel-4.4.19-1-pve: 4.4.19-66
lvm2: 2.02.116-pve3
corosync-pve: 2.4.0-1
libqb0: 1.0-1
pve-cluster: 4.0-46
qemu-server: 4.0-88
pve-firmware: 1.1-9
libpve-common-perl: 4.0-73
libpve-access-control: 4.0-19
libpve-storage-perl: 4.0-61
pve-libspice-server1: 0.12.8-1
vncterm: 1.2-1
pve-qemu-kvm: 2.6.1-6
pve-container: 1.0-75
pve-firewall: 2.0-29
pve-ha-manager: 1.0-35
ksm-control-daemon: 1.2-1
glusterfs-client: 3.5.2-2+deb8u2
lxc-pve: 2.0.4-1
lxcfs: 2.0.3-pve1
criu: 1.6.0-1
novnc-pve: 0.5-8
zfsutils: 0.6.5.7-pve10~bpo80

Thank You
 
Thank you for the reply,
but unfortunately it still shows the original size (32GB).
 
Thanks for the reply LnxBil,
but I don't have enough space for the data duplication (in the real situation I need to shrink a 1 TB disk).

My goal is to clone a 1 TB disk to the VM disk and then shrink the VM disk to roughly 400 GB.
The workaround I found is to use a "Directory" storage on the ZFS partition, create a .raw disk there, clone the whole disk into it, shrink the .raw (which does work) to the minimum (no free space), move the disk to the "ZFS" storage, remove the .raw disk, and then resize the disk (on the "ZFS" storage) to get some free space back; roughly the commands sketched below.
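For reference, that workaround would look roughly like this (VM id, path, storage name and sizes are only examples):
Code:
# shrink the .raw file on the "Directory" storage
qemu-img resize -f raw /pve_data/images/101/vm-101-disk-1.raw 400G
# move the disk back to the ZFS-backed storage
qm move_disk 101 virtio0 <zfs-storage>
# grow it again on the zvol to get some free space back
qm resize 101 virtio0 +50G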
 
ZFS does not save "free" space. Have you enabled thin provisioning? If not, you can get space by setting refreservation to zero. You can also write zeros to the device and ZFS will free the blocks immediately (if they are not referenced by a snapshot).
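For example (the dataset name is just a placeholder):
Code:
# drop the thick-provisioning reservation of the zvol backing the VM disk
zfs set refreservation=none <pool>/vm-101-disk-2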
 
Thanks for the reply, I didn't know about thin provisioning on ZFS. But thinking about it more, does it also save space if the zero blocks are scattered around the disk? One of the partitions is near the end of the disk.

Thank you
 
Zero blocks are not stored, so zeroing free space will yield more free space. Have you checked the *reservation parameters? If one of them is set, the size of the disk is effectively preallocated, i.e. thick provisioned. If you change it to thin (resetting the *reservation values) you will get the "free" space back as free space again.
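Checking them could look like this (the dataset name is a placeholder):
Code:
# show both reservation properties of the zvol backing the VM disk
zfs get reservation,refreservation <pool>/vm-100-disk-1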
 
After a shortage of time to work on my Proxmox server, I'm back.

So I tried ZFS thin provisioning, but it somehow isn't working. Of the 700 GB virtual disk only around 200 GB is used, but the Proxmox storage summary says it is using the whole 700 GB. I've tried zeroing some blocks of the empty space with "dd", but the space still isn't reclaimed. The "reservation" parameter is set to "none":
Code:
NAME      PROPERTY              VALUE                  SOURCE
pve_data  type                  filesystem             -
pve_data  creation              Fri Nov 11 20:21 2016  -
pve_data  used                  722G                   -
pve_data  available             177G                   -
pve_data  referenced            19K                    -
pve_data  compressratio         1.00x                  -
pve_data  mounted               yes                    -
pve_data  quota                 none                   default
pve_data  reservation           none                   default
pve_data  recordsize            128K                   default
pve_data  mountpoint            /pve_data              default
pve_data  sharenfs              off                    default
pve_data  checksum              on                     default
pve_data  compression           off                    default
pve_data  atime                 on                     default
pve_data  devices               on                     default
pve_data  exec                  on                     default
pve_data  setuid                on                     default
pve_data  readonly              off                    default
pve_data  zoned                 off                    default
pve_data  snapdir               hidden                 default
pve_data  aclinherit            restricted             default
pve_data  canmount              on                     default
pve_data  xattr                 on                     default
pve_data  copies                1                      default
pve_data  version               5                      -
pve_data  utf8only              off                    -
pve_data  normalization         none                   -
pve_data  casesensitivity       sensitive              -
pve_data  vscan                 off                    default
pve_data  nbmand                off                    default
pve_data  sharesmb              off                    default
pve_data  refquota              none                   default
pve_data  refreservation        none                   default
pve_data  primarycache          all                    default
pve_data  secondarycache        all                    default
pve_data  usedbysnapshots       0                      -
pve_data  usedbydataset         19K                    -
pve_data  usedbychildren        722G                   -
pve_data  usedbyrefreservation  0                      -
pve_data  logbias               latency                default
pve_data  dedup                 off                    default
pve_data  mlslabel              none                   default
pve_data  sync                  standard               default
pve_data  refcompressratio      1.00x                  -
pve_data  written               19K                    -
pve_data  logicalused           564G                   -
pve_data  logicalreferenced     9.50K                  -
pve_data  filesystem_limit      none                   default
pve_data  snapshot_limit        none                   default
pve_data  filesystem_count      none                   default
pve_data  snapshot_count        none                   default
pve_data  snapdev               hidden                 default
pve_data  acltype               off                    default
pve_data  context               none                   default
pve_data  fscontext             none                   default
pve_data  defcontext            none                   default
pve_data  rootcontext           none                   default
pve_data  relatime              on                     temporary
pve_data  redundant_metadata    all                    default
pve_data  overlay               off                    default

Isn't the problem that I missed some option when creating the zpool/ZFS storage (see the sketch below)?
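I think thin provisioning for a PVE "ZFS" storage is controlled by the sparse flag in /etc/pve/storage.cfg; a guess at what the entry would look like for my pool (and, if I understand it correctly, it only applies to newly created disks, so existing zvols would still need their refreservation cleared):
Code:
zfspool: pve_data
        pool pve_data
        sparse 1
        content images,rootdir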

Thank You
 
Code:
NAME                     USED  AVAIL  REFER  MOUNTPOINT
pve_data                 731G   168G    19K  /pve_data
pve_data/tmp            9.27G   168G  9.27G  /pve_data/tmp
pve_data/vm-100-disk-1   722G   305G   584G  -
pve_data/vm-100-disk-2   384K   168G   137K  -
rpool                   15.1G   210G    96K  /rpool
rpool/ROOT              6.57G   210G    96K  /rpool/ROOT
rpool/ROOT/pve-1        6.57G   210G  6.57G  /
rpool/data                96K   210G    96K  /rpool/data
rpool/swap              8.50G   218G  13.4M  -
 
Code:
pool: pve_data
state: ONLINE
  scan: scrub canceled on Sat Nov 12 13:27:52 2016
config:

        NAME                              STATE     READ WRITE CKSUM
        pve_data                          ONLINE       0     0     0
          mirror-0                        ONLINE       0     0     0
            ata-MB1000GCEEK_WCAW33JYV4PV  ONLINE       0     0     0
            ata-MB1000GCEEK_WCAW33JYVTRP  ONLINE       0     0     0

errors: No known data errors

  pool: rpool
state: ONLINE
  scan: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        rpool       ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            sda2    ONLINE       0     0     0
            sdb2    ONLINE       0     0     0

errors: No known data errors
 
What do you mean by "zeroing some blocks"? I always use this:

Code:
dd if=/dev/zero of=/zero bs=1M; sync; sync; sync; rm -f /zero
 
Please try again and also post lsblk inside your VM 100. Maybe you have more than one partition/lvm volume to perform the zeroing on.
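If there is more than one filesystem, the zeroing has to be repeated on each mount point, roughly like this (the mount points are only examples):
Code:
# zero the free space of every mounted filesystem inside the guest,
# then remove the zero file again
for mnt in / /boot /home; do
    dd if=/dev/zero of="$mnt/zero" bs=1M
    sync
    rm -f "$mnt/zero"
done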
 
So I've tried once more and the result is the same :) Interestingly, if I clone the disk onto the same zpool storage, it works fine and the occupied space is only the part without the zero blocks.
The output of lsblk (I cannot copy text from the VNC console):
[attached screenshot: 1.png]
 
