[SOLVED] Very strange zfs size after restore from backup

VGusev2007

Renowned Member
Dear all, I have some VMs like this one:
  • Source srv: pve 4.1.22
  • Dest srv: pve 5.1.36
vm config disk:
virtio0: exchange:3031/vm-3031-disk-1.qcow2,size=45G

Real size:
root@pve-03:~# qemu-img info /mnt/pve/exchange/images/3031/vm-3031-disk-1.qcow2
image: /mnt/pve/exchange/images/3031/vm-3031-disk-1.qcow2
file format: qcow2
virtual size: 45G (48318382080 bytes)
disk size: 45G

root@pve-03:~# du -sh /mnt/pve/exchange/images/3031/vm-3031-disk-1.qcow2
45G /mnt/pve/exchange/images/3031/vm-3031-disk-1.qcow2

root@pve-03:~# du -sh --apparent-size /mnt/pve/exchange/images/3031/vm-3031-disk-1.qcow2
45G /mnt/pve/exchange/images/3031/vm-3031-disk-1.qcow2

After restoring the VM on the dest srv from backup, I see the following:


vm config disk:
virtio0: local-zfs:vm-20000-disk-1,size=45G

zfs-data:
root@dve-01:~# zfs list -o name,used,refer,volsize,volblocksize,written -r tank
NAME                  USED   REFER  VOLSIZE  VOLBLOCK  WRITTEN
tank                  577G   96K    -        -         0
tank/vm-20000-disk-1  71.4G  25.0G  45G      8K        0
I have 71.4G used, but the real data size is 25.0G. What is wrong? I have a sparse and compressed zfs pool.
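A space breakdown for the zvol can be pulled with zfs get (a sketch, using the dataset name from the output above; it shows how much of USED comes from the data itself, snapshots, refreservation or children):

zfs get used,usedbydataset,usedbysnapshots,usedbyrefreservation,usedbychildren,volsize,volblocksize,compressratio tank/vm-20000-disk-1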

This happens with all my VMs when I move them to the new Proxmox.


Is it the same problem as this one:

https://forum.proxmox.com/threads/zfs-pool-not-showing-correct-usage.31111/ ?

My zpool is:

zfs raid 10, ashift=12, compression=on, volblocksize=8k, sparse=on
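For reference, the sparse and volblocksize defaults that restored zvols get come from the zfspool storage definition on the Proxmox side; a typical /etc/pve/storage.cfg entry looks roughly like this (values are illustrative, not taken from this system):

zfspool: local-zfs
        pool tank
        content images,rootdir
        sparse 1
        blocksize 8k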
 
tank/vm-20000-disk-1 71.4G 25.0G 45G 8K 0

71.4G - disk usage including snapshots.
25.0G - size of the data actually stored.
45G - the provisioned "partition" (zvol) size.
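Snapshots can be ruled out by listing them for that zvol (a sketch, dataset name taken from the question):

zfs list -t snapshot -r tank/vm-20000-disk-1

"no datasets available" here means snapshots are not what is using the space.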
Thanks a lot for your answer! I don't have snapshots. Thank you for the explanation! I suspect I see this because I use an 8k volblocksize on the ZFS "raid10"? I want to convert my zvols to 128k and test the performance again.

Are all your disk drives 4k sector size? With ashift=12 on 512-byte sector drives, data uses more space than with ashift=9.
Yes, I understand that. Thank you.

I will run a new test of my Windows + MSSQL VM and compare the performance. My Windows uses a 4k sector size. I don't know what the best settings for it are.
 
I'm not talking about the VM OS filesystem. I'm talking about the ZFS pool sector size (ashift). You cannot change it after the pool is created. It impacts data size and I/O performance.

If you want to change the zvol volblocksize, do it like this (see the sketch after this list):
1. Create a new zvol.
2. Clone the data with dd ( dd if=/dev/zvol/pool/zvol1 of=/dev/zvol/pool/zvol2 ).
3. Rename the new zvol or edit the VM config file.
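A minimal sketch of those three steps (zvol names, the 45G size and the 128k volblocksize are only examples; power the VM off first):

# 1. create the new zvol, sparse, with the desired volblocksize
zfs create -s -V 45G -o volblocksize=128k tank/vm-20000-disk-2
# 2. copy the data block for block
dd if=/dev/zvol/tank/vm-20000-disk-1 of=/dev/zvol/tank/vm-20000-disk-2 bs=1M
# 3. swap the names (or edit the VM config to point at the new zvol)
zfs rename tank/vm-20000-disk-1 tank/vm-20000-disk-1_old
zfs rename tank/vm-20000-disk-2 tank/vm-20000-disk-1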
 
Oh yeah, I have ashift=12 with 512-byte (native) sector size on my HDDs.

I have this one drive: HGST_HUS722T2TALA604

cat /sys/block/sda/queue/physical_block_size
512

I will recreate the zpool with ashift=9 and test it again. Thank you!
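The sector sizes and the ashift actually in use can be checked like this (a sketch; device names are examples, pool name taken from above):

lsblk -o NAME,PHY-SEC,LOG-SEC    # physical/logical sector size per disk
zdb | grep ashift                # ashift recorded per vdev (from the pool cache)

# recreating the pool destroys its data; ashift is set at creation time, e.g.:
zpool create -o ashift=9 tank mirror sda sdb mirror sdc sdd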


Can you explain the different sizes of my VM disk depending on the block size?

tank/vm-20001-disk-2_old1  58.3G with 8k volblocksize
tank/vm-20001-disk-2       18.0G with 128k volblocksize

I'm shocked...


Thank you!

He-he... I just googled it: compression works just fine, but you get a lot of overhead from padding with such a small volblocksize.
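The difference can be seen by comparing the logical and allocated sizes of both zvols (a sketch, dataset names taken from the post above):

zfs get volblocksize,used,logicalused,compressratio tank/vm-20001-disk-2_old1 tank/vm-20001-disk-2

# logicalused is the data before compression; comparing it with used shows whether
# compression savings or allocation/padding overhead dominates for each volblocksize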
 
