ZFS using too much disk space

sahostking

Renowned Member
Hi

I restored multiple VMs, but one VM is giving me issues after restoring from an LVM backup to a ZFS pool.


The VM is 800GB on LVM, but when restored on the ZFS server it shows as using far more than that:

root@vz-jhb-4:~# zfs list
NAME USED AVAIL REFER MOUNTPOINT
rpool 1.20T 571G 192K /rpool
rpool/ROOT 1.19G 571G 192K /rpool/ROOT
rpool/ROOT/pve-1 1.19G 571G 1.19G /
rpool/data 1.19T 571G 192K /rpool/data
rpool/data/vm-104-disk-1 156G 571G 156G -
rpool/data/vm-104-disk-2 1014G 571G 1014G -
rpool/data/vm-104-disk-3 45.3G 571G 45.3G -
rpool/swap 8.50G 580G 6.09M -

What could be the reason?
This is the only server with the issue; all other servers seem fine.
 
And here is the backup restore log; it shows 800GB restored, not over 1TB:

DEV: dev_id=1 size: 107374182400 devname: drive-virtio0
DEV: dev_id=2 size: 644245094400 devname: drive-virtio1
DEV: dev_id=3 size: 107374182400 devname: drive-virtio2
CTIME: Sat Jun 25 00:00:03 2016
new volume ID is 'local-zfs:vm-104-disk-1'
map 'drive-virtio0' to '/dev/zvol/rpool/data/vm-104-disk-1' (write zeros = 0)
new volume ID is 'local-zfs:vm-104-disk-2'
map 'drive-virtio1' to '/dev/zvol/rpool/data/vm-104-disk-2' (write zeros = 0)
new volume ID is 'local-zfs:vm-104-disk-3'
map 'drive-virtio2' to '/dev/zvol/rpool/data/vm-104-disk-3' (write zeros = 0)
progress 1% (read 8589934592 bytes, duration 27 sec)
progress 2% (read 17179869184 bytes, duration 51 sec)
progress 3% (read 25769803776 bytes, duration 108 sec)
progress 4% (read 34359738368 bytes, duration 181 sec)
progress 5% (read 42949672960 bytes, duration 241 sec)
progress 6% (read 51539607552 bytes, duration 293 sec)
progress 7% (read 60129542144 bytes, duration 342 sec)
progress 8% (read 68719476736 bytes, duration 402 sec)
progress 9% (read 77309411328 bytes, duration 466 sec)
progress 10% (read 85899345920 bytes, duration 531 sec)
progress 11% (read 94489280512 bytes, duration 629 sec)
progress 12% (read 103079215104 bytes, duration 751 sec)
progress 13% (read 111669149696 bytes, duration 842 sec)
progress 14% (read 120259084288 bytes, duration 901 sec)
progress 15% (read 128849018880 bytes, duration 959 sec)
progress 16% (read 137438953472 bytes, duration 1007 sec)
progress 17% (read 146028888064 bytes, duration 1075 sec)
progress 18% (read 154618822656 bytes, duration 1123 sec)
progress 19% (read 163208757248 bytes, duration 1170 sec)
progress 20% (read 171798691840 bytes, duration 1222 sec)
progress 21% (read 180388626432 bytes, duration 1263 sec)
progress 22% (read 188978561024 bytes, duration 1308 sec)
progress 23% (read 197568495616 bytes, duration 1365 sec)
progress 24% (read 206158430208 bytes, duration 1430 sec)
progress 25% (read 214748364800 bytes, duration 1492 sec)
progress 26% (read 223338299392 bytes, duration 1538 sec)
progress 27% (read 231928233984 bytes, duration 1597 sec)
progress 28% (read 240518168576 bytes, duration 1671 sec)
progress 29% (read 249108103168 bytes, duration 1724 sec)
progress 30% (read 257698037760 bytes, duration 1776 sec)
progress 31% (read 266287972352 bytes, duration 1859 sec)
progress 32% (read 274877906944 bytes, duration 1923 sec)
progress 33% (read 283467841536 bytes, duration 1992 sec)
progress 34% (read 292057776128 bytes, duration 2039 sec)
progress 35% (read 300647710720 bytes, duration 2097 sec)
progress 36% (read 309237645312 bytes, duration 2167 sec)
progress 37% (read 317827579904 bytes, duration 2222 sec)
progress 38% (read 326417514496 bytes, duration 2285 sec)
progress 39% (read 335007449088 bytes, duration 2368 sec)
progress 40% (read 343597383680 bytes, duration 2430 sec)
progress 41% (read 352187318272 bytes, duration 2497 sec)
progress 42% (read 360777252864 bytes, duration 2567 sec)
progress 43% (read 369367187456 bytes, duration 2631 sec)
progress 44% (read 377957122048 bytes, duration 2692 sec)
progress 45% (read 386547056640 bytes, duration 2758 sec)
progress 46% (read 395136991232 bytes, duration 2812 sec)
progress 47% (read 403726925824 bytes, duration 2871 sec)
progress 48% (read 412316860416 bytes, duration 2976 sec)
progress 49% (read 420906795008 bytes, duration 3027 sec)
progress 50% (read 429496729600 bytes, duration 3119 sec)
progress 51% (read 438086664192 bytes, duration 3223 sec)
progress 52% (read 446676598784 bytes, duration 3352 sec)
progress 53% (read 455266533376 bytes, duration 3470 sec)
progress 54% (read 463856467968 bytes, duration 3549 sec)
progress 55% (read 472446402560 bytes, duration 3649 sec)
progress 56% (read 481036337152 bytes, duration 3744 sec)
progress 57% (read 489626271744 bytes, duration 3815 sec)
progress 58% (read 498216206336 bytes, duration 3882 sec)
progress 59% (read 506806140928 bytes, duration 3929 sec)
progress 60% (read 515396075520 bytes, duration 3986 sec)
progress 61% (read 523986010112 bytes, duration 4082 sec)
progress 62% (read 532575944704 bytes, duration 4149 sec)
progress 63% (read 541165879296 bytes, duration 4202 sec)
progress 64% (read 549755813888 bytes, duration 4258 sec)
progress 65% (read 558345748480 bytes, duration 4331 sec)
progress 66% (read 566935683072 bytes, duration 4418 sec)
progress 67% (read 575525617664 bytes, duration 4524 sec)
progress 68% (read 584115552256 bytes, duration 4595 sec)
progress 69% (read 592705486848 bytes, duration 4654 sec)
progress 70% (read 601295421440 bytes, duration 4713 sec)
progress 71% (read 609885356032 bytes, duration 4782 sec)
progress 72% (read 618475290624 bytes, duration 4837 sec)
progress 73% (read 627065225216 bytes, duration 4908 sec)
progress 74% (read 635655159808 bytes, duration 4971 sec)
progress 75% (read 644245094400 bytes, duration 5023 sec)
progress 76% (read 652835028992 bytes, duration 5078 sec)
progress 77% (read 661424963584 bytes, duration 5139 sec)
progress 78% (read 670014898176 bytes, duration 5201 sec)
progress 79% (read 678604832768 bytes, duration 5269 sec)
progress 80% (read 687194767360 bytes, duration 5329 sec)
progress 81% (read 695784701952 bytes, duration 5378 sec)
progress 82% (read 704374636544 bytes, duration 5447 sec)
progress 83% (read 712964571136 bytes, duration 5508 sec)
progress 84% (read 721554505728 bytes, duration 5570 sec)
progress 85% (read 730144440320 bytes, duration 5637 sec)
progress 86% (read 738734374912 bytes, duration 5691 sec)
progress 87% (read 747324309504 bytes, duration 5740 sec)
progress 88% (read 755914244096 bytes, duration 5792 sec)
progress 89% (read 764504178688 bytes, duration 5810 sec)
progress 90% (read 773094113280 bytes, duration 5818 sec)
progress 91% (read 781684047872 bytes, duration 5827 sec)
progress 92% (read 790273982464 bytes, duration 5836 sec)
progress 93% (read 798863917056 bytes, duration 5844 sec)
progress 94% (read 807453851648 bytes, duration 5853 sec)
progress 95% (read 816043786240 bytes, duration 5863 sec)
progress 96% (read 824633720832 bytes, duration 5870 sec)
progress 97% (read 833223655424 bytes, duration 5885 sec)
progress 98% (read 841813590016 bytes, duration 5888 sec)
progress 99% (read 850403524608 bytes, duration 5901 sec)
progress 100% (read 858993459200 bytes, duration 5916 sec)
total bytes read 858993459200, sparse bytes 88827543552 (10.3%)
space reduction due to 4K zero blocks 1.1%
TASK OK
 
I have had a similar experience, but I do not know exactly what is going on there.

Please post (inside a CODE tag) the output of:

Code:
zfs get all rpool/data/vm-1004-disk-2
 
Code:
NAME  PROPERTY  VALUE  SOURCE
rpool/data/vm-105-disk-2  type  volume  -
rpool/data/vm-105-disk-2  creation  Wed Jun 29  6:42 2016  -
rpool/data/vm-105-disk-2  used  1013G  -
rpool/data/vm-105-disk-2  available  574G  -
rpool/data/vm-105-disk-2  referenced  1013G  -
rpool/data/vm-105-disk-2  compressratio  1.18x  -
rpool/data/vm-105-disk-2  reservation  none  default
rpool/data/vm-105-disk-2  volsize  600G  local
rpool/data/vm-105-disk-2  volblocksize  8K  -
rpool/data/vm-105-disk-2  checksum  on  default
rpool/data/vm-105-disk-2  compression  lz4  inherited from rpool
rpool/data/vm-105-disk-2  readonly  off  default
rpool/data/vm-105-disk-2  copies  1  default
rpool/data/vm-105-disk-2  refreservation  none  default
rpool/data/vm-105-disk-2  primarycache  all  default
rpool/data/vm-105-disk-2  secondarycache  all  default
rpool/data/vm-105-disk-2  usedbysnapshots  0  -
rpool/data/vm-105-disk-2  usedbydataset  1013G  -
rpool/data/vm-105-disk-2  usedbychildren  0  -
rpool/data/vm-105-disk-2  usedbyrefreservation  0  -
rpool/data/vm-105-disk-2  logbias  latency  default
rpool/data/vm-105-disk-2  dedup  off  default
rpool/data/vm-105-disk-2  mlslabel  none  default
rpool/data/vm-105-disk-2  sync  disabled  inherited from rpool
rpool/data/vm-105-disk-2  refcompressratio  1.18x  -
rpool/data/vm-105-disk-2  written  1013G  -
rpool/data/vm-105-disk-2  logicalused  593G  -
rpool/data/vm-105-disk-2  logicalreferenced  593G  -
rpool/data/vm-105-disk-2  snapshot_limit  none  default
rpool/data/vm-105-disk-2  snapshot_count  none  default
rpool/data/vm-105-disk-2  snapdev  hidden  default
rpool/data/vm-105-disk-2  context  none  default
rpool/data/vm-105-disk-2  fscontext  none  default
rpool/data/vm-105-disk-2  defcontext  none  default
rpool/data/vm-105-disk-2  rootcontext  none  default
rpool/data/vm-105-disk-2  redundant_metadata  all  default
 
I think the reason is a lot of unmapped space that has not been properly released on the zvol. Try attaching the disk using the SCSI disk interface with the virtio-scsi controller. After that, run fstrim inside the VM, which should release all the space the guest has marked as free but that has not yet been freed on the zvol.
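For example (a rough sketch only, assuming VM ID 104, the local-zfs storage name, and that the disk is re-attached as scsi0; adjust the IDs and volume names to your setup, and detach the disk from its virtio slot first):

Code:
# On the Proxmox host: switch to the virtio-scsi controller and attach the disk with discard enabled
qm set 104 --scsihw virtio-scsi-pci
qm set 104 --scsi0 local-zfs:vm-104-disk-2,discard=on

# Inside the guest, once it has booted with the new controller, release the free space:
fstrim -av

This only helps if the guest filesystem and driver stack support discard/TRIM, so treat it as a sketch rather than a guaranteed fix.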
 
Thanks

Does ZFS not just take space for every VM? Also, when I set it up with the Proxmox ISO I used 6 x 500GB disks and it gave me a 1.74TB rpool, so I assume it took the other ~260GB for maintenance or some other reason?

Also, whenever I add a VM of, say, 50GB, I notice it takes 60GB or more? Very strange.

I did notice this, though, so maybe it is just incorrect reporting?

root@vz-jhb-3:~# zpool get free
NAME PROPERTY VALUE SOURCE
rpool free 816G -


which differs from the 481G that zfs list reports as available:

root@vz-jhb-3:~# zfs list
NAME USED AVAIL REFER MOUNTPOINT
rpool 1.29T 481G 192K /rpool
rpool/ROOT 1.15G 481G 192K /rpool/ROOT
rpool/ROOT/pve-1 1.15G 481G 1.15G /
rpool/data 1.28T 481G 192K /rpool/data
rpool/data/vm-100-disk-1 84.0G 481G 84.0G -
rpool/data/vm-100-disk-2 31.3G 481G 31.3G -
rpool/data/vm-102-disk-1 178G 481G 178G -
rpool/data/vm-102-disk-2 93.8G 481G 93.8G -
rpool/data/vm-102-disk-3 586G 481G 586G -
rpool/data/vm-102-disk-4 12.3G 481G 12.3G -
rpool/data/vm-104-disk-1 137G 481G 137G -
rpool/data/vm-104-disk-2 116G 481G 116G -
rpool/data/vm-111-disk-1 13.6G 481G 13.6G -
rpool/data/vm-112-disk-1 58.8G 481G 58.8G -
rpool/swap 8.50G 486G 3.57G -
 
Thanks

Does ZFS not just take space for every VM? Also, when I set it up with the Proxmox ISO I used 6 x 500GB disks and it gave me a 1.74TB rpool, so I assume it took the other ~260GB for maintenance or some other reason?

This could be due to GB vs. GiB, or to RAIDZ metadata/parity overhead.
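If you want to check where the space goes (a hedged sanity check, assuming decimal GB disk sizes and a RAIDZ layout; substitute your actual pool layout and dataset names):

Code:
# Units: 6 x 500 GB = 3,000,000,000,000 bytes, which is about 2.73 TiB, and ZFS reports
# binary units, so the pool already looks "smaller" before any redundancy overhead.
# Likewise, the restore log's 858993459200 bytes is exactly 800 GiB (about 859 GB).

# zpool counts raw space across all disks (parity included), while zfs list shows usable
# space after redundancy, which is why "zpool get free" and the zfs list AVAIL column differ:
zpool list -v rpool
zfs list -o space rpool

# Per-zvol view: comparing used with logicalused shows how much space is parity/padding
# overhead rather than data actually written by the guest:
zfs get volsize,volblocksize,used,logicalused,refreservation,compressratio rpool/data/vm-104-disk-2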
 
