[SOLVED] Questions about space usage ZFS

killmasta93

Renowned Member
Aug 13, 2017
Hi,
I was wondering if someone could shed some light on an issue I'm having. I'm currently running Proxmox 6.2.4 with ZFS on a server in RAID 10. One VM has its OS on a normal ext4 disk, but I added a second virtual disk and set up ZFS on it inside the VM for data (a Nextcloud Docker setup).
The issue is that Proxmox reports the disk using more space than it really does. After some reading, I gave the virtual disk a 64K volblocksize on Proxmox instead of 8K, and inside the VM I created the pool with:
Code:
zpool create -f -o ashift=13 data /dev/sdb

but inside the VM it shows only 132M of disk space used:

Code:
data 132M 193G 130M /data

while Proxmox shows 540M used, which is way too much:

Code:
rpool/data/vm-106-disk-3 540M 4.27T 540M -
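(A side note on reading these numbers: on the host, the zvol's `used` is space consumed after compression, while `logicalused` is what the guest actually wrote. A quick integer-arithmetic sanity check, using the values from the `zfs get` output further down in this post, shows the host's figures are self-consistent:)

```shell
# used should roughly equal logicalused / compressratio.
# logicalused = 2.45G ≈ 2508 MiB; compressratio = 4.68x, scaled by 100
# to stay in shell integer arithmetic.
logical_mib=2508
ratio_x100=468
used_mib=$(( logical_mib * 100 / ratio_x100 ))
echo "${used_mib} MiB"   # close to the 538M that the host reports as "used"
```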

This is the info for the Proxmox virtual disk:
Code:
NAME                      PROPERTY              VALUE                  SOURCE
rpool/data/vm-106-disk-3  type                  volume                 -
rpool/data/vm-106-disk-3  creation              Mon Mar 13 17:10 2023  -
rpool/data/vm-106-disk-3  used                  538M                   -
rpool/data/vm-106-disk-3  available             4.27T                  -
rpool/data/vm-106-disk-3  referenced            538M                   -
rpool/data/vm-106-disk-3  compressratio         4.68x                  -
rpool/data/vm-106-disk-3  reservation           none                   default
rpool/data/vm-106-disk-3  volsize               200G                   local
rpool/data/vm-106-disk-3  volblocksize          64K                    -
rpool/data/vm-106-disk-3  checksum              on                     default
rpool/data/vm-106-disk-3  compression           on                     inherited from rpool
rpool/data/vm-106-disk-3  readonly              off                    default
rpool/data/vm-106-disk-3  createtxg             14384795               -
rpool/data/vm-106-disk-3  copies                1                      default
rpool/data/vm-106-disk-3  refreservation        none                   default
rpool/data/vm-106-disk-3  guid                  7529856589884445464    -
rpool/data/vm-106-disk-3  primarycache          all                    default
rpool/data/vm-106-disk-3  secondarycache        all                    default
rpool/data/vm-106-disk-3  usedbysnapshots       0B                     -
rpool/data/vm-106-disk-3  usedbydataset         538M                   -
rpool/data/vm-106-disk-3  usedbychildren        0B                     -
rpool/data/vm-106-disk-3  usedbyrefreservation  0B                     -
rpool/data/vm-106-disk-3  logbias               latency                default
rpool/data/vm-106-disk-3  objsetid              108393                 -
rpool/data/vm-106-disk-3  dedup                 off                    default
rpool/data/vm-106-disk-3  mlslabel              none                   default
rpool/data/vm-106-disk-3  sync                  disabled               inherited from rpool
rpool/data/vm-106-disk-3  refcompressratio      4.68x                  -
rpool/data/vm-106-disk-3  written               538M                   -
rpool/data/vm-106-disk-3  logicalused           2.45G                  -
rpool/data/vm-106-disk-3  logicalreferenced     2.45G                  -
rpool/data/vm-106-disk-3  volmode               default                default
rpool/data/vm-106-disk-3  snapshot_limit        none                   default
rpool/data/vm-106-disk-3  snapshot_count        none                   default
rpool/data/vm-106-disk-3  snapdev               hidden                 default
rpool/data/vm-106-disk-3  context               none                   default
rpool/data/vm-106-disk-3  fscontext             none                   default
rpool/data/vm-106-disk-3  defcontext            none                   default
rpool/data/vm-106-disk-3  rootcontext           none                   default
rpool/data/vm-106-disk-3  redundant_metadata    all                    default
rpool/data/vm-106-disk-3  encryption            off                    default
rpool/data/vm-106-disk-3  keylocation           none                   default
rpool/data/vm-106-disk-3  keyformat             none                   default
rpool/data/vm-106-disk-3  pbkdf2iters           0                      default


And the info inside the VM:
Code:
data  type                  filesystem             -
data  creation              Mon Mar 13 17:31 2023  -
data  used                  132M                   -
data  available             193G                   -
data  referenced            130M                   -
data  compressratio         1.06x                  -
data  mounted               yes                    -
data  quota                 none                   default
data  reservation           none                   default
data  recordsize            128K                   default
data  mountpoint            /data                  default
data  sharenfs              off                    default
data  checksum              on                     default
data  compression           on                     local
data  atime                 on                     default
data  devices               on                     default
data  exec                  on                     default
data  setuid                on                     default
data  readonly              off                    default
data  zoned                 off                    default
data  snapdir               hidden                 default
data  aclmode               discard                default
data  aclinherit            passthrough            local
data  createtxg             1                      -
data  canmount              on                     default
data  xattr                 sa                     local
data  copies                1                      default
data  version               5                      -
data  utf8only              off                    -
data  normalization         none                   -
data  casesensitivity       sensitive              -
data  vscan                 off                    default
data  nbmand                off                    default
data  sharesmb              off                    default
data  refquota              none                   default
data  refreservation        none                   default
data  guid                  6915864653822050311    -
data  primarycache          all                    default
data  secondarycache        all                    default
data  usedbysnapshots       0B                     -
data  usedbydataset         130M                   -
data  usedbychildren        2.34M                  -
data  usedbyrefreservation  0B                     -
data  logbias               latency                default
data  objsetid              54                     -
data  dedup                 off                    default
data  mlslabel              none                   default
data  sync                  standard               default
data  dnodesize             legacy                 default
data  refcompressratio      1.06x                  -
data  written               130M                   -
data  logicalused           137M                   -
data  logicalreferenced     136M                   -
data  volmode               default                default
data  filesystem_limit      none                   default
data  snapshot_limit        none                   default
data  filesystem_count      none                   default
data  snapshot_count        none                   default
data  snapdev               hidden                 default
data  acltype               posix                  local
data  context               none                   default
data  fscontext             none                   default
data  defcontext            none                   default
data  rootcontext           none                   default
data  relatime              off                    default
data  redundant_metadata    all                    default
data  overlay               on                     default
data  encryption            off                    default
data  keylocation           none                   default
data  keyformat             none                   default
data  pbkdf2iters           0                      default
data  special_small_blocks  0                      default


Thank you
 
ZFS on top of ZFS is a bad idea: ZFS has massive overhead, and when nested that overhead multiplies rather than just adding up. Do you really need ZFS inside the VM? The host's ZFS already gives you bit-rot protection, block-level compression, redundancy, and deduplication, so ext4/xfs on top of ZFS would be totally fine.
"Currently running proxmox 6.2.4 using ZFS on a server with raid 10"
You should also consider upgrading. Both Debian 10 and PVE 6 are end-of-life, so you are running a server that is vulnerable because it hasn't received security patches since last year.
How many disks does that raid10 consist of? You only need a 64K volblocksize with ashift=13 and 16 or more disks.
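If ext4 inside the guest is acceptable, formatting the second virtual disk directly would look roughly like this (a sketch only; the `/dev/sdb` device and `data` label come from this thread and may differ on other setups):

```shell
# Format the guest's data disk with ext4 instead of nesting ZFS;
# the host's ZFS underneath still provides checksumming and compression.
mkfs.ext4 -L data /dev/sdb
mkdir -p /data
# "discard" lets blocks freed in the guest propagate back to the host zvol.
echo 'LABEL=data /data ext4 defaults,discard 0 2' >> /etc/fstab
mount /data
```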
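For reference, the block size used for newly created zvols is set per storage in `/etc/pve/storage.cfg` via the `blocksize` option of the zfspool backend; a hedged example (the storage ID and 16k value are illustrative, not taken from this thread):

```
zfspool: local-zfs
        pool rpool/data
        blocksize 16k
        content images,rootdir
        sparse 1
```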
 
Hi there, thanks for the reply. I checked, and it turns out I did not have the discard option enabled; after enabling it, Proxmox reports the correct usage.
Currently using 8 disks.
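Roughly what that change amounts to (the VM ID and disk name are from this thread; the `scsi1` slot and the in-guest autotrim step are assumptions, not confirmed details):

```shell
# On the PVE host: enable discard on the virtual disk so TRIMs issued by
# the guest actually free blocks on the backing zvol.
qm set 106 --scsi1 local-zfs:vm-106-disk-3,discard=on

# Inside the guest: have ZFS issue TRIMs automatically, plus a one-off pass
# to reclaim space that was already freed.
zpool set autotrim=on data
zpool trim data
```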
 
