[SOLVED] zfs storage disk size 0 B

Jero

May 20, 2016
Hi guys,

I think I found a small GUI bug.

When I first created my ZFS datastore "wdpool", the layout was:

> wdpool/vm-110-disk-1

I deleted (removed) the storage in the Proxmox GUI and moved (zfs send/received) my VM datasets to:

> wdpool/vm-disks/vm-110-disk-1

and then re-added the storage in the GUI with the pool path wdpool/vm-disks.
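For reference, a move like that is typically done along these lines (dataset names taken from the post above; the exact flags used are not shown in the thread, so this is just a sketch):

```shell
# Snapshot the dataset, then send/receive it under the new parent.
# Note: a plain `zfs send` does NOT transmit locally set properties
# such as quota/refquota; use -p (or -R for recursive replication
# streams) if you want them preserved on the receiving side.
zfs snapshot wdpool/vm-110-disk-1@move
zfs send -p wdpool/vm-110-disk-1@move | \
    zfs receive wdpool/vm-disks/vm-110-disk-1
zfs destroy -r wdpool/vm-110-disk-1
```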

The VMs are running as expected, but the reported size in the GUI is 0 B.

Greetz,

(latest version)
 
Please post the output of "pveversion -v", the output of "zfs list -t all -r -o name,used,available,referenced,quota,refquota,mountpoint wdpool", and your storage configuration ("/etc/pve/storage.cfg").
 
pveversion -v
-----------------
proxmox-ve: 4.3-70 (running kernel: 4.4.21-1-pve)
pve-manager: 4.3-7 (running version: 4.3-7/db02a4de)
pve-kernel-4.4.6-1-pve: 4.4.6-48
pve-kernel-4.4.13-1-pve: 4.4.13-56
pve-kernel-4.4.8-1-pve: 4.4.8-52
pve-kernel-4.4.21-1-pve: 4.4.21-70
pve-kernel-4.4.15-1-pve: 4.4.15-60
pve-kernel-4.4.16-1-pve: 4.4.16-64
pve-kernel-4.4.19-1-pve: 4.4.19-66
pve-kernel-4.4.10-1-pve: 4.4.10-54
lvm2: 2.02.116-pve3
corosync-pve: 2.4.0-1
libqb0: 1.0-1
pve-cluster: 4.0-46
qemu-server: 4.0-92
pve-firmware: 1.1-10
libpve-common-perl: 4.0-76
libpve-access-control: 4.0-19
libpve-storage-perl: 4.0-67
pve-libspice-server1: 0.12.8-1
vncterm: 1.2-1
pve-docs: 4.3-12
pve-qemu-kvm: 2.7.0-4
pve-container: 1.0-78
pve-firewall: 2.0-31
pve-ha-manager: 1.0-35
ksm-control-daemon: 1.2-1
glusterfs-client: 3.5.2-2+deb8u2
lxc-pve: 2.0.5-1
lxcfs: 2.0.4-pve2
criu: 1.6.0-1
novnc-pve: 0.5-8
smartmontools: 6.5+svn4324-1~pve80
zfsutils: 0.6.5.8-pve12~bpo80


zfs list -t all -r -o name,used,available,referenced,quota,refquota,mountpoint wdpool
-----------------------------------------------------------------------------------------------------------
NAME                                USED   AVAIL  REFER  QUOTA  REFQUOTA  MOUNTPOINT
wdpool                              76.1G  373G   1.63G  none   none      /wdpool
wdpool/vm-disks                     74.5G  373G   20K    none   none      /wdpool/vm-disks
wdpool/vm-disks/subvol-115-disk-1   1022M  373G   1022M  none   none      /wdpool/vm-disks/subvol-115-disk-1
wdpool/vm-disks/subvol-117-disk-1   845M   373G   845M   none   none      /wdpool/vm-disks/subvol-117-disk-1
wdpool/vm-disks/subvol-118-disk-1   894M   373G   894M   none   none      /wdpool/vm-disks/subvol-118-disk-1
wdpool/vm-disks/subvol-120-disk-1   1.56G  373G   1.56G  none   none      /wdpool/vm-disks/subvol-120-disk-1
wdpool/vm-disks/subvol-121-disk-1   795M   373G   795M   none   none      /wdpool/vm-disks/subvol-121-disk-1
wdpool/vm-disks/subvol-123-disk-1   1.03G  373G   1.03G  none   none      /wdpool/vm-disks/subvol-123-disk-1
wdpool/vm-disks/vm-100-disk-1       1.80G  373G   1.80G  -      -         -
wdpool/vm-disks/vm-110-disk-1       14.4G  373G   14.4G  -      -         -
wdpool/vm-disks/vm-112-disk-1       37.9G  373G   37.9G  -      -         -
wdpool/vm-disks/vm-122-disk-1       14.2G  373G   14.2G  -      -         -

cat /etc/pve/storage.cfg
------------------------------
zfspool: wdpool
        pool wdpool/vm-disks
        sparse 0
        content rootdir,images
 
You did not send the (ref)quota property, so those subvols don't have a size limit (which is what we use as the "disk size"). You can simply set it again with "zfs set".
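For the container subvols listed above, that would look something like this (the 8G value is only an example; pick whatever size limit each subvol originally had):

```shell
# Restore the size limit that the GUI reports as the "disk size".
# For container subvols, this is the refquota property.
zfs set refquota=8G wdpool/vm-disks/subvol-115-disk-1

# Verify the property took effect:
zfs get refquota wdpool/vm-disks/subvol-115-disk-1
```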
 
Oh! OK, do I need to restart or reimport something? I just did:
"zfs set quota=4G wdpool/vm-disks/subvol-120-disk-1"
but it still shows 0 B in the GUI.

Thanks for the great support! I really love what you guys have made here :)

EDIT: Never mind, it was the refquota ;-)!
Thanks again
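For anyone landing here later: quota limits the dataset together with its snapshots and descendants, while refquota limits only the space the dataset itself references, and the latter is the property read as the disk size here. So the fix looks like:

```shell
# refquota (not quota) is the property used as the disk size here.
zfs set refquota=4G wdpool/vm-disks/subvol-120-disk-1

# Optionally clear the quota that was set by mistake:
zfs set quota=none wdpool/vm-disks/subvol-120-disk-1
```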
 
