zfs storage woes

cheeki-breeki

New Member
Oct 20, 2023
Ran out of storage, so first: how do I reset an io-error on a VM caused by this?

I cleared some space, so now I've got:
Node -> Disks -> ZFS shows a 48TB pool, 1.52TB free
zpool list shows it at 43.7TB, 1.39TB free
zfs list shows about 28TB used, 800GB free

So where is the missing 20TB of storage?
I'm unfortunately a ZFS noob, so I only know what I needed to set up the pool.
Is it because of overhead?
Will switching from directory/dataset to zvol solve it?
Should I be using LVM/LVM-thin for disk images?

zfs list shows 3TB used directly at the pool root, but all data ought to be in the zvols, so what is this usage?
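For anyone following along, these are the commands I understand should break the numbers down further (the pool name "tank" is a stand-in for mine):

# raw per-vdev allocation, including parity and padding
zpool list -v tank
# per-dataset breakdown: snapshots, refreservations, child datasets
zfs list -o space -r tank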
 
Ran out of storage, so first: how do I reset an io-error on a VM caused by this?
Restore from backup and re-run the action that failed. How can we know what the VM was doing at the time?

So where is the missing 20TB of storage?
It's probably padding (and maybe a little ZFS metadata overhead) due to the number of drives being a poor match for the block size. That's assuming you used RAIDz1 (or RAIDz2 or RAIDz3), which is also a bad choice for VMs. This is not Proxmox-specific; RAIDz1 block padding has been discussed on this forum several times before.
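As a rough sketch of the mechanism (assuming ashift=12, i.e. 4K sectors, 6 disks in RAIDz2, and the old 8K default volblocksize; your pool may differ):

# one 8K block = 2 data sectors + 2 parity sectors = 4 sectors,
# but RAIDz2 allocations are padded up to a multiple of (parity + 1) = 3,
# so 4 sectors become 6: every 8K of data occupies 24K of raw disk.
# with volblocksize=16K it's 4 data + 2 parity = 6 sectors, already a
# multiple of 3, so no padding, only the expected parity overhead.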
 
Restore from backup and re-run the action that failed. How can we know what the VM was doing at the time?
It was transferring files.
Is there no way to resume the frozen VMs?

It's probably padding (and maybe a little ZFS metadata overhead) due to the number of drives being a poor match for the block size. That's assuming you used RAIDz1 (or RAIDz2 or RAIDz3), which is also a bad choice for VMs. This is not Proxmox-specific; RAIDz1 block padding has been discussed on this forum several times before.
It's for cold storage.
50% overhead, just from block size?
Could you link the posts?

Will switching from directory/dataset to zvol solve it?
Should I be using LVM/LVM-thin for disk images?
zfs list shows 3TB used directly at the pool root, but all data ought to be in the zvols, so what is this usage?
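To make the directory-vs-zvol question concrete, the two setups would look roughly like this in /etc/pve/storage.cfg (storage names here are placeholders):

# VM disks stored as raw/qcow2 files on a dataset mounted as a directory
dir: tank-dir
        path /tank/images
        content images

# VM disks created as zvols, managed by Proxmox itself
zfspool: tank-zfs
        pool tank
        content images,rootdir
        sparse 1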
 
Is there no way to resume the frozen VMs?
I did not realize your VMs were still paused. Does the Resume button not work? I have no experience with this, sorry.
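A blind guess at the CLI route, in case the button just isn't being shown (replace 100 with your VMID):

# a VM paused with status "io-error" can often be resumed once space is freed
qm status 100
qm resume 100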
50% overhead, just from block size?
It has happened before, but feel free not to believe a random stranger on the internet.
Could you link the posts?
Maybe do some research into ZFS, as this is not specific to Proxmox.
 
I did not realize your VMs were still paused. Does the Resume button not work? I have no experience with this, sorry.
I'm not seeing Resume anywhere; the VMs are still shown as running...
It has happened before, but feel free not to believe a random stranger on the internet.

Maybe do some research into ZFS, as this is not specific to Proxmox.
love "google it" replies.
Is it the same problem with a VM disk image on a zvol?
Will switching from directory/dataset to zvol solve it?
Should I be using LVM/LVM-thin for disk images?

zfs list shows 3TB used directly at the pool root, but all data ought to be in the zvols, so what is this usage?
I'm not seeing volblocksize, only recordsize.
How do I test performance?
Would it be better to pass through a zvol?
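For the performance question, fio gets suggested elsewhere; something like this is what I'd try (the path and sizes are made up):

# random 16K writes against a test file on the dataset
fio --name=ztest --filename=/tank/fio.test --size=4G \
    --rw=randwrite --bs=16k --ioengine=libaio \
    --iodepth=16 --runtime=60 --time_based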
 
Bump, in the hope that somebody can tell me why I'm losing 20% of what ought to be usable to padding (NOT parity).

lvs -a, vgs -a, and zfs get volblocksize return no output.

Which volblocksize should I use when I mainly have VM disks sized in TBs?
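From what I can piece together, volblocksize only exists on volumes (which would explain the empty output, and lvs/vgs are empty simply because there's no LVM here), and on the Proxmox side it's set per storage (storage name is a placeholder, and it only affects newly created disks):

# list volblocksize for zvols only; plain datasets only have recordsize
zfs get -t volume volblocksize
# set 16k for future zvols on a zfspool storage (existing ones keep theirs)
pvesm set tank-zfs --blocksize 16k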
 
That's strange. Can you please show us information about your problematic pool, like its configuration and how many drives, or simply the output of zpool status ${YOUR_POOL_NAME}?
Nothing interesting in zpool status, just 6x 8TB drives, so it should be 32TB usable.
zpool list shows a size of 43.7TB, 1.38TB free
zfs list shows 28.2TB used (3TB directly under the pool root?), ~800GB free
One zvol set to 20TB is shown at 22TB in the Proxmox GUI...
Which volblocksize should I use when I mainly have VM disks sized in TBs?
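For the 20TB zvol showing as 22TB, I'm guessing the difference hides in one of these properties (the dataset name is made up):

zfs get volsize,used,referenced,refreservation,usedbyrefreservation tank/vm-100-disk-0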
 
That's not how ZFS works.
6x8 - 2x8 = 32, sure.
Then comes padding.
It depends how you define usable.
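For what it's worth, the thread's own numbers already line up once units and parity are accounted for (rough arithmetic, assuming zpool/zfs report TiB while the drives are sold in decimal TB):

# raw:      6 x 8 TB = 48 TB  ~= 43.7 TiB   <- matches zpool list
# parity:   2 of 6 disks      ~= -14.6 TiB
# usable:   ~29.1 TiB  <- matches zfs list: 28.2T used + ~0.8T free
# expected: 32 TB (6x8 - 2x8) ~= 29.1 TiB, i.e. exactly what zfs list shows;
# most of the "missing 20TB" is 48 (decimal, GUI) vs 28 (binary, after parity)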
The output of zpool status ${YOUR_POOL_NAME} would help other people help you as well. But I could be wrong, of course. Best of luck finding the issue.
What kind of information are you looking for?
There are no errors, and it's just a list of the 6 disks and their UUIDs?
Which volblocksize should I use when I mainly have VM disks sized in TBs?
 
What kind of information are you looking for?
The actual output of the command (in CODE tags for readability). But I'm done with this begging for information and doing the forum searching for you. What have you learned or read about RAIDz1 in the meantime? I do hope someone else can help you.
 