Hi all,
I have some questions about the storage model and what the best practices are for dealing with this. I hope this thread can become a reference for others.
So, what I have:
8 x 2TB disks and an NVMe drive (one slice for the OS, and one slice as cache).
I configured a zpool:
Code:
pool: tank
state: ONLINE
scan: none requested
config:
        NAME         STATE     READ WRITE CKSUM
        tank         ONLINE       0     0     0
          raidz2-0   ONLINE       0     0     0
            sda      ONLINE       0     0     0
            sdb      ONLINE       0     0     0
            sdc      ONLINE       0     0     0
            sdd      ONLINE       0     0     0
            sde      ONLINE       0     0     0
            sdf      ONLINE       0     0     0
            sdg      ONLINE       0     0     0
            sdh      ONLINE       0     0     0
        cache
          nvme0n1p4  ONLINE       0     0     0
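For completeness, this is roughly how I created the pool (reconstructed from memory, so take the exact flags as an approximation; device names as above):
Code:
# create the raidz2 pool with 4K sectors (ashift=12)
zpool create -o ashift=12 tank raidz2 sda sdb sdc sdd sde sdf sdg sdh
# add the NVMe partition as L2ARC (read cache)
zpool add tank cache nvme0n1p4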
So, with raidz2 I lose two disks' worth of capacity to parity. Fine. That should leave me 6 x 2 TB = 12 TB of raw data capacity.
ashift = 12.
And yes, that is roughly what zfs list reports:
Code:
root@pm13:~# zfs list
NAME USED AVAIL REFER MOUNTPOINT
tank 9.33T 675G 222K /tank
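Back-of-the-envelope, that matches (assuming 2 TB per disk in decimal units, while zfs list reports binary TiB):
Code:
# 8 disks - 2 parity  =  6 disks' worth of data capacity
# 6 * 2 * 10^12 bytes = 12 * 10^12 bytes ~= 10.9 TiB
# zfs list shows USED + AVAIL = 9.33T + 675G ~= 10T, which is in that
# ballpark (the raidz figures in zfs list are post-parity estimates,
# so they don't line up exactly)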
But... my disk space is being used up more quickly than I anticipated.
I have these VMs, using these disks:
Code:
root@pm13:~# pvesm list tank
tank:vm-100-disk-1 raw 17179869184 100
tank:vm-101-disk-1 raw 17179869184 101
tank:vm-101-disk-2 raw 1030792151040 101
tank:vm-101-disk-3 raw 1030792151040 101
tank:vm-101-disk-4 raw 1030792151040 101
tank:vm-102-disk-1 raw 17179869184 102
tank:vm-103-disk-1 raw 8589934592 103
tank:vm-104-disk-1 raw 17179869184 104
tank:vm-105-disk-1 raw 17179869184 105
tank:vm-106-disk-1 raw 17179869184 106
tank:vm-107-disk-1 raw 34359738368 107
tank:vm-107-disk-2 raw 2199023255552 107
tank:vm-110-disk-1 raw 214748364800 110
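Summing the allocated sizes as a quick sanity check (assuming the third column of that output is the size in bytes):
Code:
root@pm13:~# pvesm list tank | awk '$2 == "raw" {s += $3} END {printf "%.2f TiB\n", s / 2^40}'
For the list above this comes out to roughly 5.1 TiB of allocated virtual disk space.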
However, when I do
Code:
root@pm13:~# zfs list
NAME USED AVAIL REFER MOUNTPOINT
backup 2.41T 4.61T 2.41T /backup
tank 9.33T 675G 222K /tank
tank/vm-100-disk-1 27.6G 675G 27.6G -
tank/vm-101-disk-1 4.05G 675G 4.05G -
tank/vm-101-disk-2 1.99T 675G 1.99T -
tank/vm-101-disk-3 1.98T 675G 1.98T -
tank/vm-101-disk-4 1.35T 675G 1.35T -
tank/vm-102-disk-1 15.7G 675G 15.7G -
tank/vm-103-disk-1 3.34G 675G 3.34G -
tank/vm-104-disk-1 34.7G 675G 34.7G -
tank/vm-105-disk-1 2.44G 675G 2.44G -
tank/vm-106-disk-1 13.3G 675G 13.3G -
tank/vm-107-disk-1 50.7G 675G 50.7G -
tank/vm-107-disk-2 3.55T 675G 3.55T -
tank/vm-110-disk-1 310G 675G 310G -
I see that several volumes use much more space than the sizes I allocated in storage.
My questions:
1) How come? Why are the volumes bigger than what I allocated? When I add up the space I allocated, it comes to around 5 TB, yet almost all of my space is used up (just short of 10T).
I read some threads stating this has something to do with the parity blocks ZFS stores, but that seems odd to me, since parity is why I lose the two disks of capacity in the first place, right? (See the command sketch below the questions for the extra per-volume detail I can pull.)
2) Could it have something to do with the filesystems used in the guests? VM 101 is an OpenMediaVault server with 3 data disks (virtual disks stored on the ZFS pool), which are managed via LVM. Same story for VM 107, except there it is a single disk managed with LVM inside the guest. Both have an ext4 filesystem on top. Could this be the culprit?
3) I know it isn't 'best practice' to run a storage server on virtualized volumes, but I have no separate storage server available, so the storage server (for file sharing) really has to be virtualized too. Any recommendations on how to manage / use storage in this situation?
4) Could it be the guest filesystem here as well? VM 110 is a Windows 10 VM with an allocated disk of 200GB, yet 310GB is reported as used. Or does NTFS make the same assumptions about the underlying block device as LVM / ext4 do?
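In case it helps with 1), 2) and 4), this is the kind of detail I can pull and post. vm-110-disk-1 is just an example volume, and the guest-side commands would run inside VM 101; the property names are standard ZFS ones, so I hope I'm reading them right:
Code:
# on the Proxmox host: per-zvol sizing (virtual size, block size,
# reservation, and logical vs. physical space consumed)
root@pm13:~# zfs get volsize,volblocksize,refreservation,used,logicalused,compressratio tank/vm-110-disk-1

# inside the OpenMediaVault guest: what the guest itself thinks it uses
lvs
df -h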
If anyone could shed some light on this, I'd be very grateful!
If any further information about my setup would be helpful, please ask.
thanks in advance!