Hello,
I'm facing a similar problem (a big one, given the data size).
I have a two-node PVE 5.3 cluster with 6x12TB disks per node in a ZFS raidz2 pool, using storage replication.
I installed everything following the wiki suggestions (so I agree the wiki is missing some information) and moved about 20TB of VMs from an old server on an LVM iSCSI SAN. Now the used space has doubled and I'm at the limit (performance is also slow, but that is probably a consequence).
My current status is:
Code:
# zpool status
  pool: zfspool1
 state: ONLINE
  scan: scrub repaired 0B in 48h7m with 0 errors on Mon Apr 15 09:42:34 2019
config:

        NAME        STATE     READ WRITE CKSUM
        zfspool1    ONLINE       0     0     0
          raidz2-0  ONLINE       0     0     0
            sdc     ONLINE       0     0     0
            sdd     ONLINE       0     0     0
            sde     ONLINE       0     0     0
            sdf     ONLINE       0     0     0
            sdg     ONLINE       0     0     0
            sdh     ONLINE       0     0     0
        logs
          sdb1      ONLINE       0     0     0
        cache
          sdb2      ONLINE       0     0     0

errors: No known data errors
root@nodo1-ced:~# zfs list -o name,avail,used,refer,lused,lrefer,mountpoint,compress,compressratio
NAME                     AVAIL   USED  REFER  LUSED  LREFER  MOUNTPOINT  COMPRESS  RATIO
zfspool1                 11.7T  29.6T   192K  13.0T     40K  /zfspool1        lz4  1.00x
zfspool1/test16k         11.7T  5.08G   112K    26K     26K  -                lz4  1.00x
zfspool1/vm-100-disk-0   11.7T   195G   194G   101G    100G  -                lz4  1.03x
zfspool1/vm-100-disk-1   11.7T  5.98T  5.98T  2.99T   2.99T  -                lz4  1.00x
zfspool1/vm-100-disk-2   11.7T  8.43T  8.43T  4.24T   4.24T  -                lz4  1.00x
zfspool1/vm-100-disk-3   11.7T  7.33T  7.33T  3.67T   3.67T  -                lz4  1.00x
zfspool1/vm-101-disk-0   11.8T   285G   181G  91.2G   91.2G  -                lz4  1.00x
zfspool1/vm-101-disk-1   11.8T  61.9G   128K    34K     34K  -                lz4  1.00x
zfspool1/vm-108-disk-0   11.9T   423G   268G   139G    139G  -                lz4  1.03x
zfspool1/vm-108-disk-1   12.8T  1.56T   541G   277G    277G  -                lz4  1.02x
zfspool1/vm-109-disk-0   11.7T  34.5G  34.5G  19.1G   19.1G  -                lz4  1.10x
zfspool1/vm-110-disk-0   11.9T   383G   228G   119G    119G  -                lz4  1.04x
zfspool1/vm-110-disk-1   13.8T  4.70T  2.63T  1.32T   1.32T  -                lz4  1.00x
zfspool1/vm-112-disk-0   11.9T   221G  14.7G  7.61G   7.61G  -                lz4  1.03x
After a lot of digging, I have understood that the problem is the blocksize: as the output above shows, the zvols take roughly twice their logical size (USED vs LUSED), which I now believe is raidz2 parity/padding overhead from a too-small volblocksize.
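For example, this is how I am checking the current settings (a minimal sketch; the pool and dataset names are the ones from the output above, and 8k is the PVE default volblocksize I expect to find):
Code:
# volblocksize and actual vs logical usage of one of the big zvols
zfs get volblocksize,compression,used,logicalused zfspool1/vm-100-disk-1

# sector size the pool was created with
zpool get ashift zfspool1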
Questions:
1) Most of the VMs are Windows Server. Is it better to use a 4k block size, as suggested by Proxmox support, or, since these are block volumes (zvols), could a 32k blocksize like in your last example be better?
2) Can I move VMs between nodes, so users can keep working while I rebuild the disks one by one? If this is possible, could you kindly give me the sequence of steps to follow? (I have put a rough sketch of what I have in mind, for this and for the blocksize change, right after these questions.)
3) I also have a QNAP NAS with plenty of space for backups, but I am worried about how long the whole job would take and about data safety. Probably because of the wrong blocksize, performance was poor and I needed more than a week to move all the data, even though I am on a 10Gbit LAN.
4) Aside from the problem I am facing now, and I do not know if it is related, one disk of a VM shows a wrong size: instead of 4TB it is displayed in bytes, and the value differs from the original size. Have you had a similar problem with ZFS?
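This is the rough sequence I have in mind for 1) and 2); please correct me if any step is wrong. The 32k value, the VMID 100 and the node name nodo2-ced are only examples, and I am not sure the migration options are right for replicated local disks on 5.3:
Code:
# change the block size used for newly created zvols on this storage
# (the "blocksize" option of the zfspool storage in storage.cfg)
pvesm set zfspool1 --blocksize 32k

# move a VM to the other node so users can keep working
# (online with local disks, or offline if 5.3 refuses this for replicated volumes)
qm migrate 100 nodo2-ced --online --with-local-disks

# then recreate its disks on this node so they pick up the new volblocksize
# (e.g. with a backup/restore cycle, as in the sketch further down)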
The migration from the old server was done with vzdump backups restored on the new servers.
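Roughly, the commands were like this (a sketch: VMID 100 and the backup storage name "qnap-backup" are examples, and the archive name is whatever vzdump actually produced):
Code:
# back up the VM to the NAS storage
vzdump 100 --storage qnap-backup --mode snapshot --compress lzo

# restore it on the new node, letting the zvols be created on the ZFS storage
qmrestore /mnt/pve/qnap-backup/dump/vzdump-qemu-100-<timestamp>.vma.lzo 100 --storage zfspool1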
Thank you in advance
Francesco