I have a fast 2TB NVMe SSD and a slow 4TB SATA SSD striped together (RAID0) in one ZFS zpool. I like ZFS (over LVM, which I had before) for its versatility in managing storage (e.g. one pool for everything, including the PVE root). However, I'm running into the issue that the mixed hardware drags performance down. With LVM I could create an lvm-thin pool that only used the fast disk (if created first) and put static data on a second lvm-thin pool spanning the remainder of the fast disk plus the slow disk. It turns out ZFS filled up my fast disk first, and now effectively only the slow disk is left for new writes:
Code:
zpool list -v
NAME                                                  SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
rpool                                                5.44T  3.20T  2.24T        -         -    12%    58%  1.00x  ONLINE  -
  nvme-eui.0025385a11b2xxxx-part3                     1.82T  1.72T  95.0G        -         -    35%  94.9%      -  ONLINE
  ata-Samsung_SSD_860_EVO_4TB_S45JNB0M500432F-part3   3.64T  1.48T  2.14T        -         -     1%  40.8%      -  ONLINE
which degrades my performance:
Code:
fio --name=random-write --ioengine=posixaio --rw=randwrite --bs=4k --numjobs=1 --size=2g --iodepth=1 --runtime=30 --time_based --end_fsync=1
pve host zfs raid0:
WRITE: bw=53.1MiB/s (55.7MB/s), 53.1MiB/s-53.1MiB/s (55.7MB/s-55.7MB/s), io=1605MiB (1683MB), run=30229-30229msec
pve host lvm:
WRITE: bw=220MiB/s (231MB/s), 220MiB/s-220MiB/s (231MB/s-231MB/s), io=6690MiB (7015MB), run=30431-30431msec
fio --name=random-write --ioengine=posixaio --rw=randwrite --bs=64k --size=128m --numjobs=16 --iodepth=16 --runtime=30 --time_based --end_fsync=1
pve host zfs raid0:
WRITE: bw=741MiB/s (777MB/s), 16.8MiB/s-265MiB/s (17.6MB/s-278MB/s), io=22.6GiB (24.3GB), run=31220-31226msec
pve host lvm:
WRITE: bw=2429MiB/s (2547MB/s), 140MiB/s-164MiB/s (147MB/s-172MB/s), io=72.7GiB (78.0GB), run=30127-30641msec
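For reference, the LVM layout I'm comparing against looked roughly like the sketch below. Device names, pool names and sizes are only illustrative (and I'm pinning the PVs explicitly here, whereas before I just relied on allocation order):
Code:
# Illustration only: assumes a VG "pve" spanning both disks, with
# /dev/nvme0n1p3 = the fast NVMe partition and /dev/sda3 = the SATA SSD.
# Fast thin pool for VMs, pinned to the NVMe:
lvcreate -L 500G --thinpool fastpool pve /dev/nvme0n1p3
# Big thin pool for static data on the rest of the NVMe + the SATA SSD:
lvcreate -L 4T --thinpool datapool pve /dev/nvme0n1p3 /dev/sda3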
I see four options, none of which are ideal:
1. Accept this and move on
2. Re-create the setup with 2 zpools, one on the fast disk and one on the slow disk --> can I use a fraction of the fast disk (e.g. 500GB) for running virtual machines and use the remainder for data? I.e. can I create a zpool out of partitions? (See the first sketch below.)
3. Somehow balance writes 2:1 between these asymmetric disks --> this would still mean that 66% of the data is written to the slow disk, so my performance would still be capped by it
4. Manually move the static data off the fast disk to the slow disk (this data is hardly changing) --> is this possible? How? (The second sketch below is the only way I can think of.)
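To illustrate option 2, this is roughly what I have in mind (partition numbers and pool names are made up), i.e. building pools from partitions instead of whole disks:
Code:
# Made-up partition layout: nvme0n1p3 = ~500G for VMs,
# nvme0n1p4 = the rest of the NVMe, sda3 = the SATA SSD.
zpool create -o ashift=12 fastpool /dev/nvme0n1p3
zpool create -o ashift=12 datapool /dev/nvme0n1p4 /dev/sda3
As far as I can tell ZFS doesn't mind being given partitions instead of whole disks, but I'm not sure how sensible this is on a PVE host with the root pool involved.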
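And for option 4, the only mechanism I can think of is rewriting the data so ZFS re-allocates the blocks; since the NVMe vdev is almost full, the rewritten blocks should mostly land on the SATA vdev, although as far as I know I can't force that. A sketch using the example dataset from below (the "-new" name is just a placeholder):
Code:
# Rewrite the dataset within the same pool so its blocks get re-allocated.
zfs snapshot rpool/backups/data-tim@rebalance
zfs send rpool/backups/data-tim@rebalance | zfs recv rpool/backups/data-tim-new
# After verifying the copy, swap the datasets:
zfs destroy -r rpool/backups/data-tim
zfs rename rpool/backups/data-tim-new rpool/backups/data-tim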
Ideally I'd like to give a zfs dataset an affinity for a particular vdev, e.g. with some made-up syntax like
zfs create rpool/backups/data-tim -affinity /dev/sda
such that it has a preferred vdev to write to. Is this possible? Any other suggestions besides the options above? Thanks!