Hi there,
So in building my home lab I've been generally pleased with the performance of Windows and Linux VMs, despite using consumer-grade NVMe and SATA SSDs.
Most of my hosts contain a single SATA SSD and a single NVMe drive of the same size. The plan was to set these up as a mirror, knowing the SATA SSD would drag the NVMe's performance back a fair amount.
I have noticed, however, that with both ESXi and Nutanix nested on PVE the performance is abysmal: neither can load its GUI properly, and both seem to take a very long time to initialise. Moving from ZFS to LVM-thin (ext4 root), the issues disappear and both load within reasonable timeframes. My thinking is that layering VMFS or ADSF on top of ZFS on top of consumer SSDs is just amplifying writes chronically.
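(For context, this is roughly what I've been poking at before giving up on ZFS for the nested boxes. Pool name, storage ID and VMID below are just examples, not my actual config, and sync=disabled is obviously a lab-only experiment.)

```
# Example names only: "rpool", "local-zfs" and VMID 100 are placeholders.
# Check the volblocksize PVE used for the nested hypervisor's zvol;
# the small default is a poor match for what VMFS/ADSF writes on top of it.
zfs get volblocksize rpool/data/vm-100-disk-0

# Use a larger block size for newly created disks on this ZFS storage.
pvesm set local-zfs --blocksize 64k

# Lab-only: trade safety for speed to see how much sync writes are hurting.
zfs set sync=disabled rpool/data
```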
I'm using these in a lab context to have a play with IaC and automation across hypervisors.
So this leaves me with a dilemma. I love the native replication PVE offers with ZFS, and migration between local ZFS stores is lightning fast compared to LVM, which seems to need to sync the entire disk across. Under regular loads ZFS also seems to have the edge on speed. However, it looks like running these nested hypervisors will need a change of plans.
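(This is the replication feature I'd hate to lose; the job ID and node name below are just examples.)

```
# Example only: "100-0" is a replication job ID and "pve2" a target node name.
# Replicate VM 100's ZFS disks to another node every 15 minutes; after the
# first full send only incremental snapshots go over the wire, which is why
# migrations between local ZFS stores are so quick.
pvesr create-local-job 100-0 pve2 --schedule "*/15"
pvesr status
```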
Option 1: a single LVM-thin disk for the nested hypervisors and a single ZFS disk for everything else. This provides no real redundancy, and only the workloads living on ZFS would be capable of replication and HA.
Option 2: Run MDADM + LVM for redundancy (see the sketch after the options) and forget about HA completely. I'd need to rely on backup/restore.
Option 3: Use a USB SSD with LVM-thin for the nested hypervisors (relying on backups for protection) and run a mirrored ZFS root for everything else.
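(For Option 2, this is roughly the layout I have in mind; device, VG and pool names are placeholders, not a finished design.)

```
# Placeholder device/VG names - adjust for the actual disks.
# Mirror the SATA SSD and NVMe with mdadm, then put an LVM thin pool on top.
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda /dev/nvme0n1
pvcreate /dev/md0
vgcreate vg_lab /dev/md0
lvcreate -l 90%FREE -T vg_lab/labthin

# Register it with PVE as LVM-thin storage for VM disks and containers.
pvesm add lvmthin lab-thin --vgname vg_lab --thinpool labthin --content images,rootdir
```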
I don't really need HA on the nested hypervisors since they are purely sandpits, but HA would be useful for the other workloads running on the cluster (TrueNAS, several Docker hosts, a cloud management portal).
Anyone care to comment on those three options and help me rationalise them?
Thanks.