Hi Everyone
I'm new to PVE and have been fiddling around for a few weeks. I come from an ESXi background, and I suppose there is a lot more to consider in designing a cluster given the options around local, network, and distributed storage. My main reason for moving to PVE is that I'm interested in running ZFS mirrors to get disk redundancy in case of failures without needing to get into complex backup/recovery/HA. Bear in mind this is a home environment, not a production environment.
My cluster consists of 8 heterogeneous boxes, all consumer grade.
Box 1 is the main box - it will run virtualised TrueNAS using a SAS card with PCI passthrough, as well as a media server:
- 5900X w/ 64GB ECC
- 10 spinning disks (passed through; data pool, XFS)
- 2 NVMe (PVE install, ZFS mirror)
- 4 SATA SSDs (passed through; scratch pool, ZFS)
- 1GbE NIC
- Wi-Fi NIC for management failover
The remaining 7 servers each contain Intel CPUs of various generations, 2 NVMe/SSD drives with 1TB of storage each, and 1GbE NICs. These will run a bunch of lab environments (nested ESXi/Hyper-V/OpenStack/Nutanix) and some Docker/K8s, again for learning/testing/demos.
A few questions, if someone could please help me understand:
- I keep reading never to run consumer-grade SSDs with ZFS due to performance and poor lifespan. Taking performance off the table, should I be avoiding ZFS mirrors for the PVE local disks? (There's a wear-check example after this list.)
- What about the scratch pool, which is used as a:
  - transcoding/downloading cache
  - VM disk share
- If sharing TrueNAS ZFS datasets over NFS for VM disks, should those disks use raw or qcow2? I have read that qcow2 on ZFS is a bad idea due to COW-on-COW; does that change when NFS is in the mix? Using raw on NFS seems to rule out snapshots regardless of the backend. (See the NFS sketch after this list.)
- Can PVE be installed onto a USB stick, leaving the local drives as a ZFS mirror?
  - A USB stick failure shouldn't be any more complex than reinstalling PVE onto a new stick and importing the ZFS mirror (rough import sketch after this list).
- Is Ceph worth thinking about in such an environment, rather than using local ZFS disks?
  - I've read that Ceph on ZFS should be avoided.
  - I'm not sure how 1GbE links will hold up running a distributed storage system, although I have run Nutanix on 1GbE switches in small 3-node environments (there's an iperf3 check after this list).
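For context on the consumer-SSD question, here's roughly how I've been planning to keep an eye on wear (smartmontools; the device names are just examples from my box):

```
# NVMe wear indicators (requires smartmontools)
smartctl -A /dev/nvme0 | grep -iE 'percentage used|data units written'

# SATA SSD wear attributes vary by vendor, so dump them all
smartctl -A /dev/sda
```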
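On the NFS question, my tentative plan (server address, export path, and storage ID are all placeholders) would be to add the TrueNAS export as PVE storage and use qcow2 so snapshots keep working:

```
# Add the TrueNAS NFS export as a PVE storage (placeholder server/paths)
pvesm add nfs truenas-vms --path /mnt/pve/truenas-vms \
    --server 192.168.1.10 --export /mnt/tank/vmstore --content images

# qcow2 disks on that storage should still snapshot, e.g. for VM 101
qm snapshot 101 before-upgrade
```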
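For the USB-stick idea, my understanding of the recovery path (assuming the mirror is a plain ZFS pool; 'tank' is an example name) is just:

```
# After reinstalling PVE on a fresh stick, list pools visible on the local disks
zpool import

# Import the old mirror; -f is needed if it wasn't exported cleanly
zpool import -f tank

# Re-register it as storage in PVE ('local-tank' is a placeholder ID)
pvesm add zfspool local-tank --pool tank --content images,rootdir
```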
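And on the Ceph point, before committing I'd probably just measure what the 1GbE links actually deliver between nodes (iperf3; the hostname is a placeholder):

```
# On one node, run a server
iperf3 -s

# From another node, test throughput to it for 30 seconds
iperf3 -c pve-node2 -t 30
```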
Thanks in advance!!