Hi all,
I'm getting up and running on a "new" server to replace an array of MiniPCs and SBCs running the various services and products I use in my quest to self-host.
At this stage I have the hardware and am just testing different ways of setting it up to see what I get and how I can use it, getting used to Proxmox and teaching myself some LXC tricks etc., before doing the final setup and moving everything onto the one machine.
I'm wondering if there is a way to direct the ZFS pool layout during the install beyond just "zfs (RAID10/Mirror/RAID-Zx)" and selecting the total number of disks?
The drives I have for my boot disk and VM storage are 6x500GB Kingston A2000s, installed in pairs on PCIe > NVMe cards, using bifurcation on my Dell's x8 slots (so there is no controller nonsense to get in ZFS's way).
Originally I was going to install these in a RAID-Z2, so if a PCIe card failed, the system would stay running in a degraded state, allowing me to resolve the issue without losing access to whatever I'm running on it.
As I have studied and experimented more, however, I've learnt that a RAID-Z vdev's IOPS tend to be held to roughly a single drive's performance, which is fine for something like media storage, but less than ideal for VMs and databases.
In the endless compromise between capacity, redundancy and performance, I'm pretty sure RAID10 would be the sweet spot with what I have, and to ensure the right redundancy and remove the PCIe cards as single points of failure, I'd need to be selective about which SSD mirrors which other SSD. Is there a way to do that in the installer? I had thought perhaps in debug mode it might drop to a shell at the disk setup step, or have an extra button under the advanced options, but didn't see anything.
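For reference, the topology I'm after would look something like this if built by hand. Just a sketch of what I mean, not something I've run; the device names are made up, and in practice I'd use /dev/disk/by-id paths:

```
# Hypothetical RAID10 layout across three PCIe cards (A, B, C),
# two SSDs each. Every mirror pairs drives from *different* cards,
# so losing one card only degrades two mirrors rather than
# destroying any single mirror outright.
zpool create rpool \
    mirror nvme-cardA-disk1 nvme-cardB-disk1 \
    mirror nvme-cardB-disk2 nvme-cardC-disk1 \
    mirror nvme-cardC-disk2 nvme-cardA-disk2
```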
If it's not an option in the installer: if I partition the disks up right (a 1 MB partition flagged as BIOS boot, a 512 MB ESP partition, and one large partition for the rest of each drive), then set up an empty ZFS pool called 'rpool' across those partitions in the topology I'm after, will the Proxmox installer see the empty pool and offer to use it?
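Roughly what I'd do per drive if that approach can work (again untested, sizes from my plan above, with /dev/nvme0n1 etc. as stand-ins for the six drives):

```
# Wipe and recreate the partition table on one drive
sgdisk -Z /dev/nvme0n1
sgdisk -n1:1M:+1M  -t1:EF02 /dev/nvme0n1   # BIOS boot, for legacy GRUB
sgdisk -n2:0:+512M -t2:EF00 /dev/nvme0n1   # ESP
sgdisk -n3:0:0     -t3:BF01 /dev/nvme0n1   # rest of the drive for ZFS

# ...repeat for the other five, then build the pool over the third
# partitions, pairing drives so each mirror spans two different cards
# (which nvme device sits on which card would need checking first):
zpool create -o ashift=12 rpool \
    mirror /dev/nvme0n1p3 /dev/nvme2n1p3 \
    mirror /dev/nvme1n1p3 /dev/nvme4n1p3 \
    mirror /dev/nvme3n1p3 /dev/nvme5n1p3
```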
And yes, I know I'm lacking enterprise SSDs, for anyone worried. For what I'm doing, the capacities I need, and the budget I have, these drives' price point plus doing lots of regular backups is the way to go for now. Hopefully by the time I'm looking for more capacity I'll be able to find some enterprise drives second hand at a price point I can afford (or even better, I'll have a higher income by then!)
Thanks to anyone who has the time to chat! Any critique or advice on drive layout, or pool/dataset options I should be using, is also welcome!
I'll be running email, contacts syncing, an InfluxDB (v1) instance, Home Assistant, Plex, and I'll probably add a federated social network node at some point, though I've not really thought about that beyond "hey, I should do that sometime".