I've been using an HCI cluster in my home lab built from really small, low-power devices, mostly because they've become potent enough to host the various 24x7 services I've been accumulating.
I've used Mini-ITX Atoms and NUCs and am currently trying to transition an HCI cluster made from three Intel NUCs from RHV/oVirt to Proxmox.
These NUCs come with 4-6 Intel "P" cores, 64GB RAM, 1/2.5+10G Ethernet, and [only] one big, fast NVMe module for all storage, which is quite enough capacity for what I need.
For functional and failure testing of HCI clusters, I prefer to use a big system running VMware Workstation with nested virtualization, for quicker turnaround and recovery.
After setting up three (VMware) VMs with nearly identical specs (4 cores, 32GB RAM, 1 NVMe, 1 10G NIC, UEFI boot), the first thing I noticed was that Ceph installation and configuration in the GUI only works with full drives for OSD creation. Even if you leave space on your primary devices (I let the installer use 10% for a local ZFS), there is no facility to create a partition for Ceph use, nor will the GUI (always) accept an existing partition for OSD use.
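To see what is actually left unallocated after the install, something like this works (a minimal sketch; I'm assuming the single NVMe shows up as /dev/nvme0n1, so adjust the device name to your system):
Code:
# show existing partitions and free gaps, in percent of the device
parted /dev/nvme0n1 unit % print free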
Using parted and
Code:
pveceph osd create /dev/sd[X]
works well enough, even if I was slightly afraid it might trash the NVMe (and was therefore glad it was virtual and recoverable via a snapshot).
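For anyone wanting the full sequence, this is roughly what I mean (a sketch under my assumptions, not gospel: I'm assuming the NVMe is /dev/nvme0n1, that the OS/ZFS occupies the first 10%, and that the new partition comes up as nvme0n1p4; your names will differ):
Code:
# give the unused ~90% of the device to a new GPT partition for Ceph
parted --script /dev/nvme0n1 mkpart ceph-osd 10% 100%
partprobe /dev/nvme0n1
# create the OSD on the partition; in my tests pveceph took the
# partition directly, but if it refuses, ceph-volume is the fallback:
pveceph osd create /dev/nvme0n1p4
# ceph-volume lvm create --data /dev/nvme0n1p4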
But having only a few giant NVMe storage devices rather than many small disks seems to be an increasingly common scenario these days, so I wonder if this shouldn't be something the GUI supports by default?
Or did I just miss a way of doing it via the GUI?