A little clarification

r0PVEMox

Member
Jun 21, 2020
Not sure if this is the right place, but I'm currently rethinking my Proxmox cluster and just need a little clarification. This is because I recently read a bit about Ceph, and also because I discovered the ThinkPad Tiny, since this model can take a low-profile PCIe NIC.

Setup:
2 x EliteDesk Mini (256 GB consumer SSD for Proxmox, 500 GB Samsung 970 Evo Plus for VMs)
1 x ThinkPad Tiny (consumer SSD for Proxmox, 960 GB Samsung PM883)

What I would like to know is:
I've been happily running a ZFS cluster, but I haven't really tested HA; I have live-migrated VMs a couple of times, though. In general, would this be a good setup if I separate the regular and corosync networks? The ThinkPad will arrive soon, and I have an Intel i350 quad-port NIC ready for it. The EliteDesks only have one Ethernet port, but I'm planning to buy a USB Ethernet adapter for this purpose. PS: I'm running Proxmox Backup Server on one node; it backs up to ('local') USB storage and to the NAS.

Ceph of course looks great, but that's the next level. I wonder whether a ZFS cluster with 2 x Ethernet per node is a good setup, or whether there is anything else that makes more sense.
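
In case it helps, here is a rough sketch of the network split I have in mind (all addresses and the cluster name are just placeholders: 10.10.10.x would be the dedicated corosync network on the extra/USB NICs, 192.168.1.x the regular LAN). For an existing cluster the extra link can be added by editing /etc/pve/corosync.conf instead, but built from scratch it would look roughly like this:

    # on the first node: create the cluster with two corosync links,
    # link0 on the dedicated NIC, link1 on the regular LAN as fallback
    pvecm create homelab --link0 10.10.10.1 --link1 192.168.1.1

    # on each additional node: join via an existing member and pass
    # that node's own link addresses
    pvecm add 10.10.10.1 --link0 10.10.10.2 --link1 192.168.1.2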
 
The hardware recommendation for Ceph is multiple NICs, at least one of them 10+ Gbit, and multiple OSDs (disks) per node.
So it might work, but it won't be great with such low-end hardware.

See:
https://pve.proxmox.com/wiki/Deploy_Hyper-Converged_Ceph_Cluster#_precondition
Thanks for your input. Ceph is not what I would set up right now. Multiple OSDs with the 'tiny' nodes are not really doable, unless Proxmox could be installed on a USB disk; then two OSDs per node would be available. I just wonder whether ZFS in a cluster makes the most sense with my setup (I like the small footprint). At the moment I only have one Ethernet port available on the EliteDesks; I'm planning to add one more for corosync.

If anything other than a ZFS cluster makes more sense with my current setup, I'm open to it. I have a NAS and USB disks for backups, the local disks are for Proxmox + VMs, and general storage is in the cloud.
 
A 10G NIC will thermal throttle in an SFF case.

With 1G you are limited to roughly 80-90 MB/s sequential, and random read/write performance will be very poor.
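
Just to put numbers on that:

    1 Gbit/s = 1000 Mbit/s ÷ 8 = 125 MB/s theoretical line rate,
    minus Ethernet/TCP overhead → roughly 110-118 MB/s in practice,
    and noticeably less again once sync writes and replication overhead come into play.

So 1G is the bottleneck long before any of those SSDs are.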

Ceph is very complex; I would not recommend setting it up unless you want to play around with it and learn it.

It's really meant for enterprise hardware with enterprise NVMe and 25/50/100G NICs; it's about latency, not bandwidth.

I would go with ZFS replication in your case; you can still use the secondary NIC as the replication/migration network.
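
For what it's worth, a minimal sketch of what that could look like (the node name pve2, VM ID 100 and the 10.10.10.0/24 subnet are placeholders, adjust to your setup):

    # /etc/pve/datacenter.cfg - send migration traffic over the secondary NICs
    migration: secure,network=10.10.10.0/24

    # replicate VM 100 to node pve2 every 15 minutes (job id 100-0)
    pvesr create-local-job 100-0 pve2 --schedule "*/15"

    # check replication state
    pvesr status

As far as I know the storage replication jobs also use the migration network when one is configured, but double-check that in the storage replication docs.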