Hi all,
Been playing with Proxmox installs via PXE booting, and setting up new hardware: a Dell C6100, with each node having a single SSD plus 4 SAS spinning-rust drives.
It's a budget setup with standard dual-port 1Gb Ethernet cards. I'm curious what the best setup is here, given that I can't hope for much performance from Ceph over that network, and it's advised to give Ceph raw disks.
Should I install to ZFS pools on the spinning rust and later configure Ceph to use the SSDs? Ideally I'd like to have two Ceph pools, fast and slow, but maybe the network bottleneck means it will never be better than slow anyway.
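For the fast/slow idea, what I'm picturing is device-class based CRUSH rules, roughly like the below. This is just a sketch from the docs, not something I've run on this hardware, and the pool names and PG counts are placeholders:

    # tag-based rules: Ceph normally autodetects ssd vs hdd device classes
    ceph osd crush rule create-replicated fast-rule default host ssd
    ceph osd crush rule create-replicated slow-rule default host hdd

    # one pool pinned to each rule (PG counts are just guesses)
    ceph osd pool create fast 64 64 replicated fast-rule
    ceph osd pool create slow 128 128 replicated slow-rule

That way the fast pool should only land on SSD OSDs and the slow pool on the SAS disks, assuming the OSDs get classed correctly.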
Or would it be more sensible to use the SSDs for the Proxmox install and have Ceph only provide a glacial backup pool?
Maybe there are choices I haven't considered, such as partitioning the SSDs and splitting them between ZFS and Ceph, since it probably takes quite a lot to saturate the SSD bandwidth. What kind of problems would this cause?
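For that split-SSD idea, the rough plan would be to leave part of the SSD free at install time and hand the leftover partition to Ceph, something like this (device names, partition numbers and sizes are made up, and I haven't actually tried mixing ZFS and an OSD on one device):

    # hypothetical layout: first chunk of the SSD used by the Proxmox/ZFS install,
    # the remaining free space turned into a new partition for Ceph
    sgdisk --new=4:0:0 --typecode=4:8300 /dev/sda

    # create an OSD on the leftover partition instead of a whole disk
    ceph-volume lvm create --data /dev/sda4

I gather losing that one SSD would then take out both the local ZFS and an OSD at once, which is part of what I'm unsure about.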
The future load will be Kubernetes clusters running web applications with clustered databases, message queues, etc. Priority is reliability and redundancy.
Appreciate any advice here