Proxmox = 6.4-8
CEPH = 15.2.13
Nodes = 3
Network = 2x100G / node
Disk = NVMe Samsung PM-1733 MZWLJ3T8HBLS 4TB
NVMe Samsung PM-1733 MZWLJ1T9HBJR 2TB
CPU = EPYC 7252
CEPH pools = 2 separate pools, one per disk type, with each disk split into 2 OSDs
Replica = 3
VMs don't do many...
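Splitting each NVMe drive into two OSDs, as described in the spec above, is commonly done with `ceph-volume`'s batch mode. A minimal sketch; the device paths are placeholders, not from the original post:

```shell
# Example only: device names are placeholders.
# Create two OSDs per NVMe device so each drive hosts two OSD daemons,
# which can improve per-device parallelism on fast NVMe.
ceph-volume lvm batch --osds-per-device 2 /dev/nvme0n1
ceph-volume lvm batch --osds-per-device 2 /dev/nvme1n1
```

With two pools (one per disk type), the OSDs from each drive model would then be mapped to their own CRUSH device class or rule.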
We have a 5-host hyperconverged Proxmox 7.1 cluster with CEPH as VM storage (5 SSD OSDs per host). My understanding is that CEPH I/O depends heavily on available CPU power.
Would it make sense to prioritize the OSD processes over any VM process, so that (near-)full CPU power stays available for CEPH even under...
Hi guys!
I have a dilemma with medium-to-large clusters of between 5 and 15 nodes. With Ceph replica 3 and the default min_size of 2, a two-node failure interrupts service, and a two-node failure in a 15-node cluster is not unlikely at all.
How dangerous do you think it is to...
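The trade-off above comes down to simple arithmetic: a PG keeps serving I/O as long as at least min_size copies survive, so it tolerates (size - min_size) simultaneous replica losses before writes pause. A small sketch (the pool name in the comments is a placeholder):

```shell
# With replica size 3 and min_size 2, a PG tolerates (size - min_size)
# simultaneous failures before I/O pauses.
size=3
min_size=2
echo "failures tolerated before I/O pauses: $((size - min_size))"
# The current settings of a pool can be checked with:
#   ceph osd pool get <pool> size
#   ceph osd pool get <pool> min_size
```

Lowering min_size to 1 keeps I/O running through a second failure, but it accepts writes with a single surviving copy, which is why it is generally considered dangerous.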
Hi community,
I have created a hyperconverged Proxmox cloud cluster as an experimental project.
features:
using 1blu.de compute nodes: 1 Euro / month
using 1blu.de storage node, 1 TB: 9 Euro / month
using LAN based on Vodafone Cable Internet IPV6 DS-LITE (VF NetBox)
the Proxmox cluster uses...
An idea has been percolating in my mind for some time now... My PVE hyperconverged CEPH cluster does not perform how I would like it to, and I am considering non-hyperconverged cluster options for future capacity growth. We have two "hadoop"-style high-density SuperMicro HDD nodes and are purchasing a third...