Hello,
We have a relatively large hyperconverged cluster: 17 servers, each with 8 NVMe OSDs. The Proxmox version is 7.4.17, the kernel is 5.15.126-1-pve, and the Ceph version is 17.2.6 (Quincy).
The Ceph cluster/public networks use two dedicated 40 Gb/s interfaces bundled into an LACP bond. The MTU is 9000...
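For reference, a network layout like the one described is usually expressed along the following lines; the interface names, bond name, addresses and subnets below are placeholders, not details taken from the post:

    # /etc/network/interfaces - LACP bond carrying both Ceph networks (hypothetical names/addresses)
    auto ens1f0
    iface ens1f0 inet manual
            mtu 9000

    auto ens1f1
    iface ens1f1 inet manual
            mtu 9000

    auto bond0
    iface bond0 inet static
            address 10.10.10.11/24
            bond-slaves ens1f0 ens1f1
            bond-mode 802.3ad
            bond-xmit-hash-policy layer3+4
            bond-miimon 100
            mtu 9000

    # /etc/pve/ceph.conf - both Ceph networks riding on the bonded link
    [global]
            public_network  = 10.10.10.0/24
            cluster_network = 10.10.10.0/24

With a setup like this, the MTU of 9000 has to match on the bond members and on every switch port along the path, otherwise Ceph traffic can stall on fragmentation or silently dropped jumbo frames.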
Good day everyone!
I have been using Proxmox for quite some time and I'm loving it. I bought a new Dell C6420 with the plan to deploy it at a datacenter to offload our local infrastructure (our site is very prone to power outages).
This is my setup:
Proxmox Packages
Kernel: Linux...
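A package and kernel listing like the one started above is typically captured on the node itself with the pveversion tool, for example:

    # prints the running kernel plus the versions of all Proxmox-related packages
    pveversion -v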
Hello,
I would like to ask you for help because I am running out of ideas on how to solve our issue.
We run a 4-node Proxmox Ceph cluster on OVH. The internal network for the cluster is built on an OVH vRack with a bandwidth of 4 Gbps. Within the cluster, we use CephFS as storage for shared data that...
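For reference, CephFS used as shared Proxmox storage in a setup like this normally appears as a stanza in /etc/pve/storage.cfg roughly like the following; the storage ID, path, and content types are placeholders rather than values from the post:

    # /etc/pve/storage.cfg - hypothetical CephFS storage entry for shared data
    cephfs: shared-data
            path /mnt/pve/shared-data
            content backup,iso,vztmpl

On a hyperconverged cluster the monitor addresses are taken from the local /etc/pve/ceph.conf; only an external Ceph cluster would additionally need monhost and keyring settings in the stanza.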