cluster ceph

  1. Cluster setup and HA configuration suggestions

    Hi all, I’m putting together a Proxmox cluster with Ceph for HA and wanted to get some feedback before I go ahead and deploy everything. What I’m aiming for is fairly simple: I want proper HA with no data loss and automatic failover, but at the same time I’d still like one node (an R640) to...
  2. Recommended hardware for modest upgrade of 3 PVE nodes

    Dear all, I am running PVE in what I consider to be a rather typical "IT nerd" setup: a 3-node cluster on consumer hardware, private use only. 2 nodes are built identically: AMD 3200G (2C/4T) APU, 32GB DDR4 RAM, M.2 NVMe for OS, 2x4TB HDD for storage (connected to onboard SATA), GbE NIC 1...
  3. Ceph full-mesh (no switch) performance issues when live migrating Windows VM

    Hello everyone, I'm struggling with the Ceph full-mesh cluster I created. I built it using a bond on each node across the two 25Gbps interfaces of the fiber NIC. Later I added a second Corosync link, though I'm not sure it takes over if the first goes down. I have created a pool...
  4. [SOLVED] Proxmox Cluster 3 nodes, Monitors refuse to start

    I posted this in the wrong section before, so I am posting it here hoping this is the right place. Hi all, I am facing a strange issue. After having a Proxmox PC for my self-hosted apps, I decided to play around and create a cluster to dive deeper into the HA topics. I downloaded the latest ISO...
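    For a monitor that refuses to start, the usual first step is to read the daemon's own log. A minimal diagnostic sketch, assuming a node named pve-01 (the name is hypothetical):

    ```shell
    # Check the monitor service and its recent log on the affected node
    systemctl status ceph-mon@pve-01.service
    journalctl -u ceph-mon@pve-01.service -b --no-pager | tail -n 50

    # Verify the monitor/network settings the mon tries to bind to
    grep -E 'mon_host|public_network|cluster_network' /etc/pve/ceph.conf
    ```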
  5. [SOLVED] Ceph not working / showing HEALTH_WARN

    Hi, I'm fairly new to Proxmox and only just set up my first actual cluster, made of 3 PCs/nodes. Everything seemed to be working fine and showed up correctly, and I started to set up Ceph for shared storage. I gave all 3 PCs an extra physical SSD for the shared storage, in addition to the Proxmox...
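    When a fresh cluster shows HEALTH_WARN, Ceph can name the exact warning itself; a quick triage sketch (no assumptions beyond a working ceph CLI on a node):

    ```shell
    ceph -s             # overall cluster state
    ceph health detail  # expands HEALTH_WARN into the concrete warnings
    ceph osd tree       # confirms every OSD is both "up" and "in"
    ```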
  6. CEPH Erasure Coded Configuration: Review/Confirmation

    First, let me contextualize our setup: we have a 3-node cluster, where we will be using Ceph for storage hyperconvergence. We are familiarizing ourselves with Ceph and would love to have someone more experienced chime in. All of our storage hardware is SSDs (24x 2TB NVMe, 8 per server)...
  7. VMs not migrating when Ceph is degraded in 3-Node Full-Mesh Cluster

    Hello Community, I am currently setting up our new 3-node Proxmox cluster; I'm pretty new to Proxmox itself. We are using full-mesh with 25Gbit/s cards for Ceph, 10Gbit/s cards for Corosync/VMBR, and 18 (6 per node) SATA 6G enterprise SSDs. Ceph performance took a bit of testing, but we are now at a...
  8. [SOLVED] [DUPLICATED] [TO BE DELETED] Proxmox VE 9.0.3 + Ceph 19.2 Cluster: OSD services don't come up after node reboot

    Hi all, I'm new to Proxmox. For many years I've worked with VMware, but I'm starting to enjoy Proxmox very much, and now I'm migrating all the solutions I manage to Proxmox. One of those solutions is a VMware cluster that I intend to replace with a Proxmox cluster. So, I got 3 servers, with...
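    When OSDs don't come back after a reboot, re-activating their LVM volumes is often enough; a sketch, with OSD id 0 as a hypothetical example:

    ```shell
    # Show the OSD volumes ceph-volume knows about, then re-activate them all
    ceph-volume lvm list
    ceph-volume lvm activate --all

    # Inspect one OSD service if it still fails (id 0 is an example)
    systemctl status ceph-osd@0.service
    journalctl -u ceph-osd@0.service -b --no-pager | tail -n 50
    ```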
  9. [SOLVED] qm remote migrate from single node to cluster, from ZFS storage to Ceph storage, getting error "...invalid bootorder: device 'sata0' does not exist"

    Hi folks, I'm trying to manually live-migrate a VM from a single-node Proxmox to a Proxmox cluster. I have tried many different things, like checking the fingerprint and the permissions on the token, and tried with the VM online and offline, among many others, but I can't get rid of the error device...
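    That error usually means the VM's boot order still references a device (here sata0) that no longer exists after the storage switch; one way to check and repair it on the source node, assuming a hypothetical VMID of 100:

    ```shell
    # See what the boot order points at versus the disks that actually exist
    qm config 100 | grep -E '^(boot|sata|scsi|virtio|ide)'

    # Point the boot order at a disk that does exist before migrating again
    qm set 100 --boot order=scsi0
    ```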
  10. Ceph monitor "out of quorum" on 3-node cluster, can I remove and re-add?

    I have a 3-node Proxmox cluster running Ceph. Recently it gave a warning that one of the three monitors is down or "out of quorum":

    root@pve-02:~# ceph -s
      cluster:
        id:     f9b7ff0a-17b9-40d8-b897-cebfffb0ee8d
        health: HEALTH_WARN
                1/3 mons down, quorum pve-01,pve-03...
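    Yes, destroying and re-creating a single monitor is generally safe while the other two still hold quorum. A sketch using the node name from the excerpt:

    ```shell
    # On the node whose monitor is down (pve-02):
    pveceph mon destroy pve-02
    pveceph mon create

    # Afterwards, quorum should list all three monitors again
    ceph -s
    ```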
  11. Free Range Routing can't see node's neighbours

    Let me preface this by saying I'm not a networking engineer, I have a technical background but in this area I'm a hobbyist; this is my first post here, please tell me if I'm doing anything wrong, and English is not my first language, so forgive me if something is not clear or is phrased poorly...
  12. [SOLVED] Ceph says OSDs are not reachable

    Hello all, I have a 3-node cluster set up using the guide here: https://packetpushers.net/blog/proxmox-ceph-full-mesh-hci-cluster-w-dynamic-routing/ Everything was working fine when using Ceph Quincy and Reef. However, after updating to Squid, I now get this error in the health status...
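    Ceph Squid added a health check that each OSD's reported address must fall inside public_network, which is what typically breaks routed full-mesh setups after the upgrade; a sketch to compare the two, with OSD id 0 hypothetical:

    ```shell
    # What Ceph thinks the public network is
    grep public_network /etc/pve/ceph.conf

    # What address an OSD actually reports (compare against the line above)
    ceph osd metadata 0 | grep -E 'front_addr|back_addr'
    ```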
  13. [SOLVED] On node crash, OSD is down but stays "IN", and all VMs on all nodes remain in error and unusable

    Hello, I work for multiple clients, and one of them wanted us to create a Proxmox cluster to give them fault tolerance and a good, cost-efficient hypervisor. It's the first time we've put a Proxmox cluster into a production environment for a client; we've only used single-node Proxmox before. Client...
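    A down OSD is only marked "out" after mon_osd_down_out_interval (600 s by default); if recovery never starts, the OSD can also be marked out by hand. A sketch, with OSD id 3 as a hypothetical example:

    ```shell
    # How long the cluster waits before marking a down OSD "out"
    ceph config get mon mon_osd_down_out_interval

    # Mark the crashed node's OSD out manually so recovery/backfill can begin
    ceph osd out 3
    ```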
  14. Having some issues with creating an RBD storage

    Hello everyone, so I've decided to give a Proxmox cluster a go and got some nice little NUC-like devices to run Proxmox. The cluster is as follows:

    Cluster name: Magi
    Host 1: Gaspar
      vmbr0: IP 10.0.2.10 on network device eno1
      vmbr1: IP 10.0.3.11 on network device enp1s0...
  15. Limit the max number of VMs per host in HA environment

    TL;DR - I don't care which host the High Availability VMs move to, as long as no host exceeds 4 VMs. Can this be done? Full story - I am planning out a 4-host cluster (plus 1 QDevice) of PVE with HA active, using Ceph storage, and plan to run 8 Windows Server VMs across all hosts. I...
  16. Out-of-RAID disks are not being detected

    Hello, I'm new to Proxmox, so if any term is wrong, help me out. I have an HPE ProLiant DL380 G6 with 7 disks at my lab. I have installed Proxmox on two of the disks with RAID 10. I have left the other ones out of the RAID, as I want to test Ceph for my environment. The other five disks are not...
  17. 10GbE cluster without switch

    I have bought 3 identical servers; each has a dual-port 10GbE network adapter. I want the cluster to communicate via the 10GbE network adapter, but each individual server via a 1GbE port to the router. If one of the nodes fails, I want it to fail over to the next server. So far, I have...
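    For a switchless 3-node mesh, one simple option is a broadcast-mode bond over both 10GbE ports on each node, loosely following the Proxmox full-mesh wiki; an /etc/network/interfaces sketch for one node (addresses and NIC names are examples, not from the post):

    ```
    auto bond0
    iface bond0 inet static
        bond-slaves enp1s0f0 enp1s0f1
        bond-mode broadcast
        bond-miimon 100
        address 10.15.15.50/24
        # the other two nodes use 10.15.15.51 and .52 on the same subnet
    ```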
  18. Connection refused from 2 of 4 nodes on a cluster

    Hi, I get the error message "595 Connection refused" when I try to manage 2 of the 4 nodes of a cluster. This is a production cluster, and every node comes with 1 dedicated dual-port 10Gig NIC, one port for HA and one for Ceph. The management network is on the default 1Gig NIC. Checking the logs, I see...
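    "595 Connection refused" between cluster nodes often comes down to the pveproxy service or stale cluster certificates; a first-pass sketch (the IP in the curl line is an example):

    ```shell
    # From a working node: is the API port of the affected node even open?
    curl -k -m 5 https://192.168.1.12:8006

    # On the affected node: check the proxy, refresh certs, restart
    systemctl status pveproxy pvedaemon
    pvecm updatecerts --force
    systemctl restart pveproxy
    ```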
  19. Proxmox 3-node cluster HW suggestion

    Hi all, I am trying to build a local 3-node Proxmox cluster with Ceph for HA. These would be the 3 server nodes:

    MBO: ASUS ProArt B650-CREATOR
    CPU: AMD Ryzen 9 7900
    MEM: 128GB DDR5
    PVE SSD: 2 x Micron 7400 PRO M.2 1.92TB on MBO
    Ceph storage: 4 x Micron 7400 PRO M.2 1.92TB on ASUS Hyper M.2...
  20. Partitioning approach for Ceph cluster installation on single SSD drive

    Hello folks, totally new to Proxmox clustering; I just recently got 2 mini PCs for a home lab. While setting up the Ceph cluster, when I got to creating the OSD I couldn't target any disks, as the single 1TB SSD I have is in use by Proxmox. I followed the default configs for...
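    The GUI (and pveceph osd create) will only accept a whole, unused disk, so with a single SSD the usual workaround is carving out a spare partition and creating the OSD with ceph-volume directly; workable for a lab, but not officially supported by the Proxmox tooling. A sketch, with the partition name hypothetical:

    ```shell
    # Create a BlueStore OSD on a leftover partition of the boot SSD
    ceph-volume lvm create --data /dev/nvme0n1p4
    ```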