Search results

  1.

    HA cluster completely broken after server maintenance

    My PX/Ceph HA cluster with three nodes (PX1 PX2 PX3) needed a RAM upgrade so I shut down PX1, did my work, and powered it back up. Back at my desk I took a look at the management GUI and saw "Nodes online: 3" despite PX2 and PX3's icons in the left-hand panel showing "offline". That's odd, I...
  2.

    Ceph nodes showing degraded until OSD Start

    Hi everyone, Got a 3-node basic Ceph+Proxmox HA cluster set up and it worked great for roughly a day. Each node has one OSD taking up the space of the available RAID array, and it was all working great, so I spun up a couple machines to test it. Come in a day later and nodes two and three are...
  3.

    Totally lost with Ceph

    So I got Ceph running, got OSDs on each node (three nodes), but I'm totally lost on how to properly pool them together so that I can spin up a virtual machine and have high availability. Can anyone shine some light on how to go about just spinning up VMs and having them mirrored/checksummed...
  4.

    Question about Ceph and partitioning host disks

    I have an incredibly horrible, not-at-all-optimal cluster going on with some older HP hardware. All three nodes have RAID 10 with a hot spare, and all have Proxmox 4.4 running on them. Proxmox was installed on each computer with a 10GB limit, thus leaving the rest of each logical RAID drive unformatted...
  5.

    Fresh 4.4 install -- Can SSH, no Web interface (

    I'm setting up a cluster using some old Gen 5 Intel and Gen 2 AMD HPE servers (yes, I know, old stuff) running the latest version of Proxmox installed via CD. All have working RAID, network, etc. They're all in a VLAN managed by a Cisco switch. I set the IP addresses, domain, etc. in the...