ceph 19.2.3

  1. VM migration error on node reboot

    I'm getting a VM migration error during a node reboot for a kernel update, on a 3-node hyperconverged cluster with Ceph installed. The error is: Cleanup after stopping VM failed - org.freedesktop.DBus.Error.NoReply: Did not receive a reply. Possible causes include: the remote application did not...
  2. Monitoring Ceph status in Zabbix

    Hello all, I'm running a Proxmox 9 cluster with Ceph Squid. I want to monitor the status of Ceph with my Zabbix install. There is a nice template for Zabbix which automatically adds OSDs and pools to the monitoring when I add them in Proxmox; it detects all of that by itself. It also has a ton of... (a minimal health-probe sketch follows after this list)
  3. Duplicate ceph mon/mgr

    Hello :) my Ceph dashboard shows duplicate entries for the ceph mon/mgr on node01 for some reason. It doesn't seem to affect anything, but I'd like to get rid of it :D pveversion 9.0.10, ceph 19.2.3. I found this thread with almost the same problem as mine, changing /etc/hostname from FQDN to...
  4. Proxmox cluster, 3 nodes: monitors refuse to start

    Hi all, I am facing a strange issue. After having used a Proxmox PC for my self-hosted app, I decided to play around and create a cluster to dive deeper into the HA topics, so I downloaded the latest ISO and built up a cluster from scratch. My cluster works, I can see every node, my Ceph storage says...
  5. Disk throughput for Windows on Ceph-backed storage

    I have a lab with a recent CPU on the Proxmox (v. 9) host, backed by Ceph storage. Proxmox boots from NVMe. Ceph (19.2.3) runs on separate hardware over 10 Gbit links, and for the lab we're just using consumer-grade SATA SSDs. All in all it works very well. I am now benchmarking. I installed a... (a rough in-guest write check is sketched after this list)
  6. Ceph Experimental POC - Non-Prod

    I have a 3-node cluster. It has a bunch of drives: 1 TB cold rust, a 512 GB warm SATA SSD, and three 512 GB non-PLP Gen3 NVMe drives (1 Samsung SN730 and 2 Inland TN320). I know not to expect much - this is pre-prod - the plan is to eventually get PLP drives next year. The 10Gb Emulex CNA is working very well with FRR...
  7. Ceph rebuild on a cluster that borked after IP change?

    First-time poster, please be gentle. I have a 5-node cluster on which I recently (stupidly) changed the IP of every node through a process I cannot remember (thanks, alcohol). Node makeup (not including PVE install drives): node 1: 22 1TB OSD drives, node 2: 23 500GB OSD drives, node 3: 16 16GB OSD...
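
For the Zabbix thread above (item 2), here is a minimal sketch of one way to feed Ceph health into a monitoring item, assuming the `ceph` CLI and a suitable keyring are available on the node running it. It parses `ceph status --format json`; the field names below match recent (Squid-era) output and are an assumption for other releases, so check them against your own cluster before wiring this into a Zabbix UserParameter or external script.

```python
#!/usr/bin/env python3
"""Rough Ceph health probe for a monitoring agent.

Assumes the `ceph` CLI is installed and the node has a keyring with
permission to run `ceph status`. JSON field names match Squid-era
output and may need adjusting on other releases.
"""
import json
import subprocess


def ceph_status() -> dict:
    # `ceph status --format json` is the standard machine-readable status call.
    out = subprocess.run(
        ["ceph", "status", "--format", "json"],
        check=True, capture_output=True, text=True,
    ).stdout
    return json.loads(out)


if __name__ == "__main__":
    s = ceph_status()
    health = s.get("health", {}).get("status", "UNKNOWN")  # e.g. HEALTH_OK
    osdmap = s.get("osdmap", {})
    print(f"health={health}")
    print(f"osds_total={osdmap.get('num_osds', 'n/a')}")
    print(f"osds_up={osdmap.get('num_up_osds', 'n/a')}")
    print(f"osds_in={osdmap.get('num_in_osds', 'n/a')}")
```

This only supplements the ready-made template mentioned in the thread; the template's own discovery of OSDs and pools stays untouched.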
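
For the Windows throughput thread (item 5), dedicated tools such as fio or CrystalDiskMark inside the guest are the usual choice. Purely as a sanity check, the sketch below times one large sequential write from inside a VM; the scratch path and write size are arbitrary assumptions, and the result only tells you whether the virtual disk is grossly underperforming.

```python
#!/usr/bin/env python3
"""Crude sequential-write sanity check from inside a guest.

Not a substitute for fio/CrystalDiskMark: it measures a single buffered
sequential write followed by fsync, nothing more.
"""
import os
import time

TEST_FILE = "bench.tmp"   # assumed scratch path on the disk under test
TOTAL_MB = 1024           # write 1 GiB in 4 MiB chunks
CHUNK_MB = 4
CHUNK = os.urandom(CHUNK_MB * 1024 * 1024)  # random data so zero-detection can't cheat

start = time.monotonic()
with open(TEST_FILE, "wb") as f:
    for _ in range(TOTAL_MB // CHUNK_MB):
        f.write(CHUNK)
    f.flush()
    os.fsync(f.fileno())  # make sure data actually reached the (virtual) disk
elapsed = time.monotonic() - start

print(f"sequential write: {TOTAL_MB / elapsed:.1f} MB/s over {elapsed:.1f}s")
os.remove(TEST_FILE)
```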