Thanks @ph0x, I have done this and the cluster restored quorum as expected. The new issue I am experiencing is that Ceph is not working properly.
When I open the Ceph page in the UI it times out, and when checking the running services I can see that ceph-osd.target is running, as is ceph.service...
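In case it helps to narrow this down from the CLI rather than the UI: a timing-out Ceph dashboard is very often a monitor problem, so this is roughly what I'd check first (standard Ceph/systemd commands; replace $(hostname) if your mon ID differs from the node's hostname):

# Cluster-wide health; this will hang if there is no monitor quorum
ceph -s

# Monitor and manager daemons on this node
systemctl status ceph-mon@$(hostname).service
systemctl status ceph-mgr@$(hostname).service

# Recent monitor log entries
journalctl -u ceph-mon@$(hostname).service --since "1 hour ago"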
What is your current network setup for Ceph? I'm not certain, but it sounds like you have both Ceph and corosync running on the same network, if Ceph isn't already separated onto its own network.
I'm not completely sure about LACP, but if you have corosync and Ceph running on the same network &...
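As an illustration of what that separation looks like (the subnets below are made up, adjust to your environment), ceph.conf lets you pin client/mon traffic and OSD replication traffic to dedicated networks, keeping both off the corosync link:

# /etc/pve/ceph.conf (example subnets only)
[global]
        # client and monitor traffic
        public_network = 10.10.10.0/24
        # OSD replication and heartbeat traffic
        cluster_network = 10.10.20.0/24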
I've got one node in my 5-node Ceph cluster with a failing OS disk.
I have removed all VMs from this node and am looking to use Clonezilla to clone the disk to a new disk of exactly the same size: a 240GB Kingston SSD to a 240GB Crucial SSD.
I am wondering if I should remove this node from...
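For what it's worth, you shouldn't need to remove the node just to clone the OS disk. One common precaution while the node is offline (a sketch only, not tested against your setup) is to stop Ceph from rebalancing in the meantime:

# Before shutting the node down: prevent its OSDs being marked out
ceph osd set noout

# ...clone the disk, swap it in, boot the node back up...

# Once all OSDs on the node are back up and in:
ceph osd unset noout
ceph -s   # confirm HEALTH_OK before moving on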
My apologies, some information in the initial post was altered for privacy reasons. Regardless, this was solved by fixing a misconfiguration in:
/etc/hosts
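For anyone hitting the same thing: PVE expects the node's hostname to resolve to the cluster-facing IP. A minimal example of what we ended up with (names and addresses here are placeholders, not our real ones):

# /etc/hosts
127.0.0.1       localhost
192.168.2.20    pve-node1.example.local pve-node1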
Hello, we are currently rebalancing our disks across our production environment. This required removing a node from the cluster and reinstalling PVE on a new OS disk. That went fine, and the node was rejoined to the cluster.
The node is available on its proper IP @...
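A couple of commands we found useful for sanity-checking the rejoin (standard PVE tooling, nothing specific to our setup):

# Cluster membership and quorum state
pvecm status

# List all nodes the cluster knows about
pvecm nodes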
auto vmbr0
iface vmbr0 inet static
        address 192.168.2.20
        netmask 255.255.255.0
        gateway 192.168.2.2
        bridge_ports ens33
        bridge_stp off
        bridge_fd 0
Are you sure this is the correct gateway? Generally x.x.x.1 is the gateway address, not .2.
I would give deeper insight, but would recommend just updating...
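If you want to double-check what the node is actually using (a quick sanity check, assuming a standard Debian/PVE install):

# Show the default route currently in effect
ip route show default

# Confirm the gateway actually answers
ping -c 3 192.168.2.2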
It may be best to set up an NFS share for storing ISOs; in the case of a serious failure you would otherwise be without your ISO files for restoration. On another note, remember that MANY games will not actually run in a Windows VM because anti-cheat software complains about virtualization; just another...
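Registering an NFS share is a one-liner on the PVE side; the storage name, server address, and export path below are placeholders:

# Register an NFS export as ISO storage on the cluster
pvesm add nfs iso-store --server 192.168.2.50 \
        --export /srv/nfs/iso --content iso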
This makes perfect sense, thanks a lot! Just to confirm: I don't see osd_scrub_during_recovery anywhere in my ceph.conf.
Is there another place where I may be missing this configuration? Regardless, I will mark this thread as solved.
Thanks again!
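In case it helps anyone finding this later: on recent Ceph releases many options live in the monitors' config database rather than in ceph.conf, so not seeing the option in the file is expected. This is how I ended up checking it (plain Ceph commands):

# Value the OSDs are actually running with
ceph config get osd osd_scrub_during_recovery

# Set it cluster-wide without touching ceph.conf
ceph config set osd osd_scrub_during_recovery true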
This week we have been rebalancing storage across our 5-node cluster. Everything is going relatively smoothly, but I am getting a warning in Ceph:
"pgs not being deep-scrubbed in time"
This only began happening AFTER we made changes to the disks on one of our nodes; Ceph is still healing properly...
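For reference, this is how I've been inspecting the warning while the cluster heals (plain Ceph commands; the PG ID is a placeholder):

# List exactly which PGs are behind on deep scrubs
ceph health detail

# Optionally kick off a deep scrub on a lagging PG by ID
ceph pg deep-scrub <pgid>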
I believe this is safe to remove, I'm just not 100% sure why it is allocated at all, since we are using Ceph.
Sorry if this is a noob question; I am quite familiar with Linux but still getting accustomed to PVE.
After changing the boot device on one of my nodes from a 400GB SSD to a 240GB SSD, I am looking to free up some space in LVM, as I must do this on 4+ other nodes.
While looking through the configuration I noticed that LVM-Thin is using 154GB of reserved space, 0 actual space, and 8MB of...
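For context, what I'm considering is the usual "reclaim local-lvm" recipe, sketched below. This is destructive and assumes the pve/data thin pool really is unused and that root is ext4, so please double-check before copying it:

# Confirm no thin LVs actually live in the data pool
lvs pve

# Remove the unused LVM-Thin pool
lvremove pve/data

# Grow the root LV and its filesystem into the freed space
lvresize -l +100%FREE pve/root
resize2fs /dev/pve/root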
Our team has experienced this twice now: when switches are rebooted during a network upgrade, all nodes in the cluster reboot. I am unable to find any information about this.
Note that the watchdog is turned off.
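A few things worth checking, assuming PVE HA is the culprit: even without a hardware watchdog, pve-ha-lrm arms the softdog software watchdog as soon as HA resources exist, and a node that loses corosync quorum for long enough will self-fence and reboot. To see whether that's what happened here:

# Any HA resources configured? If yes, self-fencing is active
ha-manager status

# Is the software watchdog armed?
lsmod | grep softdog
systemctl status watchdog-mux

# Corosync membership changes around the time of the reboots
journalctl -u corosync --since "2 days ago" | grep -i quorum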