Hi guys,
After destroying the OSDs, it doesn't let me re-add them because it says the disk is a member of an LVM group, but neither lvscan, nor vgscan, nor ceph-volume lvm list shows that membership.
The OSD Add button doesn't show the disk to add.
This shows the membership.
But this tells...
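In case it helps anyone hitting the same thing: leftover LVM/BlueStore signatures usually have to be wiped before the disk shows up again. A rough sketch, where /dev/sdX is only a placeholder for the affected disk:

ceph-volume lvm zap /dev/sdX --destroy   # removes leftover VGs/LVs/partitions and BlueStore labels from the old OSD disk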
I have realized that my DB partitions are sized at only 1 GB (my bad).
I have one 480 GB SSD holding two DB/WAL partitions that belong to two 6 TB hard drives.
After a spillover warning, I discovered the 1 GB partitions.
My plan was then to resize each one to 220 GB, but because of the spillover that is...
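If the DB ends up on an LVM volume rather than a raw partition, the resize can in principle be done like this (the VG/LV names and OSD id are placeholders, and the OSD should be stopped first):

lvextend -L 220G /dev/ceph-db/db-osd-0                                  # grow the DB logical volume to 220 GB
ceph-bluestore-tool bluefs-bdev-expand --path /var/lib/ceph/osd/ceph-0  # let BlueFS take over the new space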
2/1 in a test environment is OK,
but in a production system it's a no-go. If you lose one OSD, nothing happens, but if something else fails before the rebuild completes, you are game over. Be ready for a mass restore from backups, because you will have data loss for sure.
With 3/2, if you lose a second...
Please post: ceph -s, ceph health detail, pvecm status.
So you don't have to rebuild the pool, just increase the replica size to 3.
Something like
ceph osd pool set POOL_NAME size 3
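And, if you want the 3/2 mentioned above, the matching minimum (POOL_NAME is a placeholder here as well):

ceph osd pool set POOL_NAME min_size 2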
Gotcha!!!
It was a parameter: mon osd min in ratio = 0.75. Changing it to 0.70 lets me lose more OSDs and still have them marked as out.
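In case it helps someone else, depending on the Ceph version this can be set either in ceph.conf under [mon] (as "mon osd min in ratio = 0.70") or, on recent releases, at runtime with something like:

ceph config set mon mon_osd_min_in_ratio 0.70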
Thank you for your help.
Can anybody test this?
I'm testing this:
1.- If one or several OSDs go down in one host, they work as expected: marked down and, after 600 seconds, marked as out (see the snippet after this list). Ceph rebuilds itself.
2.- If one host goes down, all the OSDs in that host are marked as down, and after 600 seconds marked...
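That 600 second delay should just be the default mark-out timeout; it can be checked or changed with something like this (the value is in seconds, and the 900 below is only an example):

ceph config get mon mon_osd_down_out_interval       # show the current mark-out timeout (default 600)
ceph config set mon mon_osd_down_out_interval 900   # example: wait 15 minutes before marking OSDs out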
Yep, I know.
But one of the OSDs never goes to down/out (it stays at down/in). All the others get marked as out and recovery starts, but that one OSD remains faulty.
Hi.
I have a problem.
I am testing a 7 node cluster.
Each node has 1 NVMe (4 OSDs) and two HDs (2 OSDs), so 6 OSDs per node.
There are two replication rules (one for nvme and one for hdd) and two pools (fast and slow) matching the rules.
Everything is OK, but when I shut down one node, and thereafter another...
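For anyone reproducing this: device-class rules of that kind are typically created with something along these lines (the rule names here are only placeholders):

ceph osd crush rule create-replicated nvme_rule default host nvme   # replicate across hosts, nvme OSDs only
ceph osd crush rule create-replicated hdd_rule default host hdd     # replicate across hosts, hdd OSDs only
ceph osd pool set fast crush_rule nvme_rule
ceph osd pool set slow crush_rule hdd_rule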
OK, issue resolved.
It was my fault.
The IGMP querier was not enabled on the VLAN at the switch. I changed the VLANs and forgot to enable it.
I apologize for the inconvenience caused.
Thank you very much, and sorry for wasting your time and brainpower.
I will run omping for 10 minutes...
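The roughly 10-minute multicast check from the Proxmox docs can be run with something like this (replace the hostnames with your own nodes):

omping -c 600 -i 1 -q node1 node2 node3   # 600 probes at 1 s intervals, about 10 minutes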
There is no bridge on the corosync interfaces...
more /etc/network/interfaces

auto lo
iface lo inet loopback

auto enp175s0f0
iface enp175s0f0 inet static
        address 10.9.5.156
        netmask 255.255.255.0
#Corosync RING0

auto enp175s0f1
iface...
I have a 7 node cluster.
Corosync is configured with 2 rings on two different ethernet interfaces.
Everything runs OK.
When I shut down a node (for example node4), the cluster notices the shutdown and all goes well.
But when I power up node4 again, the whole cluster goes down...
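While it is in that state, it may be worth looking at the ring status on the surviving nodes with something like:

corosync-cfgtool -s   # status of each corosync ring on this node
pvecm status          # quorum and membership as Proxmox sees it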
Is it necessary to declare an OVS internal port (OVSIntPort) on the Proxmox host in order to use tagged VLANs in the VMs?
I mean:
I have this test configuration:
allow-vmbr0 bond0
iface bond0 inet manual
        ovs_bridge vmbr0
        ovs_type OVSBond
        ovs_bonds enp176s0f0 enp176s0f1...
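For comparison, this is roughly what an OVSIntPort stanza looks like when the host itself needs an address on a tagged VLAN (the port name, tag and address below are made-up examples, not from my real config):

allow-vmbr0 vlan50
iface vlan50 inet static
        ovs_type OVSIntPort
        ovs_bridge vmbr0
        ovs_options tag=50
        address 10.9.50.156
        netmask 255.255.255.0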