I have a test cluster of 3 VMs in VirtualBox on my PC.
Their IPs changed as follows:
A: 172.16.0.150 > 192.168.100.1 (NOT a gateway)
B: 172.16.0.151 > 192.168.100.2
C: 172.16.0.152 > 192.168.100.3
Now I get the following message when I run 'service ceph-mon@local status':
Code:
Processor -- bind unable to bind to v2:172.16.0.150:3300/0: (99) Cannot assign requested address.
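If I read the (99) error right, the mon is still trying to bind the old 172.16.0.150 address, which of course no longer exists on any interface after the change. Roughly how I confirmed that on the node (nothing Ceph-specific, just checking the interfaces and ports):
Code:
# Only the new 192.168.100.x address should show up here
ip -4 addr show
# And nothing should already be sitting on the mon ports
ss -tlnp | grep -E ':3300|:6789'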
This is driving me mad, as it has downed Ceph on all 3 nodes with the same error.
I updated the address in /etc/ceph/ceph.conf (which is just a symlink to /etc/pve/ceph.conf). It now reads as follows:
Code:
[global]
auth_client_required = cephx
auth_cluster_required = cephx
auth_service_required = cephx
cluster_network = 192.168.100.1/24
fsid = bbc8efc5-af69-4460-8e18-c5d5e76d0c9e
mon_allow_pool_delete = true
mon_host = 192.168.100.1
ms_bind_ipv4 = true
ms_bind_ipv6 = false
osd_pool_default_min_size = 2
osd_pool_default_size = 3
public_network = 192.168.100.1/24
[client]
keyring = /etc/pve/priv/$cluster.$name.keyring
[mon.localhost]
public_addr = 192.168.100.1
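For what it's worth, I also double-checked what the on-disk config actually resolves to, to rule out the daemon reading a stale copy somewhere (using the mon.localhost section name from the file above):
Code:
# What does the on-disk config resolve to for this mon?
ceph-conf -c /etc/ceph/ceph.conf -s mon.localhost --lookup public_addr
ceph-conf -c /etc/ceph/ceph.conf --lookup mon_host
If those return the new 192.168.100.1 address, then the file itself should be fine, as far as I can tell.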
I rebooted the node multiple times, same error. I scrubbed the whole system of every instance of 172.16.0.150, same error.
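The only place I haven't been able to check is the monitor's own data store. As far as I understand it, the mon binds to the address recorded in the monmap it has stored locally, not to whatever is in ceph.conf, which would explain why editing the file changes nothing. Is something like this the right way to inspect it (assuming the mon ID is 'local', matching the service name)?
Code:
systemctl stop ceph-mon@local
# Dump the monmap the mon has stored locally and see which address it contains
ceph-mon -i local --extract-monmap /tmp/monmap
monmaptool --print /tmp/monmap
If that still shows v2:172.16.0.150:3300, I guess the monmap would need to be edited with monmaptool and injected back with --inject-monmap, but I'd rather check before doing that on all three nodes.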
It is still a fresh cluster that I am using to test Ceph Quincy on Proxmox 7.3 prior to going into production, though at the moment I am wary.
Any advice here?