[SOLVED] HELP - problem with ceph Cluster

starnetwork

Renowned Member
Dec 8, 2009
Hi,
I have a Ceph cluster working across multiple nodes on the 10.10.10.0/24 network.
Now I added new nodes to the Proxmox cluster, but these nodes had no access to 10.10.10.0/24,
and I ran pveceph init --network 10.10.10.0/24
on these nodes. After I saw there was no connection, I added the network settings for this subnet, and the network itself is now working.
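For reference, pveceph init --network writes that network into the cluster-wide /etc/pve/ceph.conf, which all nodes share via pmxcfs, so every node (including the new ones) must be able to reach the monitors on that subnet. A minimal sketch of what to check; the file layout is standard, but the IP values below are placeholders for illustration, not values from this cluster:

```shell
# Hypothetical excerpt of /etc/pve/ceph.conf -- placeholder values only.
# On a real node you would inspect the live file instead, e.g.:
#   grep -E 'public_network|cluster_network|mon_host' /etc/pve/ceph.conf
cat > /tmp/ceph.conf.example <<'EOF'
[global]
    public_network = 10.10.10.0/24
    cluster_network = 10.10.10.0/24
    mon_host = 10.10.10.1 10.10.10.2 10.10.10.3
EOF

# Every node must be able to reach the mon_host addresses on the
# public_network, or client operations (pvesm, rbd, backups) time out.
grep -E 'public_network|mon_host' /tmp/ceph.conf.example
```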
The problem is that since then, all my nodes show a timeout, both in the GUI and on the CLI:
root@server1:~# pvesm status
got timeout
got timeout
Name Type Status Total Used Available %
CephPool1_ct rbd inactive 0 0 0 0.00%
CephPool1_vm rbd inactive 0 0 0 0.00%

but
root@server1:~# ceph status
cluster:
id: 411df041-bcd3-029b-aa09-5b91de414bac
health: HEALTH_OK

shows HEALTH_OK, and all guests inside this cluster are still running.
But if I try to restart one of the guests, it will not boot up. I also can't create backups; the error is:
ERROR: Backup of VM 100 failed - rbd error: rbd: couldn't connect to the cluster!
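One way to narrow down "couldn't connect to the cluster" is to run the client commands from the failing node with a timeout, so a hang becomes an explicit failure. These are standard ceph CLI calls and the usual Proxmox keyring location; the 5-second bound and the storage name in the keyring path are assumptions for illustration:

```shell
# If the monitors are unreachable from this node, ceph -s hangs;
# bound it so the failure is visible (5s is an arbitrary choice).
timeout 5 ceph -s || echo "cannot reach the monitors from this node"

# RBD access also needs the per-storage client keyring that Proxmox
# keeps under /etc/pve/priv/ceph/<storage>.keyring (storage name
# CephPool1_vm taken from the pvesm output above).
ls -l /etc/pve/priv/ceph/CephPool1_vm.keyring
```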

Any suggestions? What should I do to restore normal operation?

Regards,
 
Thank you!
I tried to remove it via the UI, but it was only marked red, so the only way was to remove it via the CLI. Big thumbs up!
 
