Ceph issue on Proxmox

Kbtech

New Member
Nov 25, 2024
Hi,
Not sure if this is the correct place to ask these questions, but here goes. Please correct me if I am wrong.

I am running a three-node cluster on Proxmox 8.3.0 with a 10GbE mesh network for Ceph. I was using Ceph "Quincy" and decided to upgrade to Ceph "Squid". The upgrade went well and all Ceph components are on v19.2, but there is an error saying "12 out of 12 OSDs are unreachable". With further digging I found the error "osd.xx's public address is not in 'fc00::1/64' subnet" for each OSD.
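For reference, the subnet membership test that the warning refers to can be reproduced with Python's standard-library ipaddress module (the OSD address below is made up for illustration, not taken from the cluster above):

```python
import ipaddress

# The configured Ceph public_network from the warning message.
# strict=False because "fc00::1/64" has host bits set.
public_network = ipaddress.ip_network("fc00::1/64", strict=False)

# Hypothetical OSD public address; an address inside fc00::/64
# should pass the check that Ceph is reporting as failing.
osd_addr = ipaddress.ip_address("fc00::1:2")

print(osd_addr in public_network)  # → True
```

If this prints True for your actual OSD addresses, the addresses really are inside the configured subnet, which suggests the warning is coming from Ceph's own check rather than a genuine misconfiguration.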

I would have thought this type of error would have an impact on performance or operation, but no, all seems to be working OK. It would be nice to clear the error. Any help or insight would be much appreciated.

Kind Regards
Brett
 
While I do not have a solution to offer, I can confirm my setup is experiencing the same issue after upgrading from Reef to Squid. I have tried a number of tweaks to the Ceph configuration, including setting public_network and cluster_network to much wider subnet masks (/16, /8), without any effect.
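For anyone wanting to try the same tweaks, they correspond to something like the following in ceph.conf; the subnet values here are illustrative examples, not a recommendation:

```ini
# /etc/ceph/ceph.conf (excerpt) -- example subnets only
[global]
public_network  = fc00::/16
cluster_network = fc00::/16
```

As noted above, widening the masks did not make the warning go away in my case.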

I can also confirm that my setup appears to be otherwise healthy -- performance seems nominal.
 
Thanks for your reply. Keep an eye out, I will post here if I find a solution. It's good to hear that I am not the only one affected by this issue.
 
Hi, yes, this is a documented problem (see the Ceph bug tracker), and there are others as well. In short, I had to disable IPv6 for Ceph; only IPv4 works OK for me.
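For completeness, "disabling IPv6 for Ceph" presumably means binding the messengers to IPv4 only, along the lines of the sketch below. The ms_bind options are standard Ceph settings, but the IPv4 subnet is just an example and would need to match your mesh network:

```ini
# /etc/ceph/ceph.conf (excerpt) -- example IPv4-only configuration
[global]
ms_bind_ipv4   = true
ms_bind_ipv6   = false
public_network = 10.10.10.0/24
```

The OSDs and monitors would need to be restarted (and the monitors' addresses changed to IPv4) for this to take effect.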
 
