[SOLVED] proxmox ceph ( Nautilus to Octopus) upgrade issue

ilia987

Active Member
Sep 9, 2019
As part of upgrading to Proxmox 7 I did the following steps:
After running (ceph osd pool set POOLNAME pg_autoscale_mode on) on a pool, it started rebalancing (should finish in 5-10 hours).
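For reference, this is roughly what that looks like (POOLNAME is just a placeholder for the actual pool name; the status commands are the usual ones for watching progress, nothing specific to my setup):

Code:
ceph osd pool set POOLNAME pg_autoscale_mode on
ceph osd pool autoscale-status    # shows current vs. target PG counts per pool
ceph -s                           # watch the rebalance/backfill progress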

The issue I found is this:
I have 4 nodes with Ceph configured and around 10 more without Ceph.
All nodes (4+10) have access to the Ceph storage and mount LXC containers and VMs from there.
After the update, only the nodes with Ceph installed still have access to the Ceph storage; the others show a question mark icon next to the Ceph storage (a quick check is sketched below).

Before the upgrade I had access to Ceph from all nodes in the Proxmox cluster.
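For anyone hitting the same thing, the storage state can be checked per node with the standard pvesm tool (STORAGE_ID stands for whatever your Ceph storage is called in /etc/pve/storage.cfg):

Code:
pvesm status                        # lists all storages and whether they are active
pvesm status --storage STORAGE_ID   # check just the Ceph storage entry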
 
I'll quote the upgrade guide:
Note: As said, that will cut-off any old client after the ticket validity times out (72h), so only execute that once the client warning was resolved and disappeared.
This means that after setting ceph config set mon auth_allow_insecure_global_id_reclaim false, older clients can no longer access your cluster.
To fix this, add the ceph-octopus repository to your non-Ceph nodes and run apt update followed by apt full-upgrade, roughly as sketched below.
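Roughly like this on each non-Ceph node (assuming Proxmox VE 6 on Debian Buster; adjust the suite name if you are already on Bullseye):

Code:
echo "deb http://download.proxmox.com/debian/ceph-octopus buster main" > /etc/apt/sources.list.d/ceph.list
apt update
apt full-upgrade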
 
Update:
After the upgrade to Proxmox 7 it is solved,
and after updating Ceph to 16 (Pacific) it still works.
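In case it is useful, this is how I would verify that everything ended up on the same version afterwards (standard commands, nothing specific to my setup):

Code:
ceph versions     # all mons/osds/clients should report 16.x (pacific)
pveversion -v     # confirms the Proxmox 7 package versions on each node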
 
