Ceph warning post-upgrade to v8

Awesome - Ceph dashboard up and running! Thanks!
Love it - I can see all the stats and configs so clearly. I set up rules for spinner disks, NVMe disks, and caching tiers, for example, and you can see the nice little graphs showing how they are working!
 
Not sure how I got it working, but it is working now - I either had a sequencing issue in the dashboard install commands, or recreating all the managers did the trick.

Can someone point me to definitive instructions on how to set this up? I have been trying to follow some of the ones I found and keep hitting issues.

I have the packages installed and the dashboard enabled - I am stumbling at the self-signed certificate section of the instructions...
 

Glad it worked out eventually!

Just an FYI for future readers: if you're going through the process of installing the dashboard from scratch, you'll eventually end up on the SSL/TLS configuration page of the Ceph docs. Due to the whole PyO3 ordeal I elaborated on in my previous posts, I had to change what the ceph dashboard create-self-signed-cert command does - if you run it, you'll instead receive a couple of instructions that show you how to configure a self-signed certificate yourself.

These instructions are almost the same as the manual steps listed in the Ceph docs (linked above); it's just a convenience thing. Better than having the command break outright, at least.
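For reference, the manual steps from the Ceph docs boil down to something like the following - the certificate subject and validity here are just examples, so adapt them to your environment:

Bash:
# generate a self-signed certificate and key
openssl req -new -nodes -x509 \
    -subj "/O=IT/CN=ceph-mgr-dashboard" -days 3650 \
    -keyout dashboard.key -out dashboard.crt -extensions v3_ca

# hand both to the dashboard module
ceph dashboard set-ssl-certificate -i dashboard.crt
ceph dashboard set-ssl-certificate-key -i dashboard.key

# restart the dashboard so it picks up the new certificate
ceph mgr module disable dashboard
ceph mgr module enable dashboard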

Thought I'd mention it here once again - maybe it helps someone.
 
I also observed that no new updates have been showing up for both Debian and Proxmox for the past few weeks... Is there an issue with the repository, or with my subscription? I'm unable to figure it out. Any help?
 
@Max Carrara Any ETA for these patches to reach enterprise repo? Thanks!

The newest Ceph Quincy patches were recently pushed onto the no-subscription repo - that means community users running Quincy should see their Ceph Dashboard working again (or be able to set it up) now.

As always, it takes a little longer until things end up in the enterprise repo, but rest assured, updates will land there soon.
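For community users, pulling the patched packages and confirming the result looks roughly like this (a sketch; run it on each node):

Bash:
# pull the updated Ceph packages
apt update && apt full-upgrade

# afterwards, confirm which Ceph versions the daemons are running
ceph versions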

We've got a lot of work going on at the moment - I just can't spoil anything yet ;)
 
What about Reef? Currently all my PVE8 clusters with Ceph use Reef.
Thanks!
The cluster is also upgraded to Reef. Strangely, I am getting those annoying log messages (ceph crash) on two out of three servers - just waiting for a solution... I moved all the VMs to local storage, just to be on the safe side.
 
This only affects Ceph's Manager "dashboard" and "restful" modules. The storage itself is completely fine if ceph -s returns HEALTH_OK after disabling those modules [1].

[1] ceph mgr module disable restful and ceph mgr module disable dashboard.
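Put together, that check looks like this:

Bash:
# disable the affected manager modules
ceph mgr module disable restful
ceph mgr module disable dashboard

# the cluster should then report HEALTH_OK again
ceph -s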
 

If you're using the enterprise repos, you'll still have to wait a little for the update. The no-subscription repos for both Reef and Quincy now contain all necessary patches for the dashboard.

That being said, we just released Proxmox VE 8.2 - the enterprise repos for PVE should already be updated. That's what I didn't want to spoil earlier. ;)

The enterprise repos for Ceph Reef & Quincy should follow relatively soon.
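Since the thread keeps switching between the no-subscription and enterprise repos: the entry that determines which one a node follows lives in /etc/apt/sources.list.d/ceph.list. On a PVE 8 (Debian Bookworm) node with the enterprise repo it usually looks something like the commented line below - adjust ceph-reef/ceph-quincy to the release you actually run; treat this as a sketch rather than a definitive reference:

Bash:
cat /etc/apt/sources.list.d/ceph.list
# deb https://enterprise.proxmox.com/debian/ceph-reef bookworm enterprise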
 
The enterprise repositories for both Ceph Quincy and Ceph Reef have been updated.

Users of the enterprise repo can now set up the Ceph Dashboard again - or discover that it's most likely up and running after updating, if they've configured it before.
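If you disabled the dashboard and restful modules earlier as a workaround, re-enabling them after the update should be all that's needed - roughly:

Bash:
ceph mgr module enable dashboard
ceph mgr module enable restful

# the dashboard URL should be listed here again
ceph mgr services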

Enjoy!
 
We updated the cluster and those warning messages are gone, but now a different problem has come up: swap usage.

Bash:
root@pve-2:~# free -m
               total        used        free      shared  buff/cache   available
Mem:          257373      169060       49544          51       40652       88313
Swap:           8191        6040        2151

Kindly guide: will this make the system unusable? And why is so much swap usage showing when so much memory is free? Only two VMs are running on this node right now.
 

No, that should not make your system unusable. From what I can tell, there's nothing you have to worry about - it just means that some stuff in RAM has been put onto your disk. That in itself is completely harmless.

You can check your swappiness like this:
Bash:
sysctl vm.swappiness

You can alter that value to suit your needs, where x is a value between 0 and 100; higher values mean the kernel will try to move more pages to swap space:
Bash:
sysctl vm.swappiness=x

To verify the value has been set, you can run the first command again, or try the following:
Bash:
cat /proc/sys/vm/swappiness
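Note that setting the value via sysctl like this only lasts until the next reboot. To make it persistent, you could drop it into a sysctl configuration file - the file name and the value 10 below are just examples:

Bash:
echo 'vm.swappiness = 10' > /etc/sysctl.d/99-swappiness.conf
sysctl --system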

Also, please open a new thread for off-topic questions the next time, thanks!
 

Yes.

Regular users will find that updates for Ceph Quincy are already out; Reef should follow soon, too.

Thanks for your time and work here, Max. Do these fixes only apply to Proxmox-packaged Ceph, or should they have made their way down from upstream Ceph as well? I've just upgraded to 18.2.4-1~bpo12+1 from download.ceph.com/debian-reef and am seeing the same dashboard problems covered in this thread. This Ceph is backing a 7-node Proxmox cluster, so I guess I could transition to Proxmox-packaged Ceph if it's not too painful.
 
