The popular "red nodes" problem, again

usha_cow

New Member
May 17, 2017
Hi! I've read many posts about this, but my problem appeared after Ceph was upgraded to Luminous 12.2.1.

I'm not entirely sure that's the cause, though, because the upgrade wasn't done by me, and I noticed that one node rebooted today. Restarting pve-cluster, pvedaemon, pvestatd, and pveproxy simultaneously on the nodes turns all of them green, but only for 4-5 minutes; then they all go red again, with no errors in the logs. What else can I try? Or is this some incompatibility between the Proxmox and Ceph versions? Sometimes I also can't list the images of my Ceph storage from the PVE console; I suspect that error is connected to the storage being unavailable. How can I debug connectivity with the storage? All my VMs keep working. So I have:

proxmox-ve: 4.4-78 (running kernel: 4.4.35-2-pve)
pve-manager: 4.4-5 (running version: 4.4-5/c43015a5)
pve-kernel-4.4.35-1-pve: 4.4.35-77
pve-kernel-4.4.35-2-pve: 4.4.35-78
lvm2: 2.02.116-pve3
corosync-pve: 2.4.0-1
libqb0: 1.0-1
pve-cluster: 4.0-48
qemu-server: 4.0-102
pve-firmware: 1.1-10
libpve-common-perl: 4.0-85
libpve-access-control: 4.0-19
libpve-storage-perl: 4.0-71
pve-libspice-server1: 0.12.8-1
vncterm: 1.2-1
pve-docs: 4.4-1
pve-qemu-kvm: 2.7.1-1
pve-container: 1.0-90
pve-firewall: 2.0-33
pve-ha-manager: 1.0-38
ksm-control-daemon: 1.2-1
glusterfs-client: 3.5.2-2+deb8u3
lxc-pve: 2.0.6-5
lxcfs: 2.0.5-pve2
criu: 1.6.0-1
novnc-pve: 0.5-8
smartmontools: 6.5+svn4324-1~pve80
zfsutils: 0.6.5.8-pve13~bpo80
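A sketch of how connectivity to the Ceph storage can be checked by hand from a PVE node. The storage ID (`ceph-external`), monitor address, and pool name below are placeholders I made up, not my real values; the keyring path follows Proxmox's `/etc/pve/priv/ceph/<storeid>.keyring` convention.

```shell
# Sketch: manual connectivity checks from a PVE node.
# The storage ID "ceph-external", monitor IP, and pool "rbd" are placeholders.
KEYRING=/etc/pve/priv/ceph/ceph-external.keyring
MON=10.0.0.1
POOL=rbd

if command -v ceph >/dev/null 2>&1; then
    # Same kind of cluster-status query pvestatd makes, but with visible errors
    RESULT=$(ceph -s -m "$MON" --keyring "$KEYRING" --id admin 2>&1)
    # List images the way the GUI's content tab does
    RESULT="$RESULT $(rbd ls -p "$POOL" -m "$MON" --keyring "$KEYRING" --id admin 2>&1)"
else
    RESULT="ceph CLI not installed on this machine"
fi
echo "$RESULT"
```

If these hang or time out, the problem is between the node and the monitors (network, firewall, or keyring), not in the console itself.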
 
Thanks for the reply! I didn't mention that our Ceph storage lives on other physical nodes and was created by a normal Ceph installation, not by the PVE tools. How can I debug Proxmox's connection to the Ceph storage? I think the problem is only in the PVE console.
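For reference, this is roughly what an external RBD storage definition looks like in `/etc/pve/storage.cfg` (the storage ID, pool, and monitor addresses are placeholders, not my real ones). As far as I understand, `pvestatd` queries the cluster using the `monhost` list here plus the keyring copied to `/etc/pve/priv/ceph/<storeid>.keyring`:

```
rbd: ceph-external
        monhost 10.0.0.1 10.0.0.2 10.0.0.3
        pool rbd
        content images
        username admin
```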
 
In our Ceph deployment there is only one Ceph manager (mgr). Could that be the reason for the problem?
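Since Luminous a running ceph-mgr is required for the cluster to report usage statistics, so here is a quick sketch of checking mgr availability (run on a node with the ceph CLI and cluster access; guarded so it degrades gracefully elsewhere):

```shell
# Sketch: check whether an active mgr daemon exists.
if command -v ceph >/dev/null 2>&1; then
    STATUS=$(ceph mgr stat 2>&1)   # reports the active mgr and its availability
else
    STATUS="ceph CLI not installed on this machine"
fi
echo "$STATUS"
```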