After the upgrade and system restarts we have Ceph issues.
At the PVE web page:
1- VMs show no disk under Hardware
2- Ceph status shows no mons.
During the upgrade and restarts I had noout set. I checked each system with ceph -w, one at a time, and waited until it was back to normal before upgrading the next one (roughly the sequence sketched below).
Zabbix just reported a 'PROBLEM: Disk I/O is overloaded on' alert for one of the systems, which is what led me to check for issues.
I'll dig into this for more info and try to solve it; the mon checks I plan to run next are listed after the version output.
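For reference, this is roughly the per-node sequence I used. It is a sketch from memory; the exact upgrade command and hostnames on your nodes may differ.

Code:
# set noout cluster-wide so OSDs are not marked out during the reboots
ceph osd set noout

# on the node being upgraded:
apt-get update && apt-get dist-upgrade
reboot

# after the node is back, watch the cluster until it returns to HEALTH_OK
ceph -w

# only then move on to the next node; when all nodes are done:
ceph osd unset noout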
Code:
# pveversion -v
proxmox-ve: 4.4-78 (running kernel: 4.4.35-2-pve)
pve-manager: 4.4-5 (running version: 4.4-5/c43015a5)
pve-kernel-4.4.35-1-pve: 4.4.35-77
pve-kernel-4.4.35-2-pve: 4.4.35-78
lvm2: 2.02.116-pve3
corosync-pve: 2.4.0-1
libqb0: 1.0-1
pve-cluster: 4.0-48
qemu-server: 4.0-102
pve-firmware: 1.1-10
libpve-common-perl: 4.0-85
libpve-access-control: 4.0-19
libpve-storage-perl: 4.0-71
pve-libspice-server1: 0.12.8-1
vncterm: 1.2-1
pve-docs: 4.4-1
pve-qemu-kvm: 2.7.1-1
pve-container: 1.0-90
pve-firewall: 2.0-33
pve-ha-manager: 1.0-38
ksm-control-daemon: 1.2-1
glusterfs-client: 3.5.2-2+deb8u2
lxc-pve: 2.0.6-5
lxcfs: 2.0.5-pve2
criu: 1.6.0-1
novnc-pve: 0.5-8
smartmontools: 6.5+svn4324-1~pve80
zfsutils: 0.6.5.8-pve13~bpo80
ceph: 10.2.5-1~bpo80+1
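For the "no mons" symptom, these are the checks I plan to run next. This is only a sketch: <hostname> is a placeholder for the node's mon ID, and the config path assumes a standard PVE-managed Ceph setup.

Code:
# is the monitor daemon running on this node?
systemctl status ceph-mon@<hostname>

# ask the cluster for monitor status / quorum
ceph -s
ceph mon stat

# query the local mon directly over its admin socket, bypassing quorum
ceph daemon mon.<hostname> mon_status

# confirm the mon addresses the GUI/clients use are still correct
cat /etc/pve/ceph.conf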