Hi David,
I know this thread is old, but are you still experiencing this behaviour on your cluster? We are seeing very similar behaviour on our setup.
The KVM hosts are running the latest kernels, and the Ceph cluster is running Ceph Jewel on CentOS 7.
kvm0v0 (Stretch) ~ # cat /etc/apt/sources.list.d/proxmox.list
# This file is managed by Puppet. DO NOT EDIT.
# proxmox
deb http://download.proxmox.com/debian/pve/ stretch pve-no-subscription
All of these VMs are "long lived" production VMs that are running the majority of the time.
The Ceph storage runs on CentOS 7 and uses the packages provided by Ceph. We have also increased the debug verbosity to level 5 on the following subsystems:
* osd
* rbd
* filestore
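For reference, this is roughly how we raised those subsystems to level 5 — a sketch only; the exact section and whether you apply it at runtime or in ceph.conf depends on your deployment:

```
# Runtime, on a monitor node (takes effect immediately, lost on restart):
ceph tell osd.* injectargs '--debug-osd 5 --debug-rbd 5 --debug-filestore 5'

# Persistent, in /etc/ceph/ceph.conf on the OSD nodes:
[osd]
debug osd = 5
debug rbd = 5
debug filestore = 5
```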
From...
Hi All,
We are currently seeing a very large number of "socket closed" messages for Ceph OSDs on our KVM Proxmox heads.
Current KVM Proxmox infrastructure:
Dell R710
Dual CPUs
Dual 10Gbps bonded copper connection to the Ceph storage (Current ceph version =>...