Hi,
Yesterday an updated PVE7 node crashed - a Dell R440 - Xeon(R) Silver 4210 CPU @ 2.20GHz (2 sockets) with 256 GB RAM.
The weird thing is that the network was still OK - and corosync said that all 6 nodes were reachable.
The GUI showed the involved node grayed out - its VMs continued to respond to ping...
Thanks, it seems to be OK on my VirtualBox test environment.
I just forgot the following (a rough command sketch follows the list):
- Migrate all VMs to another node
- Stop all OSDs and then remove them from the config
- Remove the existing ceph monitor/manager
- Shut down the node
- Remove the node from the cluster
- Remove the host from the ceph config => ceph...
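For reference, a rough sketch of the commands involved, run on the node being retired - the OSD id and the node name "pve3" are placeholders only, adjust to your setup:

ceph osd out <osd-id>
systemctl stop ceph-osd@<osd-id>
pveceph osd destroy <osd-id> --cleanup
pveceph mon destroy pve3
pveceph mgr destroy pve3
# once the node is shut down, from any remaining node:
pvecm delnode pve3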
Hi,
I recently upgraded a 6-node Ceph PVE cluster from 6.4 to 7.
We noticed higher CPU usage and performance issues.
I want to recreate the whole cluster.
What I plan to do for each node (a rough sketch of the migration step follows the list):
- Migrate all VMs to another node
- Stop all OSDs and then remove them from the config
- Shut down the node
- Remove...
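For the migration step, something along these lines should drain a node (the target node name is a placeholder; qm list only shows the VMs local to the node it runs on):

for vmid in $(qm list | awk 'NR>1 {print $1}'); do
    qm migrate $vmid <target-node> --online
done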
Hi,
Thanks for your answer.
First, aio=threads seems to fix the issue. No complaints this morning!
I was pretty confident that the issue was related to PVE7: the complaints involved all the relatively highly loaded LAMP servers on the updated cluster.
MySQL queries were incredibly slow...
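For anyone hitting the same thing: aio is a per-disk option, so something like this should do it (VM id, disk slot and volume name are examples only):

qm set 100 --scsi0 local-lvm:vm-100-disk-0,aio=threads

followed by a full stop/start of the VM so the pending change is applied. As far as I know, PVE7 changed the default aio for new drives to io_uring, which might explain why this only showed up after the upgrade.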
I can confirm the issue with all LAMP servers (Debian based) since the PVE7 upgrade.
I don't know if it's related to the kernel or to QEMU.
I will move a VM back to PVE 6.4 in order to confirm.
Hi,
I recently upgraded a 6-node PVE cluster to PVE7.
cat /sys/devices/system/cpu/cpu2/cpufreq/scaling_governor
performance
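To double-check all cores at once, not just cpu2 (plain shell, nothing PVE-specific):

cat /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor | sort | uniq -c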
Users report some issues (lag, browsers frozen for 1 min) for several LAMP VMs...
Everything seems to be OK inside the VMs, including the PVE nodes' resources...
I'm...
Thanks,
Yes, the PVE nodes are connected to the same stack of 2 switches.
For each node:
- LACP 2 x 10Gb (VM bridge + Corosync ring1)
- LACP 2 x 10Gb (Ceph + Corosync ring0)
Yes all VMs use HA (default config).
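As for the bonds, each one is a plain 802.3ad bond in /etc/network/interfaces, roughly like this (NIC names and hash policy are examples from my setup, adjust as needed):

auto bond0
iface bond0 inet manual
    bond-slaves enp59s0f0 enp59s0f1
    bond-mode 802.3ad
    bond-miimon 100
    bond-xmit-hash-policy layer3+4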
Hi,
I plan to upgrade a switch stack (2 switches), which means a downtime of 5 min (or more), I guess.
6 PVE Ceph nodes are connected to this stack, with nearly 80 KVM VMs in HA.
Even with all VMs shut down for 10 min, I think the PVE nodes would reboot, and I don't know what could be the...
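If I understand the HA stack correctly, the reboot would come from the watchdog fencing the nodes once corosync loses quorum. A possible workaround (my assumption, not a tested procedure) would be to stop the HA services on every node for the maintenance window, lrm first, then crm:

systemctl stop pve-ha-lrm
systemctl stop pve-ha-crm

and start them again once the switches are back.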
Hi,
I recently noticed that my Debian 10 VMs report high disk usage (80-100%) in atop.
Here are some screenshots of the same VM (completely idle) running on several PVE kernels:
Kernel 5.4.114
Kernel 5.4.106
Kernel 5.4.73
Any ideas?
Running some basic 'dd' write tests inside the VM...
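e.g. along these lines (file path and sizes are arbitrary; oflag=direct bypasses the page cache so the result reflects the virtual disk):

dd if=/dev/zero of=/tmp/ddtest bs=1M count=1024 oflag=direct
dd if=/dev/zero of=/tmp/ddtest bs=4k count=1000 oflag=dsync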
Hi,
I'm probably affected by this issue too.
I also noticed, for idle VMs on Debian 10 (latest kernel 4.19.117): atop => DSK | sda | busy 97%, with avio > 200 ms.
I had to use Open vSwitch because of the Broadcom cards.
=>...