High I/O wait time on VMs after Proxmox 6 / Ceph Octopus update

Hi,
a few days ago we upgraded our 5-node cluster from Proxmox VE 5.4 to 6 and Ceph from Luminous to Octopus (first Luminous to Nautilus, then Nautilus to Octopus).
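For reference, here is a minimal sketch (Python, assuming the ceph CLI is available on a cluster node and the usual JSON layout of `ceph versions`) to confirm that every daemon is actually running Octopus (15.x) after the two-step upgrade:

Code:
#!/usr/bin/env python3
# Hedged sketch: check that all Ceph daemons report an Octopus (15.x) build.
# Assumes the standard JSON output layout of `ceph versions`.
import json
import subprocess

out = subprocess.run(["ceph", "versions", "--format", "json"],
                     capture_output=True, text=True, check=True).stdout
versions = json.loads(out)

# "overall" maps full version strings to daemon counts, e.g.
# "ceph version 15.2.x (...) octopus (stable)": 15
for ver, count in versions.get("overall", {}).items():
    marker = "OK" if "octopus" in ver else "MIXED!"
    print(f"{marker}  {count} daemon(s) on: {ver}")

If anything other than an Octopus build shows up there, some daemons are probably still running on the old binaries and need a restart.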

After the upgrade we noticed that all VMs started raising alerts on our Zabbix monitoring system with the reason "Disk I/O is overloaded". These alerts are triggered by high I/O wait inside the VMs; here is the graph of one VM to make it clear:

[Attachment: Schermata da 2021-08-03 18-11-51.png — I/O wait graph of one VM]

You can clearly see the moment we completed the upgrade: the I/O wait spikes are about three times higher than before. The situation is the same on all the other VMs.
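To put numbers on those spikes outside of Zabbix, this is a rough sketch (assuming a Linux guest; the 2-second interval is arbitrary) that samples /proc/stat and prints the iowait percentage, which is essentially the metric behind the "Disk I/O is overloaded" trigger:

Code:
#!/usr/bin/env python3
# Hedged sketch: print the CPU iowait percentage of a Linux guest by sampling
# the aggregate "cpu" line of /proc/stat twice over a short interval.
import time

def cpu_times():
    with open("/proc/stat") as f:
        # First line: "cpu user nice system idle iowait irq softirq steal ..."
        fields = f.readline().split()
    return [int(v) for v in fields[1:]]

a = cpu_times()
time.sleep(2)
b = cpu_times()

deltas = [y - x for x, y in zip(a, b)]
iowait_pct = 100.0 * deltas[4] / sum(deltas)  # index 4 = iowait
print(f"iowait: {iowait_pct:.1f}%")

Running that in a loop during one of the spikes should give values in line with what Zabbix graphs.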

Any idea why? Has anyone had the same problem after the upgrade?

Many thanks
 
