DiskIO in CT missing

proxbear

New Member
Apr 25, 2020
When I click "Summary" for a container in the web-admin UI, I see no Disk IO, i.e. both read and write are 0. The filesystem is ZFS, both for the container and for the mount point that is the target of the heavy IO work.

The CT is running a task that is quite demanding on both CPU and IO, so it should normally show a lot of IO. I only see that the CPU is around 50% idle, which I interpret as waiting for IO; but 'top' also shows no IO wait time.

I can only see the heavy IO from the host, both with 'zpool iostat 1' and 'iostat -dx 1'. The CT itself sees no IO, which makes bottleneck analysis quite hard.

Thank you for any insights ;-)!
 
Can you see disk IO for any other container (using dd conv=fsync ...)?
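For reference, a write test along those lines might look like the following (run inside the container; the target path and size here are just placeholder values, adjust them to your setup):

Bash:
# generate ~512 MiB of writes inside the CT; conv=fsync forces the data to be flushed to disk
dd if=/dev/zero of=/root/dd-test.bin bs=1M count=512 conv=fsync
# clean up afterwards
rm /root/dd-test.bin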
 
Thank you for writing! I could reproduce your problem. You can add yourself to the CC list in the bug report if you wish to get notifications about the progress.
 
The graph for my VM is OK.

Measuring IO for containers and virtual machines works differently. For a virtual machine you can just run something like htop on the host and look at the usage of its KVM process. Please see the bug report, too.

But I am not 100% sure you have the same problem, as ZFS & disk IO did not work in PVE 5 either, I think. The initial bug report was for version 5.
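As a rough sketch of where the container numbers come from (this is my interpretation, not official documentation, and the exact cgroup paths below are assumptions that depend on PVE version and cgroup layout): the per-CT values are derived from the container's block IO cgroup counters on the host, which you can inspect directly:

Bash:
# cgroup v1 layout (typical for PVE 6.x); <CTID> is your container ID
cat /sys/fs/cgroup/blkio/lxc/<CTID>/blkio.throttle.io_service_bytes
# cgroup v2 layout (newer setups)
cat /sys/fs/cgroup/lxc/<CTID>/io.stat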
 
I have the same problem. I created my first LXC container for running Graylog (which I had been running in a VM before), and there is no "Disk IO" in the summary tab. Running iostat on the host or in the LXC doesn't look right either: before, I had two zd* devices writing around 300-500 kB/s (one of them was Graylog), and now, after migrating to the LXC, there is only one device writing that much.

Is there some way to check which zd* device corresponds to which zvol?
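One way to map them, just using the standard device links (nothing PVE-specific; the pool/dataset name below is only an example):

Bash:
# the entries under /dev/zvol/<pool>/ are symlinks to the corresponding zd* devices
ls -lR /dev/zvol
# or resolve a single zvol to its zd* device
readlink -f /dev/zvol/rpool/data/vm-100-disk-0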
 
Can confirm missing Disk IO info on Summary of LXC.

proxmox-ve: 6.2-2 (running kernel: 5.4.65-1-pve)
 
Unfortunately, this problem does not seem to be fixed yet. Is there any documentation on how to customize these graphs?

ZFS actually provides all the information needed for the graph:

Code:
# zfs list -o name,objsetid -H -p

But apparently it is not evaluated properly.
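For anyone who wants to peek at the raw numbers: as far as I can tell, OpenZFS 2.x exposes per-dataset IO counters under /proc/spl/kstat/zfs/<pool>/objset-<id>, where <id> is the objsetid in hex. Something like this should show them (the pool and dataset names are just examples, adjust to your setup):

Bash:
pool=rpool
dataset=rpool/data/subvol-100-disk-0   # the CT's dataset
# the objsetid property is printed in decimal, the kstat file name uses hex
objsetid=$(zfs list -H -p -o objsetid "$dataset")
printf -v hexid '0x%x' "$objsetid"
cat "/proc/spl/kstat/zfs/$pool/objset-$hexid"   # contains reads/writes/nread/nwritten counters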
 
Same here:

(screenshot pve02_nolxcIO.png: empty Disk IO graph for the container)

Version:

Code:
root@pve02:~# pveversion
pve-manager/6.3-6/2184247e (running kernel: 5.4.106-1-pve)
 
Hello,

this is still an issue for me as well, and for someone else I know.
ZFS-on-root install; it was already an issue with GRUB boot, and with a fresh UEFI boot it is still not working.

Code:
pveversion -v
proxmox-ve: 6.4-1 (running kernel: 5.4.106-1-pve)
pve-manager: 6.4-6 (running version: 6.4-6/be2fa32c)
pve-kernel-5.4: 6.4-2
pve-kernel-helper: 6.4-2
pve-kernel-5.4.114-1-pve: 5.4.114-1
pve-kernel-5.4.106-1-pve: 5.4.106-1
pve-kernel-5.4.103-1-pve: 5.4.103-1
pve-kernel-5.4.73-1-pve: 5.4.73-1
ceph-fuse: 12.2.11+dfsg1-2.1+b1
corosync: 3.1.2-pve1
criu: 3.11-3
glusterfs-client: 5.5-3
ifupdown: 0.8.35+pve1
ksm-control-daemon: 1.3-1
libjs-extjs: 6.0.1-10
libknet1: 1.20-pve1
libproxmox-acme-perl: 1.1.0
libproxmox-backup-qemu0: 1.0.3-1
libpve-access-control: 6.4-1
libpve-apiclient-perl: 3.1-3
libpve-common-perl: 6.4-3
libpve-guest-common-perl: 3.1-5
libpve-http-server-perl: 3.2-2
libpve-storage-perl: 6.4-1
libqb0: 1.0.5-1
libspice-server1: 0.14.2-4~pve6+1
lvm2: 2.03.02-pve4
lxc-pve: 4.0.6-2
lxcfs: 4.0.6-pve1
novnc-pve: 1.1.0-1
openvswitch-switch: 2.12.3-1
proxmox-backup-client: 1.1.6-2
proxmox-mini-journalreader: 1.1-1
proxmox-widget-toolkit: 2.5-5
pve-cluster: 6.4-1
pve-container: 3.3-5
pve-docs: 6.4-2
pve-edk2-firmware: 2.20200531-1
pve-firewall: 4.1-3
pve-firmware: 3.2-3
pve-ha-manager: 3.1-1
pve-i18n: 2.3-1
pve-qemu-kvm: 5.2.0-6
pve-xtermjs: 4.7.0-3
qemu-server: 6.4-2
smartmontools: 7.2-pve2
spiceterm: 3.1-1
vncterm: 1.6-2
zfsutils-linux: 2.0.4-pve1
 
Still a problem today, any news? ZFS-on-root install.

Bash:
proxmox-ve: 6.4-1 (running kernel: 5.4.73-1-pve)
pve-manager: 6.4-13 (running version: 6.4-13/9f411e79)
pve-kernel-5.4: 6.4-4
pve-kernel-helper: 6.4-4
pve-kernel-5.4.124-1-pve: 5.4.124-2
pve-kernel-5.4.119-1-pve: 5.4.119-1
pve-kernel-5.4.114-1-pve: 5.4.114-1
pve-kernel-5.4.106-1-pve: 5.4.106-1
pve-kernel-5.4.78-2-pve: 5.4.78-2
pve-kernel-5.4.73-1-pve: 5.4.73-1
ceph-fuse: 12.2.11+dfsg1-2.1+b1
corosync: 3.1.2-pve1
criu: 3.11-3
glusterfs-client: 5.5-3
ifupdown: 0.8.35+pve1
ksm-control-daemon: 1.3-1
libjs-extjs: 6.0.1-10
libknet1: 1.20-pve1
libproxmox-acme-perl: 1.1.0
libproxmox-backup-qemu0: 1.1.0-1
libpve-access-control: 6.4-3
libpve-apiclient-perl: 3.1-3
libpve-common-perl: 6.4-3
libpve-guest-common-perl: 3.1-5
libpve-http-server-perl: 3.2-3
libpve-storage-perl: 6.4-1
libqb0: 1.0.5-1
libspice-server1: 0.14.2-4~pve6+1
lvm2: 2.03.02-pve4
lxc-pve: 4.0.6-2
lxcfs: 4.0.6-pve1
novnc-pve: 1.1.0-1
proxmox-backup-client: 1.1.12-1
proxmox-mini-journalreader: 1.1-1
proxmox-widget-toolkit: 2.6-1
pve-cluster: 6.4-1
pve-container: 3.3-6
pve-docs: 6.4-2
pve-edk2-firmware: 2.20200531-1
pve-firewall: 4.1-4
pve-firmware: 3.2-4
pve-ha-manager: 3.1-1
pve-i18n: 2.3-1
pve-qemu-kvm: 5.2.0-6
pve-xtermjs: 4.7.0-3
qemu-server: 6.4-2
smartmontools: 7.2-pve2
spiceterm: 3.1-1
vncterm: 1.6-2
zfsutils-linux: 2.0.4-pve1
 
I have the same problem on 6.4, but only on ZFS. My setup:

Dell r630
H730 with 4x raid1 (8 disks - HDDs and SSDs) - lvm thin - VMs/Containers created on those have valid disk usage statistics
GIGABYTE 4x M.2 to PCIe adapter with 4 disks - 2x ZFS mirror with 2 disks each - the statistics are always zero

I have also noticed that when I open Disks > ZFS > Details for one of the pools, the Read and Write columns are always 0.


which is strange, because when I run the command
Code:
zpool iostat 1
I can see the usage:

Bash:
              capacity     operations     bandwidth
pool        alloc   free   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
nvme1       36.4G   202G      0     59  25.6K  1.48M
nvme2       3.54M   238G      0      0      6  3.86K
----------  -----  -----  -----  -----  -----  -----
nvme1       36.4G   202G      0      7      0   224K
nvme2       3.54M   238G      0      0      0      0
----------  -----  -----  -----  -----  -----  -----
nvme1       36.4G   202G      0    254      0  3.39M
nvme2       3.54M   238G      0      0      0      0
----------  -----  -----  -----  -----  -----  -----
 
Same for me.
Kernel Version: Linux 5.4.34-1-pve #1 SMP PVE 5.4.34-2
PVE Manager Version: pve-manager/6.2-4
Using ZFS.
 
Still not working for me.
VMs are properly showing disk IO, LXC is not.

proxmox-ve: 7.0-2 (running kernel: 5.11.22-4-pve)
pve-manager: 7.0-13 (running version: 7.0-13/7aa7e488)
pve-kernel-helper: 7.1-2
pve-kernel-5.11: 7.0-8
pve-kernel-5.4: 6.4-4
pve-kernel-5.11.22-5-pve: 5.11.22-10
pve-kernel-5.11.22-4-pve: 5.11.22-9
pve-kernel-5.11.22-3-pve: 5.11.22-7
pve-kernel-5.11.22-2-pve: 5.11.22-4
pve-kernel-5.11.22-1-pve: 5.11.22-2
pve-kernel-5.4.124-1-pve: 5.4.124-1
pve-kernel-5.4.119-1-pve: 5.4.119-1
pve-kernel-5.4.114-1-pve: 5.4.114-1
pve-kernel-5.4.106-1-pve: 5.4.106-1
pve-kernel-5.4.103-1-pve: 5.4.103-1
pve-kernel-5.4.73-1-pve: 5.4.73-1
ceph-fuse: 14.2.21-1
corosync: 3.1.5-pve1
criu: 3.15-1+pve-1
glusterfs-client: 9.2-1
ifupdown: residual config
ifupdown2: 3.1.0-1+pmx3
ksm-control-daemon: 1.4-1
libjs-extjs: 7.0.0-1
libknet1: 1.22-pve1
libproxmox-acme-perl: 1.4.0
libproxmox-backup-qemu0: 1.2.0-1
libpve-access-control: 7.0-5
libpve-apiclient-perl: 3.2-1
libpve-common-perl: 7.0-10
libpve-guest-common-perl: 4.0-2
libpve-http-server-perl: 4.0-3
libpve-storage-perl: 7.0-12
libqb0: 1.0.5-1
libspice-server1: 0.14.3-2.1
lvm2: 2.03.11-2.1
lxc-pve: 4.0.9-4
lxcfs: 4.0.8-pve2
novnc-pve: 1.2.0-3
proxmox-backup-client: 2.0.11-1
proxmox-backup-file-restore: 2.0.11-1
proxmox-mini-journalreader: 1.2-1
proxmox-widget-toolkit: 3.3-6
pve-cluster: 7.0-3
pve-container: 4.1-1
pve-docs: 7.0-5
pve-edk2-firmware: 3.20210831-1
pve-firewall: 4.2-4
pve-firmware: 3.3-2
pve-ha-manager: 3.3-1
pve-i18n: 2.5-1
pve-qemu-kvm: 6.0.0-4
pve-xtermjs: 4.12.0-1
qemu-server: 7.0-16
smartmontools: 7.2-pve2
spiceterm: 3.2-2
vncterm: 1.7-1
zfsutils-linux: 2.0.5-pve1
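In case it helps with narrowing things down, one can also check whether the raw RRD data behind the graph is really all zeros, e.g. via the API (the node name and CT ID below are placeholders):

Bash:
# dump the last hour of statistics for CT 100 on node pve01 and look at the diskread/diskwrite columns
pvesh get /nodes/pve01/lxc/100/rrddata --timeframe hour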
 
