LXC and VM Disk IO reporting is completely wrong (mostly 0) on ZFS

YungErrorHunter

At least I'm assuming this is an issue with Proxmox in combination with ZFS. (This doesn't happen on another Proxmox host of mine that uses LVM, but that one is on PVE 8.3.2.)

I have an LXC on my ZFS (striped mirror) pool for VM/LXC data. It's currently running an IO-intensive task, and this is the graph:
[Screenshot from 2025-12-02: the container's Disk IO graph, flat at zero apart from a single spike]

Besides that one spike, it is reporting a flat 0 for both reads and writes. Here is some output of ZFS per-dataset IO stats for the container's disk (one line per second):
Code:
dataset                                         w/s      wMB/s        r/s      rMB/s   wareq-sz   rareq-sz
      subvol-132-disk-0                     1564.27       3.22    4440.71      18.19       2.06       4.10
      subvol-132-disk-0                     1470.16       3.03    4233.56      17.34       2.06       4.10
      subvol-132-disk-0                     1398.29       2.88    5886.14      24.11       2.06       4.10
      subvol-132-disk-0                     1258.37       2.59    2798.25      11.46       2.06       4.10
      subvol-132-disk-0                     1616.32       3.33    1715.72       7.03       2.06       4.10
      subvol-132-disk-0                      702.60       1.44    6973.25      28.56       2.05       4.10
      subvol-132-disk-0                     1812.66       3.74       2.98       0.01       2.06       4.10
      subvol-132-disk-0                      552.12       1.14    8685.70      35.58       2.06       4.10
      subvol-132-disk-0                     2678.85       5.52     766.66       3.14       2.06       4.10
      subvol-132-disk-0                     1284.54       2.64    7928.12      32.47       2.06       4.10
      subvol-132-disk-0                     1591.40       3.28    7532.76      30.85       2.06       4.10
      subvol-132-disk-0                     2875.82       5.93    2117.30       8.67       2.06       4.10
      subvol-132-disk-0                     1085.46       2.24    7706.38      31.57       2.06       4.10
      subvol-132-disk-0                     1754.80       3.62    5608.60      22.97       2.06       4.10
      subvol-132-disk-0                     2191.57       4.52    3067.00      12.56       2.06       4.10
      subvol-132-disk-0                      946.31       1.95    8671.65      35.52       2.06       4.10
      subvol-132-disk-0                     2029.61       4.19       2.98       0.01       2.06       4.10
      subvol-132-disk-0                      776.52       1.60    8683.95      35.57       2.06       4.10
      subvol-132-disk-0                     2084.82       4.30       3.98       0.02       2.06       4.10
      subvol-132-disk-0                      574.14       1.20    8689.74      35.59       2.08       4.10
      subvol-132-disk-0                     2307.29       4.76       3.97       0.02       2.06       4.10
      subvol-132-disk-0                      643.92       1.32    8692.46      35.60       2.06       4.10
      subvol-132-disk-0                     2214.90       4.58       3.98       0.02       2.07       4.10
      subvol-132-disk-0                      758.79       1.56    8684.78      35.57       2.06       4.10
      subvol-132-disk-0                     2272.70       4.69       3.97       0.02       2.06       4.10
      subvol-132-disk-0                      624.46       1.29    8657.14      35.46       2.06       4.10
      subvol-132-disk-0                     2100.96       4.34       6.94       0.03       2.06       4.10
      subvol-132-disk-0                      652.98       1.34    8679.60      35.55       2.06       4.10
      subvol-132-disk-0                     1994.67       4.11       3.98       0.02       2.06       4.10
      subvol-132-disk-0                      784.83       1.62    7899.93      32.36       2.06       4.10
      subvol-132-disk-0                     2089.54       4.31     778.73       3.19       2.06       4.10
      subvol-132-disk-0                      786.54       1.62    8242.71      33.76       2.06       4.10
      subvol-132-disk-0                     1937.60       3.99     455.85       1.87       2.06       4.10
      subvol-132-disk-0                      933.14       1.92    7109.99      29.12       2.06       4.10
      subvol-132-disk-0                     1797.15       3.70    1578.59       6.47       2.06       4.10
      subvol-132-disk-0                     1076.51       2.22    6364.53      26.07       2.06       4.10
      subvol-132-disk-0                     1764.65       3.64    2328.97       9.54       2.06       4.10
      subvol-132-disk-0                     1109.82       2.29    5351.20      21.92       2.06       4.10
      subvol-132-disk-0                     1622.88       3.37    3339.30      13.68       2.08       4.10
      subvol-132-disk-0                     1267.97       2.61    6145.79      25.17       2.06       4.10
      subvol-132-disk-0                     1754.98       3.61    2548.36      10.44       2.06       4.10
      subvol-132-disk-0                     1121.59       2.31    6796.27      27.84       2.06       4.10

As you can see, it is far from zero. What's up with Proxmox reporting pretty much no IO at all here?
This is the LXC's only disk and there are no bind mounts. Unprivileged, default config, nothing special.

This seems to happen to all my LXCs, not just this one. For VMs, writes seem to be reported, but reads are mostly 0 for all my VMs as well.
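
For the VMs, this is roughly how I'd cross-check the raw counters the graph should be drawn from (VMID 100 below is just a placeholder, and the exact field names may differ between PVE versions):
Code:
# What the PVE API currently reports for the VM; diskread/diskwrite are cumulative byte counters
pvesh get /nodes/$(hostname)/qemu/100/status/current | grep -E 'diskread|diskwrite'

# For comparison, QEMU's own per-drive counters: type "info blockstats" at the qm> prompt
qm monitor 100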

Code:
~# pveversion -v
proxmox-ve: 9.1.0 (running kernel: 6.17.2-2-pve)
pve-manager: 9.1.1 (running version: 9.1.1/42db4a6cf33dac83)
proxmox-kernel-helper: 9.0.4
proxmox-kernel-6.17.2-2-pve-signed: 6.17.2-2
proxmox-kernel-6.17: 6.17.2-2
proxmox-kernel-6.17.2-1-pve-signed: 6.17.2-1
proxmox-kernel-6.14.11-4-pve-signed: 6.14.11-4
proxmox-kernel-6.14: 6.14.11-4
proxmox-kernel-6.14.8-2-pve-signed: 6.14.8-2
ceph-fuse: 19.2.3-pve1
corosync: 3.1.9-pve2
criu: 4.1.1-1
frr-pythontools: 10.3.1-1+pve4
ifupdown2: 3.3.0-1+pmx11
intel-microcode: 3.20250812.1~deb13u1
ksm-control-daemon: 1.5-1
libjs-extjs: 7.0.0-5
libproxmox-acme-perl: 1.7.0
libproxmox-backup-qemu0: 2.0.1
libproxmox-rs-perl: 0.4.1
libpve-access-control: 9.0.4
libpve-apiclient-perl: 3.4.2
libpve-cluster-api-perl: 9.0.7
libpve-cluster-perl: 9.0.7
libpve-common-perl: 9.0.15
libpve-guest-common-perl: 6.0.2
libpve-http-server-perl: 6.0.5
libpve-network-perl: 1.2.3
libpve-rs-perl: 0.11.3
libpve-storage-perl: 9.1.0
libspice-server1: 0.15.2-1+b1
lvm2: 2.03.31-2+pmx1
lxc-pve: 6.0.5-3
lxcfs: 6.0.4-pve1
novnc-pve: 1.6.0-3
proxmox-backup-client: 4.1.0-1
proxmox-backup-file-restore: 4.1.0-1
proxmox-backup-restore-image: 1.0.0
proxmox-firewall: 1.2.1
proxmox-kernel-helper: 9.0.4
proxmox-mail-forward: 1.0.2
proxmox-mini-journalreader: 1.6
proxmox-offline-mirror-helper: 0.7.3
proxmox-widget-toolkit: 5.1.2
pve-cluster: 9.0.7
pve-container: 6.0.18
pve-docs: 9.1.1
pve-edk2-firmware: 4.2025.05-2
pve-esxi-import-tools: 1.0.1
pve-firewall: 6.0.4
pve-firmware: 3.17-2
pve-ha-manager: 5.0.8
pve-i18n: 3.6.4
pve-qemu-kvm: 10.1.2-4
pve-xtermjs: 5.5.0-3
qemu-server: 9.1.0
smartmontools: 7.4-pve1
spiceterm: 3.4.1
swtpm: 0.8.0+pve3
vncterm: 1.9.1
zfsutils-linux: 2.3.4-pve1
 
I found this thread: https://forum.proxmox.com/threads/diskio-in-ct-missing.70017/ and the related bug: https://bugzilla.proxmox.com/show_bug.cgi?id=2135 which seem to be the issue I'm experiencing. However, these only talk about LXC; my VMs show similar behaviour, except that the VM graphs do contain writes while reads are always 0.
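
If it is the same underlying problem, it should be visible directly in the container's cgroup IO accounting, which (as far as I understand) is where the container graph gets its numbers from. A rough check, using my CTID 132 on a cgroup v2 layout (the exact cgroup path is an assumption, hence the find):
Code:
# Locate the container's cgroup and dump its io.stat; on a ZFS subvol-backed container
# I'd expect this to be empty or near-zero, matching the flat graph
find /sys/fs/cgroup -path '*132*' -name io.stat 2>/dev/null | xargs -r head -v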

Knowing the I/O load of my VMs and containers is rather important for resource management and planning, and given how easy it is to get the data from /proc/spl/kstat/zfs/... (see the sketch below), I don't see why properly logging and displaying it should be an issue.
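
For reference, this is roughly how the per-dataset numbers above can be pulled straight from the kernel. The pool name "tank" is a placeholder and the field names are what the OpenZFS objset kstats expose on my system, so treat it as a sketch:
Code:
# Print the cumulative per-dataset IO counters for every dataset in the pool "tank".
# Each objset-* file lists dataset_name plus reads/nread and writes/nwritten, which
# is all that would be needed to draw the graph.
for f in /proc/spl/kstat/zfs/tank/objset-*; do
    awk '$1 ~ /^(dataset_name|reads|nread|writes|nwritten)$/ {printf "%-14s %s\n", $1, $3}' "$f"
    echo
done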