Hi there,
I have an issue with ZFS on Proxmox VE 8.1.3.
I have three ZFS pools: one mirror consisting of 2 NVMe SSDs, and two raidz1 pools each consisting of 4 SATA HDDs.
Whenever there's high IO load on one of my HDD pools from a single VM, I get an IO delay of about 60% in the overview and CPU usage of around 5%.
Now the weird thing is that other VMs that run only on the NVMe mirror become completely unusable, even though there is no load on that pool, and the same goes for the other HDD pool.
How can the IO on one pool slow down all other pools?
It also can't be the SATA controller, since the NVMe drives use separate PCIe lanes directly to the CPU.
The server has a 24-core EPYC CPU and 512 GB of memory.
The ARC cache is using half of that, and total memory usage is 360 GB.
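For reference, this is roughly how I'm watching per-pool activity while the stall happens (standard OpenZFS tools, nothing pool-specific):

```shell
# Per-pool and per-vdev bandwidth/ops, refreshed every 5 seconds.
# During the stall, the NVMe mirror shows almost no ops while the
# busy raidz1 pool is saturated.
zpool iostat -v 5

# ARC size, target, and hit-rate summary (ships with zfsutils).
arc_summary
```

The `zpool iostat -v` output is what makes me confident the load really is confined to the one HDD pool.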