Hi,
We have a few problems with Proxmox and high I/O. We use ZFS in the following configuration:
Code:
:~# zpool status -v
  pool: rpool
 state: ONLINE
  scan: scrub repaired 0B in 33h43m with 0 errors on Mon Apr 15 10:07:13 2019
config:

        NAME           STATE     READ WRITE CKSUM
        rpool          ONLINE       0     0     0
          raidz2-0     ONLINE       0     0     0
            sda        ONLINE       0     0     0
            sdb        ONLINE       0     0     0
            sdc        ONLINE       0     0     0
            sdd        ONLINE       0     0     0
            sde        ONLINE       0     0     0
            sdf        ONLINE       0     0     0
            sdg        ONLINE       0     0     0
        logs
          nvme0n1p1    ONLINE       0     0     0
        cache
          nvme0n1p2    ONLINE       0     0     0
        spares
          sdh          AVAIL

errors: No known data errors
The HBA is an LSI Logic SAS 9300-8i SGL, and for the ZIL (SLOG) and L2ARC we use an Intel Optane 900P 280GB PCIe card. The HDDs are HGST HUS724020ALS640. The Proxmox version is 5.3-9.
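For context, the two Optane partitions were attached as log and cache devices roughly like this (a sketch only; the partition names are the ones visible in the zpool status above):

Code:
# attach the first Optane partition as SLOG (separate ZIL device)
zpool add rpool log nvme0n1p1
# attach the second Optane partition as L2ARC read cache
zpool add rpool cache nvme0n1p2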
When a guest (Ubuntu 18.04, IBM Domino, btrfs) reads ~50 MB/s from disk, the I/O delay on the node rises to ~30%. This high I/O affects the other guests (they are still functional, but their performance is bad). Is this normal behavior? To me, 50 MB/s is not a high load.
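If it helps to reproduce this, the load can be watched on the node with standard tools while the guest reads, for example (zpool iostat from ZFS, iostat from the sysstat package):

Code:
# per-vdev bandwidth and IOPS of the pool, refreshed every 5 seconds
zpool iostat -v rpool 5
# per-disk utilization and latency on the node
iostat -xm 5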
Maybe you have a solution for my problem.