Hi,
I have a standalone Proxmox installation and see very high I/O wait whenever I work in any instance running on the SSD raid.
When I'm working in any VM/LXC that lives on the SSD raid, the guest freezes under the I/O load. It only happens on the SSD raid. It freezes for a few seconds to a few minutes, after which everything works normally again for a while.
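To narrow down where the stall happens while a guest hangs, it may help to watch the pool and the raw disks at the same time (standard ZFS/sysstat tools; the pool name ssd-zfs and the devices sda–sdd match the output below):

```shell
# Per-vdev throughput and latency for the suspect pool, every 2 seconds
zpool iostat -v ssd-zfs 2

# Raw per-disk utilization and average wait times (iostat is in sysstat)
apt install sysstat
iostat -x 2 sda sdb sdc sdd
```

If %util sits near 100 on all four disks while a guest is frozen, the raidz1 vdev itself is saturated rather than one failing member.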
Server Specs (CPU, Memory total/usage):
Code:
~# pveversion -v
proxmox-ve: 6.1-2 (running kernel: 5.0.21-1-pve)
pve-manager: 6.1-3 (running version: 6.1-3/37248ce6)
pve-kernel-5.3: 6.0-12
pve-kernel-helper: 6.0-12
pve-kernel-5.0: 6.0-11
pve-kernel-5.3.10-1-pve: 5.3.10-1
pve-kernel-5.0.21-5-pve: 5.0.21-10
pve-kernel-5.0.21-3-pve: 5.0.21-7
pve-kernel-5.0.21-1-pve: 5.0.21-2
pve-kernel-5.0.15-1-pve: 5.0.15-1
ceph-fuse: 12.2.11+dfsg1-2.1+b1
corosync: 3.0.2-pve4
criu: 3.11-3
glusterfs-client: 5.5-3
ifupdown: 0.8.35+pve1
ifupdown2: residual config
ksm-control-daemon: 1.3-1
libjs-extjs: 6.0.1-10
libknet1: 1.13-pve1
libpve-access-control: 6.0-5
libpve-apiclient-perl: 3.0-2
libpve-common-perl: 6.0-9
libpve-guest-common-perl: 3.0-3
libpve-http-server-perl: 3.0-3
libpve-storage-perl: 6.1-2
libqb0: 1.0.5-1
libspice-server1: 0.14.2-4~pve6+1
lvm2: 2.03.02-pve3
lxc-pve: 3.2.1-1
lxcfs: 3.0.3-pve60
novnc-pve: 1.1.0-1
openvswitch-switch: 2.10.0+2018.08.28+git.8ca7c82b7d+ds1-12+deb10u1
proxmox-mini-journalreader: 1.1-1
proxmox-widget-toolkit: 2.1-1
pve-cluster: 6.1-2
pve-container: 3.0-14
pve-docs: 6.1-3
pve-edk2-firmware: 2.20191002-1
pve-firewall: 4.0-9
pve-firmware: 3.0-4
pve-ha-manager: 3.0-8
pve-i18n: 2.0-3
pve-qemu-kvm: 4.1.1-2
pve-xtermjs: 3.13.2-1
qemu-server: 6.1-2
smartmontools: 7.0-pve2
spiceterm: 3.1-1
vncterm: 1.6-1
zfsutils-linux: 0.8.2-pve2
Code:
~# zpool status
  pool: nvme-zfs
 state: ONLINE
  scan: scrub repaired 0B in 0 days 00:09:01 with 0 errors on Sun Dec  8 00:33:02 2019
config:

        NAME                                                STATE     READ WRITE CKSUM
        nvme-zfs                                            ONLINE       0     0     0
          mirror-0                                          ONLINE       0     0     0
            nvme-SAMSUNG_MZQLB3T8HALS-000AZ_S3VJNF0K700531  ONLINE       0     0     0
            nvme-eui.33564a304b7005300025384600000001       ONLINE       0     0     0

errors: No known data errors

  pool: rpool
 state: ONLINE
  scan: scrub repaired 0B in 0 days 00:02:11 with 0 errors on Sun Dec  8 00:26:13 2019
config:

        NAME                                             STATE     READ WRITE CKSUM
        rpool                                            ONLINE       0     0     0
          mirror-0                                       ONLINE       0     0     0
            ata-HP_SSD_S700_500GB_HBSA39194101315-part3  ONLINE       0     0     0
            ata-HP_SSD_S700_500GB_HBSA39194102188-part3  ONLINE       0     0     0

errors: No known data errors

  pool: ssd-zfs
 state: ONLINE
  scan: scrub repaired 0B in 0 days 00:15:51 with 0 errors on Sun Dec  8 00:39:54 2019
config:

        NAME          STATE     READ WRITE CKSUM
        ssd-zfs       ONLINE       0     0     0
          raidz1-0    ONLINE       0     0     0
            sda       ONLINE       0     0     0
            sdb       ONLINE       0     0     0
            sdc       ONLINE       0     0     0
            sdd       ONLINE       0     0     0

errors: No known data errors
Code:
~# zfs get all ssd-zfs
NAME     PROPERTY              VALUE                  SOURCE
ssd-zfs  type                  filesystem             -
ssd-zfs  creation              Sat Aug 31 15:04 2019  -
ssd-zfs  used                  696G                   -
ssd-zfs  available             4.43T                  -
ssd-zfs  referenced            232K                   -
ssd-zfs  compressratio         1.16x                  -
ssd-zfs  mounted               yes                    -
ssd-zfs  quota                 none                   default
ssd-zfs  reservation           none                   default
ssd-zfs  recordsize            128K                   default
ssd-zfs  mountpoint            /ssd-zfs               default
ssd-zfs  sharenfs              off                    default
ssd-zfs  checksum              on                     default
ssd-zfs  compression           on                     local
ssd-zfs  atime                 on                     default
ssd-zfs  devices               on                     default
ssd-zfs  exec                  on                     default
ssd-zfs  setuid                on                     default
ssd-zfs  readonly              off                    default
ssd-zfs  zoned                 off                    default
ssd-zfs  snapdir               hidden                 default
ssd-zfs  aclinherit            restricted             default
ssd-zfs  createtxg             1                      -
ssd-zfs  canmount              on                     default
ssd-zfs  xattr                 on                     default
ssd-zfs  copies                1                      default
ssd-zfs  version               5                      -
ssd-zfs  utf8only              off                    -
ssd-zfs  normalization         none                   -
ssd-zfs  casesensitivity       sensitive              -
ssd-zfs  vscan                 off                    default
ssd-zfs  nbmand                off                    default
ssd-zfs  sharesmb              off                    default
ssd-zfs  refquota              none                   default
ssd-zfs  refreservation        none                   default
ssd-zfs  guid                  7483453056672396458    -
ssd-zfs  primarycache          all                    default
ssd-zfs  secondarycache        all                    default
ssd-zfs  usedbysnapshots       0B                     -
ssd-zfs  usedbydataset         232K                   -
ssd-zfs  usedbychildren        696G                   -
ssd-zfs  usedbyrefreservation  0B                     -
ssd-zfs  logbias               latency                default
ssd-zfs  objsetid              54                     -
ssd-zfs  dedup                 off                    default
ssd-zfs  mlslabel              none                   default
ssd-zfs  sync                  standard               default
ssd-zfs  dnodesize             legacy                 default
ssd-zfs  refcompressratio      1.00x                  -
ssd-zfs  written               232K                   -
ssd-zfs  logicalused           605G                   -
ssd-zfs  logicalreferenced     71K                    -
ssd-zfs  volmode               default                default
ssd-zfs  filesystem_limit      none                   default
ssd-zfs  snapshot_limit        none                   default
ssd-zfs  filesystem_count      none                   default
ssd-zfs  snapshot_count        none                   default
ssd-zfs  snapdev               hidden                 default
ssd-zfs  acltype               off                    default
ssd-zfs  context               none                   default
ssd-zfs  fscontext             none                   default
ssd-zfs  defcontext            none                   default
ssd-zfs  rootcontext           none                   default
ssd-zfs  relatime              off                    default
ssd-zfs  redundant_metadata    all                    default
ssd-zfs  overlay               off                    default
ssd-zfs  encryption            off                    default
ssd-zfs  keylocation           none                   default
ssd-zfs  keyformat             none                   default
ssd-zfs  pbkdf2iters           0                      default
ssd-zfs  special_small_blocks  0                      default
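One thing that stands out in the property list above is atime=on: every read from the pool then also queues a metadata write. Turning access-time updates off (or relaxing them) is a common low-risk first tweak for VM storage; a sketch using the dataset name from above, to be validated on the host:

```shell
# Stop every read from generating an atime update on ssd-zfs:
zfs set atime=off ssd-zfs

# Alternative, if access times are still wanted occasionally:
# zfs set relatime=on ssd-zfs
```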
Code:
Disk /dev/sda: 1.8 TiB, 2000398934016 bytes, 3907029168 sectors
Disk model: Seagate BarraCud
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: 9C37C950-C19E-4145-9011-49BF38D7424C

Device          Start        End    Sectors  Size Type
/dev/sda1        2048 3907012607 3907010560  1.8T Solaris /usr & Apple ZFS
/dev/sda9  3907012608 3907028991      16384    8M Solaris reserved 1

Disk /dev/sdb: 1.8 TiB, 2000398934016 bytes, 3907029168 sectors
Disk model: Seagate BarraCud
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: 522CA783-B889-A944-95C7-69A0F6723C27

Device          Start        End    Sectors  Size Type
/dev/sdb1        2048 3907012607 3907010560  1.8T Solaris /usr & Apple ZFS
/dev/sdb9  3907012608 3907028991      16384    8M Solaris reserved 1

Disk /dev/sdc: 1.8 TiB, 2000398934016 bytes, 3907029168 sectors
Disk model: Seagate BarraCud
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: F8EA8BCC-32B3-2249-8612-179F28A1C508

Device          Start        End    Sectors  Size Type
/dev/sdc1        2048 3907012607 3907010560  1.8T Solaris /usr & Apple ZFS
/dev/sdc9  3907012608 3907028991      16384    8M Solaris reserved 1

Disk /dev/sdd: 1.8 TiB, 2000398934016 bytes, 3907029168 sectors
Disk model: Seagate BarraCud
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: 8DBEE967-6382-2744-B32C-8954C83A7F2E

Device          Start        End    Sectors  Size Type
/dev/sdd1        2048 3907012607 3907010560  1.8T Solaris /usr & Apple ZFS
/dev/sdd9  3907012608 3907028991      16384    8M Solaris reserved 1
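A side note on the fdisk output: the model column is truncated to "Seagate BarraCud", and the BarraCuda name covers both HDD and SSD product lines, so it may be worth confirming exactly which drives back ssd-zfs:

```shell
# Full model and firmware string for one pool member
smartctl -i /dev/sda

# 1 = rotational (HDD), 0 = non-rotational (SSD)
cat /sys/block/sda/queue/rotational
```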