Hello!
I have recently started working with Proxmox VE and I'm truly loving it. The Web UI is wonderful to use and very intuitive.
I have a very simple machine with a Xeon E3-1240v2 and 16GB of RAM, and the whole system runs on a ZFS rpool with two mirrored SSDs. The two SSDs are of different makes, but I tested them extensively before installing the system and their performance was satisfactory (~500 MB/s on one and ~350 MB/s on the other). The pool does not show any issues; the disks are fairly new and have worked more than fine as standalone drives in other PCs. They are attached directly to the motherboard's SATA ports.
However, even on the Proxmox host itself, I'm getting terrible I/O performance, to the point where everything is almost unusable.
I tried writing /dev/zero to a temporary file, and the throughput fluctuates wildly - to describe it better, it sits at ~10 MB/s with occasional spikes up to ~200 MB/s. Those ~10 MB/s are shared among all VMs: if I run the same test on two VMs at the same time, each gets less than 5 MB/s.
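For reference, the write test was roughly along these lines (the file name, block size and count here are placeholders; I didn't save the exact command). /tmp lives on rpool/ROOT/pve-1, so this writes straight to the mirrored pool.
Code:
root@pve:~# dd if=/dev/zero of=/tmp/ddtest.bin bs=1M count=2048 status=progress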
I'm pasting some info below that I think might help investigate the issue, but please tell me if more information is needed. I'm learning Proxmox VE and I'm sure I have done something wrong; I just can't figure out where - especially because in the first few weeks of uptime the machine was perfectly fine, and this issue slowly arose over time (I'm about 5 months in). Thank you so much in advance!
PS: This machine is in a cluster with another, very low-performance one that I just use as a DNS and test machine. That one runs on slow old HDDs and has a 2-core CPU, although I don't see why it would keep the main one from performing at its best.
Funnily enough, while the main machine's "I/O wait" varies between 5% and 20%, the slower one always sits at around 0-1%.
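(Those I/O wait figures are read off the node summary graph in the Web UI; roughly the same numbers show up on the CLI in the "wa" column of vmstat - shown here only to clarify how I'm reading them.)
Code:
# "wa" = percentage of CPU time spent waiting on I/O
root@pve:~# vmstat 1 5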
Code:
root@pve:~# pveperf
CPU BOGOMIPS: 54400.72
REGEX/SECOND: 2221915
HD SIZE: 187.58 GB (rpool/ROOT/pve-1)
FSYNCS/SECOND: 336.98
DNS EXT: 47.06 ms
DNS INT: 60.54 ms (redacted.net)
Code:
root@pve:~# uname -a
Linux pve 5.13.19-6-pve #1 SMP PVE 5.13.19-14 (Thu, 10 Mar 2022 16:24:52 +0100) x86_64 GNU/Linux
Code:
root@pve:~# zpool status
  pool: rpool
 state: ONLINE
  scan: scrub repaired 0B in 00:03:21 with 0 errors on Sun Apr 10 00:27:22 2022
config:

        NAME                                                   STATE     READ WRITE CKSUM
        rpool                                                  ONLINE       0     0     0
          mirror-0                                             ONLINE       0     0     0
            ata-KINGSTON_SUV400S37240G_50026B7675033560-part3  ONLINE       0     0     0
            ata-SSD_240GB_YS202010331654AA-part3               ONLINE       0     0     0

errors: No known data errors
root@pve:~#
Code:
root@pve:~# pvesm status
Name             Type     Status           Total            Used       Available        %
local             dir     active       196689024         5828992       190860032    2.96%
local-zfs     zfspool     active       219670420        28810380       190860040   13.12%
root@pve:~#
Thank you again!