Hi,
I just deployed a Ryzen 5600-based server with 64 GB of DDR4-3200 ECC RAM and two Samsung PM9A3 U.2 960 GB datacenter NVMe drives.
The NVMes are connected through a PCIe-to-SFF-8643 (U.2) adapter.
When running a benchmark script (yabs.sh), I noticed that the NVMe performance was much slower than expected, especially considering these are enterprise drives.
Code:
fio Disk Speed Tests (Mixed R/W 50/50) (Partition rpool/ROOT/pve-1):
---------------------------------
Block Size | 4k            (IOPS) | 64k           (IOPS)
  ------   | ---            ----  | ----           ----
Read       | 26.57 MB/s    (6.6k) | 467.19 MB/s   (7.2k)
Write      | 26.60 MB/s    (6.6k) | 469.65 MB/s   (7.3k)
Total      | 53.17 MB/s   (13.2k) | 936.85 MB/s  (14.6k)
           |                      |
Block Size | 512k          (IOPS) | 1m            (IOPS)
  ------   | ---            ----  | ----           ----
Read       | 1.37 GB/s     (2.6k) | 781.20 MB/s    (762)
Write      | 1.45 GB/s     (2.8k) | 833.23 MB/s    (813)
Total      | 2.82 GB/s     (5.5k) | 1.61 GB/s     (1.5k)
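Since yabs.sh runs against the ZFS root dataset here, my next step would be to take ZFS out of the picture and test a drive directly with fio. A minimal sketch of what I have in mind, assuming one of the PM9A3s shows up as /dev/nvme0n1 (read-only, so it shouldn't touch the pool):

Code:
# Hypothetical device name; check with "nvme list" or "lsblk" first.
# --readonly blocks accidental writes; 4k random reads at high queue depth.
fio --name=raw4k --filename=/dev/nvme0n1 --readonly --rw=randread \
    --bs=4k --iodepth=64 --numjobs=4 --direct=1 --ioengine=libaio \
    --runtime=30 --time_based --group_reporting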
Just for reference, here is a similar system with a 5800X, 128 GB RAM, and 2x PM9A3 1.92 TB:
Code:
fio Disk Speed Tests (Mixed R/W 50/50) (Partition rpool/ROOT/pve-1):
---------------------------------
Block Size | 4k            (IOPS) | 64k           (IOPS)
  ------   | ---            ----  | ----           ----
Read       | 336.44 MB/s  (84.1k) | 2.45 GB/s    (38.3k)
Write      | 337.33 MB/s  (84.3k) | 2.46 GB/s    (38.5k)
Total      | 673.78 MB/s (168.4k) | 4.92 GB/s    (76.9k)
           |                      |
Block Size | 512k          (IOPS) | 1m            (IOPS)
  ------   | ---            ----  | ----           ----
Read       | 2.81 GB/s     (5.4k) | 2.98 GB/s     (2.9k)
Write      | 2.96 GB/s     (5.7k) | 3.18 GB/s     (3.1k)
Total      | 5.77 GB/s    (11.2k) | 6.16 GB/s     (6.0k)
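Before blaming ZFS, I also want to rule out the SFF-8643 adapter negotiating a degraded PCIe link on the slow box. A quick check I'd run (assuming the PM9A3s are the only class-0108 NVMe devices; root is needed for the link capability fields):

Code:
# Show negotiated link state for every NVMe controller.
# LnkSta should match LnkCap (16GT/s x4 for PCIe 4.0 drives like these);
# a lower speed or width would point at the adapter, cabling, or slot.
for d in $(lspci -d ::0108 | awk '{print $1}'); do
    echo "== $d =="
    lspci -vv -s "$d" | grep -E 'LnkCap:|LnkSta:'
done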
Anything I'm missing here? Here is the pveversion -v output for reference:
Code:
proxmox-ve: 8.1.0 (running kernel: 6.5.13-3-pve)
pve-manager: 8.1.5 (running version: 8.1.5/60e01c6ac2325b3f)
proxmox-kernel-helper: 8.1.0
proxmox-kernel-6.5.13-3-pve-signed: 6.5.13-3
proxmox-kernel-6.5: 6.5.13-3
proxmox-kernel-6.5.11-4-pve-signed: 6.5.11-4
ceph-fuse: 17.2.7-pve1
corosync: 3.1.7-pve3
criu: 3.17.1-2
glusterfs-client: 10.3-5
ifupdown2: 3.2.0-1+pmx8
ksm-control-daemon: 1.4-1
libjs-extjs: 7.0.0-4
libknet1: 1.28-pve1
libproxmox-acme-perl: 1.5.0
libproxmox-backup-qemu0: 1.4.1
libproxmox-rs-perl: 0.3.3
libpve-access-control: 8.1.2
libpve-apiclient-perl: 3.3.1
libpve-cluster-api-perl: 8.0.5
libpve-cluster-perl: 8.0.5
libpve-common-perl: 8.1.1
libpve-guest-common-perl: 5.0.6
libpve-http-server-perl: 5.0.5
libpve-network-perl: 0.9.5
libpve-rs-perl: 0.8.8
libpve-storage-perl: 8.1.0
libspice-server1: 0.15.1-1
lvm2: 2.03.16-2
lxc-pve: 5.0.2-4
lxcfs: 5.0.3-pve4
novnc-pve: 1.4.0-3
proxmox-backup-client: 3.1.4-1
proxmox-backup-file-restore: 3.1.4-1
proxmox-kernel-helper: 8.1.0
proxmox-mail-forward: 0.2.3
proxmox-mini-journalreader: 1.4.0
proxmox-offline-mirror-helper: 0.6.5
proxmox-widget-toolkit: 4.1.4
pve-cluster: 8.0.5
pve-container: 5.0.9
pve-docs: 8.1.4
pve-edk2-firmware: 4.2023.08-4
pve-firewall: 5.0.3
pve-firmware: 3.9-2
pve-ha-manager: 4.0.3
pve-i18n: 3.2.1
pve-qemu-kvm: 8.1.5-4
pve-xtermjs: 5.3.0-3
qemu-server: 8.1.1
smartmontools: 7.3-pve1
spiceterm: 3.3.0
swtpm: 0.8.0+pve1
vncterm: 1.8.0
zfsutils-linux: 2.2.3-pve1
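And for completeness, these are the pool/dataset settings I'd compare between the two machines, since both benchmarks go through ZFS (dataset name taken from the fio header above):

Code:
# ashift=12 (4K sectors) is what you'd normally want on these NVMes;
# ashift=9 would badly hurt small-block performance.
zpool get ashift rpool

# Dataset tunables that directly shape the fio numbers.
zfs get recordsize,compression,atime,sync rpool/ROOT/pve-1

# Make sure both pools are built the same way (mirror vs. single disk,
# whole drives vs. partitions).
zpool status rpool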