NVMe Speed slow

PServer

Member
Hi,

I installed Proxmox 6.3 on an HP G8 server and updated all packages after the installation.
1) I bought a Samsung 970 EVO Plus NVMe drive and installed it in this server, but the read and write speed inside the VM is only about 800 MB/s. With VMware ESXi on the same server, read/write is more than 1500 MB/s.

2) The VM network speed in Proxmox is 500 Mb/s, but with VMware the speed in the VM is 1000 Mb/s (tested with www.speedtest.net; both E1000 and VirtIO network cards were tried).

Can you help me resolve this?

VM config (I have only one VM on this server):
Code:
boot: cd
bootdisk: sata0
cores: 4
cpu: host
cpuunits: 1000
memory: 2048
name: test-970
net0: e1000=00:16:3e:27:0b:36,bridge=vmbr0
numa: 0
onboot: 1
sata0: Server140-1TB-NVMe-Plus:vm-1421-dik9zai9amf43fzz-dwhqgseitlsap4au,backup=0,size=55G,ssd=1
sata1: local-lvm:vm-1421-disk-0,size=10G,ssd=1
smbios1: uuid=cb0f4495-0fda-4996-ad9f-767b5a7f2ca9
sockets: 2
vmgenid: aa462941-c59c-4139-9c1c-07f3200ab4f7

Proxmox Info:
Code:
proxmox-ve: 6.3-1 (running kernel: 5.4.78-1-pve)
pve-manager: 6.3-2 (running version: 6.3-2/22f57405)
pve-kernel-5.4: 6.3-2
pve-kernel-helper: 6.3-2
pve-kernel-5.4.78-1-pve: 5.4.78-1
pve-kernel-5.4.73-1-pve: 5.4.73-1
ceph-fuse: 12.2.11+dfsg1-2.1+b1
corosync: 3.0.4-pve1
criu: 3.11-3
glusterfs-client: 5.5-3
ifupdown: 0.8.35+pve1
ksm-control-daemon: 1.3-1
libjs-extjs: 6.0.1-10
libknet1: 1.16-pve1
libproxmox-acme-perl: 1.0.5
libproxmox-backup-qemu0: 1.0.2-1
libpve-access-control: 6.1-3
libpve-apiclient-perl: 3.1-1
libpve-common-perl: 6.3-1
libpve-guest-common-perl: 3.1-3
libpve-http-server-perl: 3.0-6
libpve-storage-perl: 6.3-2
libqb0: 1.0.5-1
libspice-server1: 0.14.2-4~pve6+1
lvm2: 2.03.02-pve4
lxc-pve: 4.0.3-1
lxcfs: 4.0.3-pve3
novnc-pve: 1.1.0-1
proxmox-backup-client: 1.0.5-1
proxmox-mini-journalreader: 1.1-1
proxmox-widget-toolkit: 2.4-3
pve-cluster: 6.2-1
pve-container: 3.3-1
pve-docs: 6.3-1
pve-edk2-firmware: 2.20200531-1
pve-firewall: 4.1-3
pve-firmware: 3.1-3
pve-ha-manager: 3.1-1
pve-i18n: 2.2-2
pve-qemu-kvm: 5.1.0-7
pve-xtermjs: 4.7.0-3
qemu-server: 6.3-1
smartmontools: 7.1-pve2
spiceterm: 3.1-1
vncterm: 1.6-2
zfsutils-linux: 0.8.5-pve1
 
Hi,

I resolved the network speed issue by changing the server power profile to High Performance, but the NVMe speed is still not fixed. Do you have a solution?
When I install Windows, VMware or CentOS directly on this server, the NVMe speed is 3500 MB/s, but with Proxmox it is 800 MB/s.

Thanks
 
Are you testing the NVMe speed inside a VM or on the PVE server itself?

If you test it inside the VM, try to change the disk bus type to SCSI with the Virtio SCSI controller (default).

If you have more than one disk in a VM you can enable IO Thread for that disk (in the advanced section of the disk edit window) and set the SCSI controller to "VirtIO SCSI (single)".

This can help to improve the performance inside a VM.
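For your config, such a change could look roughly like this (same volumes as in your post, just moved from SATA to SCSI with IO Thread enabled, as a sketch only):
Code:
boot: cd
bootdisk: scsi0
scsihw: virtio-scsi-single
scsi0: Server140-1TB-NVMe-Plus:vm-1421-dik9zai9amf43fzz-dwhqgseitlsap4au,backup=0,iothread=1,size=55G,ssd=1
scsi1: local-lvm:vm-1421-disk-0,iothread=1,size=10G,ssd=1
Keep in mind that the guest needs VirtIO drivers to see disks on the VirtIO SCSI controller (on Windows, from the virtio-win ISO).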
 
Hi,

Thanks for your reply.
I tested both on the PVE server and in the VM. On the PVE server I used this command:
Code:
dd if=/dev/zero of=/dev/sda bs=2G count=2 oflag=dsync
and in the VM I tested with CrystalDiskMark.
The speed is the same on the PVE server and in the VM.

** I set nvme_core.default_ps_max_latency_us=0 so APST is disabled, but the speed is still 800 MB/s.
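(For reference, I applied the parameter via the kernel command line, roughly like this, and rebooted:)
Code:
# /etc/default/grub
GRUB_CMDLINE_LINUX_DEFAULT="quiet nvme_core.default_ps_max_latency_us=0"
# then apply and reboot
update-grub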
 
dd if=/dev/zero of=/dev/sda bs=2G count=2 oflag=dsync
That is not a good way of benchmarking. Have a look at the Benchmarking Storage wiki page and take a look at the fio commands.

Benchmarking with different block sizes gives you different metrics. If you select a small block size (e.g. 4k) you benchmark how many IOPS are possible, bandwidth will be low. If you use a large block size (e.g. 1M or 4M) you benchmark bandwidth, IOPS will be low.

With that you will have kind of the worst case scenario for each limiting factor (bandwidth and IOPS).

One last note: setting the runtime higher than in those examples should help to reduce the impact any caches might have, so let the benchmarks run for a few minutes. Setting the runtime to 600 will let the benchmark run for 10 minutes.
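For example, something along these lines (block sizes and runtime as discussed above; adjust the filename to the disk you want to test, and be aware that write tests against a raw device destroy its data):
Code:
# bandwidth: large blocks, sequential read
fio --ioengine=libaio --direct=1 --rw=read --bs=4M --numjobs=1 --iodepth=16 --runtime=600 --time_based --name=seq_read --filename=/dev/nvme0n1
# IOPS: small blocks, random read
fio --ioengine=libaio --direct=1 --rw=randread --bs=4k --numjobs=1 --iodepth=1 --runtime=600 --time_based --name=rand_read --filename=/dev/nvme0n1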
 
Thanks for your reply.
I tested on the PVE server with this command:
Code:
fio --ioengine=libaio --direct=1 --sync=1 --rw=read --bs=1M --numjobs=1 --iodepth=1 --runtime=60 --time_based --name seq_read --filename=/dev/nvme0n1

Result:
Code:
Run status group 0 (all jobs):
   READ: bw=2817MiB/s (2954MB/s), 2817MiB/s-2817MiB/s (2954MB/s-2954MB/s), io=165GiB (177GB), run=60001-60001msec

But the speed inside the VM is still 800 MB/s (measured with CrystalDiskMark).
CrystalDiskMark in a VMware ESXi VM on the same server shows 3500 MB/s.

Thanks,
 
