Question on NVMe issues

ihsteven

New Member
Mar 1, 2021
We're doing some testing on a Samsung NVMe drive and having a very hard time figuring out why we're not getting better speeds inside a VM, when the same host node it runs on gives us solid results. We're using the same hardware for all of the testing. We've tested against VMware and within Proxmox itself. Even the Proxmox-to-Proxmox differences (host node vs. VM) are huge.

MOBO: Supermicro X10DRD-iNT
CPU: 2 x E5-2620 V3
NVMe: Single Samsung 970 Evo Plus (rated 3,500/3,300 MB/s read/write) installed in a PCIe x4 slot on the motherboard
Proxmox Version: 6.3-3
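
(Side note: one way to rule out a link problem is to confirm the drive is actually negotiating PCIe x4. A quick sketch, with 01:00.0 as a placeholder for the drive's actual PCI address:

# find the NVMe controller's PCI address
lspci -nn | grep -i 'Non-Volatile memory'
# compare the negotiated link (LnkSta) against the capability (LnkCap)
lspci -s 01:00.0 -vv | grep -E 'LnkCap|LnkSta'

If LnkSta shows fewer lanes or a lower speed than LnkCap, the slot is the bottleneck rather than the hypervisor. In our case the host-node numbers below suggest the link itself is fine.)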

On the host node itself:
fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 --name=test --filename=random_read_write.fio --bs=4k --iodepth=64 --size=4G --readwrite=randrw --rwmixread=75
test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=64
fio-3.12
Starting 1 process
test: Laying out IO file (1 file / 4096MiB)
Jobs: 1 (f=1): [m(1)][100.0%][r=691MiB/s,w=232MiB/s][r=177k,w=59.3k IOPS][eta 00m:00s]
test: (groupid=0, jobs=1): err= 0: pid=6155: Mon Mar 1 10:43:10 2021
read: IOPS=176k, BW=689MiB/s (723MB/s)(3070MiB/4453msec)
bw ( KiB/s): min=700544, max=710048, per=100.00%, avg=706399.00, stdev=3653.59, samples=8
iops : min=175136, max=177512, avg=176599.75, stdev=913.40, samples=8
write: IOPS=58.0k, BW=230MiB/s (242MB/s)(1026MiB/4453msec); 0 zone resets
bw ( KiB/s): min=233384, max=239848, per=100.00%, avg=236354.00, stdev=2463.09, samples=8
iops : min=58346, max=59962, avg=59088.50, stdev=615.77, samples=8
cpu : usr=35.71%, sys=62.04%, ctx=5056, majf=0, minf=10
IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
issued rwts: total=785920,262656,0,0 short=0,0,0,0 dropped=0,0,0,0
latency : target=0, window=0, percentile=100.00%, depth=64

Run status group 0 (all jobs):
READ: bw=689MiB/s (723MB/s), 689MiB/s-689MiB/s (723MB/s-723MB/s), io=3070MiB (3219MB), run=4453-4453msec
WRITE: bw=230MiB/s (242MB/s), 230MiB/s-230MiB/s (242MB/s-242MB/s), io=1026MiB (1076MB), run=4453-4453msec

Disk stats (read/write):
nvme0n1: ios=775953/259335, merge=0/0, ticks=150220/1918, in_queue=456, util=97.84%


But when running the same command in the VM, we get this:
fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 --name=test --filename=random_read_write.fio --bs=4k --iodepth=64 --size=4G --readwrite=randrw --rwmixread=75
test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=64
fio-3.7
Starting 1 process
Jobs: 1 (f=1): [m(1)][100.0%][r=211MiB/s,w=69.9MiB/s][r=54.1k,w=17.9k IOPS][eta 00m:00s]
test: (groupid=0, jobs=1): err= 0: pid=10987: Mon Mar 1 12:45:21 2021
read: IOPS=52.6k, BW=206MiB/s (216MB/s)(3070MiB/14937msec)
bw ( KiB/s): min=191040, max=230160, per=99.83%, avg=210104.55, stdev=12861.96, samples=29
iops : min=47760, max=57540, avg=52526.14, stdev=3215.48, samples=29
write: IOPS=17.6k, BW=68.7MiB/s (72.0MB/s)(1026MiB/14937msec)
bw ( KiB/s): min=63560, max=76392, per=99.86%, avg=70240.00, stdev=4240.09, samples=29
iops : min=15890, max=19098, avg=17560.00, stdev=1060.02, samples=29
cpu : usr=23.14%, sys=76.57%, ctx=216, majf=0, minf=28
IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
issued rwts: total=785920,262656,0,0 short=0,0,0,0 dropped=0,0,0,0
latency : target=0, window=0, percentile=100.00%, depth=64

Run status group 0 (all jobs):
READ: bw=206MiB/s (216MB/s), 206MiB/s-206MiB/s (216MB/s-216MB/s), io=3070MiB (3219MB), run=14937-14937msec
WRITE: bw=68.7MiB/s (72.0MB/s), 68.7MiB/s-68.7MiB/s (72.0MB/s-72.0MB/s), io=1026MiB (1076MB), run=14937-14937msec

Disk stats (read/write):
sda: ios=783716/261944, merge=0/2, ticks=247599/47111, in_queue=294468, util=99.35%

We are at a complete loss. We've tried changing the SCSI controllers and just about everything else (roughly as in the sketch below), but we're still getting about 1/3 of the host node's performance, so something seems off here...
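
For reference, this is the kind of controller and disk-option switching we've been trying (a sketch; VMID 100 and the volume name are placeholders for our actual VM):

# switch the VM to the single virtio-scsi controller (dedicated controller per disk)
qm set 100 --scsihw virtio-scsi-single
# reattach the disk with an I/O thread, no host page cache, and native AIO
qm set 100 --scsi0 local-lvm:vm-100-disk-0,iothread=1,cache=none,aio=native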

Thoughts and help would be greatly appreciated.

-Steven
 
What block sizes are you using? You will always lose some performance to virtualization and to the padding overhead caused by mixed block sizes. Virtio SCSI, for example, presents 512B sectors.
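
If you want to see what the guest and the host actually expose, a quick check (device names sda and nvme0n1 taken from your outputs above):

# in the VM: logical and physical sector size of the virtual disk
lsblk -o NAME,LOG-SEC,PHY-SEC /dev/sda
# on the host: the same for the NVMe drive
lsblk -o NAME,LOG-SEC,PHY-SEC /dev/nvme0n1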
 
What block sizes are you using? You will always lose some performance to virtualization and to the padding overhead caused by mixed block sizes. Virtio SCSI, for example, presents 512B sectors.
That's quite a bit of performance loss. Is this expected?
 

About

The Proxmox community has been around for many years and offers help and support for Proxmox VE, Proxmox Backup Server, and Proxmox Mail Gateway.
We think our community is one of the best thanks to people like you!

Get your subscription!

The Proxmox team works very hard to make sure you are running the best software and getting stable updates and security enhancements, as well as quick enterprise support. Tens of thousands of happy customers have a Proxmox subscription. Get yours easily in our online shop.

Buy now!