Slow SSD speed in Proxmox with a powerful Samsung 970 PRO NVMe M.2 (MLC)

Joogser · Member · Jun 30, 2019
Hello, colleagues.

I installed a new, fast, and reliable SSD in my blade server: a Samsung 970 PRO NVMe M.2 (MLC, 3D V-NAND).

[Screenshot: dis.jpg]


I first tested this disk on the same server with Windows Server 2016 installed. The speed test results were excellent, which made a great impression and gave me high hopes for this device.

[Screenshot: 2019-07-14_17-14-03.png]


[Screenshot: wer.jpg]


So I installed Proxmox 5.4-3 (without subscription) on my SataDOM. The NVMe SSD was formatted with the NTFS file system and mounted at /media/nvme0n1. Then, in the Proxmox web interface, I added this SSD as a new Directory storage:

[Screenshot: 1234.jpg]
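
For reference, the host-side setup looked roughly like this (the device path and storage name are my assumptions here; the NTFS partition is mounted via ntfs-3g):

Code:
# mount the NTFS-formatted NVMe and register the path as Directory storage
mkdir -p /media/nvme0n1
mount -t ntfs-3g /dev/nvme0n1p1 /media/nvme0n1
pvesm add dir nvme-dir --path /media/nvme0n1 --content images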


I installed Microsoft Windows 10 LTSB in a VM with the following properties:

[Screenshot: prop.jpg]


[Screenshot: cpu.jpg]


During the Windows installation, I loaded the storage drivers from virtio-win-0.1.171.iso so that the installer could see the hard disk.

After the installation finished, I installed CrystalDiskMark and ran a speed test; the results are below:

[Screenshot: desk.jpg]


-------
I also tried another file system (ext4) and different disk settings: VirtIO SCSI, Write back cache, and the defaults (no cache). I never got more than 1500-1900 MB/s read and write.
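
For reference, the variants I tried looked roughly like this in the VM config (the VMID and storage name here are just placeholders):

Code:
# VirtIO SCSI with writeback cache
qm set 100 --scsi0 nvme-dir:100/vm-100-disk-0.qcow2,cache=writeback
# VirtIO Block with the default cache mode (none)
qm set 100 --virtio0 nvme-dir:100/vm-100-disk-0.qcow2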
-------
Something is going wrong!
Please share your experience: how can I get the full performance out of this SSD?
 

I tested on a Proxmox VE host with a 970 EVO 1 TB formatted as XFS and a 970 EVO Plus 1 TB formatted as ext4.

My VM config:
Code:
# qm config 113
agent: 1
bootdisk: scsi0
cores: 6
cpu: host
ide0: none,media=cdrom
memory: 8192
name: win10
net0: virtio=CE:6F:78:25:40:46,bridge=vmbr0
numa: 0
ostype: win10
scsi0: sandisk-zfs:vm-113-disk-0,cache=writeback,size=32G
scsi1: sam970evoplus:113/vm-113-disk-0.raw,discard=on,size=10G,ssd=1
scsi2: sam970evo:113/vm-113-disk-0.raw,discard=on,size=7G,ssd=1
scsihw: virtio-scsi-pci
smbios1: uuid=2863de33-2886-4689-9da7-7137b3c62d2a
sockets: 1
vmgenid: 2e4c0c64-830b-4de9-b3ec-bdeda627553b

If you want to benchmark, do not use qcow2; just select raw.
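
For example, attaching a new raw-format disk from a directory storage could look like this (the VMID, storage name, and size are placeholders):

Code:
# allocate a fresh 32G disk in raw format, with SSD emulation and discard
qm set 113 --scsi3 nvme-dir:32,format=raw,discard=on,ssd=1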

See my results.
[Screenshot: diskmark-win10-970ssd.png]
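
As a side note (not from my config above, just a common sanity check), you can also benchmark the raw device on the host to take the hypervisor out of the picture, e.g. with fio; the device name is assumed:

Code:
# read-only sequential test against the raw NVMe device on the host
fio --name=seqread --filename=/dev/nvme0n1 --rw=read --bs=1M \
    --ioengine=libaio --iodepth=32 --direct=1 --runtime=30 --time_based \
    --group_reporting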
 

Okay, what about snapshots in this case? I read somewhere that if I need snapshots for a VM, I must use qcow2, because snapshots are not supported (or simply don't work) with the raw disk format.
 
Yes, no snapshots with raw. But you asked why you see such a difference compared to a plain Windows installation, and I explained how to test the same thing in a VM setup.
 
Thanks, that's perfect. So if I want the full performance of the SSD in a VM, I should select the raw format in the hard disk properties for that VM.
Does this apply to any guest OS in Proxmox 5.4-1, i.e. Linux and Windows alike?

Snapshots are not critical in my case; a cron backup will be enough.
 
If you want snapshots, you cannot use raw; go for qcow2, or better, buy two datacenter-class NVMe drives and go for ZFS.

Using cheap consumer/prosumer NVMe will not make you happy in the long run; you get what you pay for.
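
A minimal sketch of what such a two-drive ZFS setup could look like (the device paths and the pool/storage names are only examples):

Code:
# mirrored pool on two NVMe drives, registered as Proxmox VE storage
zpool create -o ashift=12 tank mirror /dev/nvme0n1 /dev/nvme1n1
pvesm add zfspool tank-vm --pool tank --content images,rootdir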
 

Do you mean that if I buy a second NVMe disk, I can set up ZFS, and on the ZFS file system use the qcow2 format with snapshot support at full performance?

Also, please clarify what exactly you mean by datacenter class: do such SSD drives have special parameters for this support? What is the difference between these two classes of device that could affect snapshot support?
 
ZFS supports snapshots and many other cool features. See:
https://pve.proxmox.com/wiki/Storage
https://pve.proxmox.com/wiki/ZFS_on_Linux
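
For example, on a ZFS-backed VM disk a snapshot can be taken with zfs directly, or for the whole VM through Proxmox (the VMID and dataset name are just examples):

Code:
# ZFS-level snapshot and rollback of a single VM disk
zfs snapshot tank/vm-113-disk-0@before-update
zfs rollback tank/vm-113-disk-0@before-update
# or let Proxmox VE snapshot the whole VM
qm snapshot 113 before-update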

The 970 Pro NVMe is not designed to run in a server; it is designed for desktops and workstations. The snapshot capability is the same either way; that depends on how you configure storage in Proxmox VE. The difference is performance and reliability under a server workload.
 
This is useful information, thanks.

I may set up a software RAID0 (stripe) without losing much capacity and create a ZFS file system on it. I guess it'll be ZFS over iSCSI.

And yes, it's true that the 970 Pro is meant for desktop rather than server workloads....
 

  • I added the SSD as storage (Directory) with the NTFS file system.
  • I made a VM from a template with the recommended features, selecting the raw format in the hard disk options.
[Screenshot: ssd_speed_feature.jpg]


I installed Windows 10 LTSB and ran CrystalDiskMark; the SSD speed test results are below:

[Screenshot: ssd_speed.jpg]
 

I set up ZFS as a software RAID0 (though I have some doubt it's really RAID0; I don't know the ZFS nuances and zpool commands yet, so please correct me if I'm wrong).
  • I created two identical partitions, /dev/nvme0n1p1 and /dev/nvme0n1p2:

    [Screenshot: partitions.jpg]


  • Created a zpool named RAIDZ0 (see the verification commands after this list):
    Code:
    zpool create -f -o ashift=12 RAIDZ0 /dev/nvme0n1p1 /dev/nvme0n1p2

    [Screenshot: zfspool.jpg]


    [Screenshot: zpoollist.jpg]


  • Added this ZFS pool as storage on the Proxmox server.

    [Screenshot: instruction.jpg]
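
To verify the layout, the pool can be inspected with the commands below; if both partitions show up as top-level vdevs with no mirror or raidz keyword, the pool is a plain stripe (RAID0-like):

Code:
zpool status RAIDZ0
zpool list -v RAIDZ0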
The results:
[Screenshot: speed_test_ZFS.jpg]


Hmmm!
I can't choose another format for the disk image; by default it goes to raw. Does anybody have an idea why this field is not active?
[Screenshot: qcow-field.jpg]
 

A good start for getting information about the Proxmox VE storage model is the Help button in the bottom left corner of your screen, and:

https://pve.proxmox.com/wiki/Storage
https://www.youtube.com/watch?v=CVCtBw4yS-s (1-6)
https://pve.proxmox.com/wiki/ZFS_on_Linux
https://pve.proxmox.com/wiki/ZFS:_Tips_and_Tricks
https://docs.oracle.com/cd/E23823_01/html/819-5461/gbciq.html => Creating and Destroying ZFS Snapshots
and so on

You know that your RAIDZ0 is the opposite of a "Redundant Array"?
The "0" tells you that there is no mirroring or parity; there is only striping....

regards,
maxprox
 
