[SOLVED] going for max speed with proxmox 7; how to do it?

Dunuin

Famous Member
Jun 30, 2020
3,111
664
113
Germany
They are Intel DC S3700 100GB SATA drives (SSDSC2BA100G3) from 2012: writes up to 200 MB/s, reads up to 500 MB/s.
Bought 13 of them in different sizes.
 

Dunuin

Famous Member
Jun 30, 2020
3,111
664
113
Germany
If you want a really fast SSD you should still look at NVMe SSDs, not slow SATA. I just wanted to show that a cheap $12 SATA enterprise SSD beats your six M.2 SSDs (for $1,100) at server workloads like sync IOPS, and is still more durable and will live longer.

A modern, fast equivalent would be, for example, the Intel D7-P5600 in U.2 format. But these aren't cheap.

Samsung also offers enterprise U.2 drives like the PM9A3 or the PM1733.

Enterprise SSDs usually use U.2 as the connector (with a 2.5" form factor) instead of M.2, because M.2 is a poor form factor made for laptops: there simply isn't enough space for components like additional spare NAND chips (to increase life expectancy) or capacitors for power-loss protection. But U.2 also uses NVMe, so it's possible to buy M.2-to-U.2 adapters:

[Image: U.2 to M.2 adapter]
 

diversity

Member
Feb 19, 2020
118
4
18
51
Results for a 6-way mirror are in:

Code:
zpool status
  pool: rpool
 state: ONLINE
config:

        NAME                                                 STATE     READ WRITE CKSUM
        rpool                                                ONLINE       0     0     0
          mirror-0                                           ONLINE       0     0     0
            nvme-eui.002538b9015063be-part3                  ONLINE       0     0     0
            nvme-eui.002538b9015063e1-part3                  ONLINE       0     0     0
            nvme-eui.e8238fa6bf530001001b448b492a6acc-part3  ONLINE       0     0     0
            nvme-eui.e8238fa6bf530001001b448b49df8c91-part3  ONLINE       0     0     0
            nvme-eui.e8238fa6bf530001001b448b49df8f93-part3  ONLINE       0     0     0
            nvme-eui.e8238fa6bf530001001b448b49df19b5-part3  ONLINE       0     0     0

errors: No known data errors
root@pve-trx:~# pveperf
CPU BOGOMIPS:      742467.84
REGEX/SECOND:      3958451
HD SIZE:           899.00 GB (rpool/ROOT/pve-1)
FSYNCS/SECOND:     523.78

Could the abysmal speed be due to two of the disks sitting in the slow M.2 motherboard slots, with the rest being bogged down to match that speed?
 

diversity

Member
Feb 19, 2020
118
4
18
51
Currently I get this:

Code:
pveperf
CPU BOGOMIPS:      742462.72
REGEX/SECOND:      4060522
HD SIZE:           899.00 GB (rpool/ROOT/pve-1)
FSYNCS/SECOND:     532.78

The OS is running on two Samsung 980 Pro SSDs in the slow M.2 sockets on the motherboard. Not sure why I am getting double the speed now compared to before.

Code:
zpool status
  pool: rpool
 state: ONLINE
config:

        NAME                                 STATE     READ WRITE CKSUM
        rpool                                ONLINE       0     0     0
          mirror-0                           ONLINE       0     0     0
            nvme-eui.002538b9015063be-part3  ONLINE       0     0     0
            nvme-eui.002538b9015063e1-part3  ONLINE       0     0     0

errors: No known data errors

  pool: vmpool
 state: ONLINE
config:

        NAME                                      STATE     READ WRITE CKSUM
        vmpool                                    ONLINE       0     0     0
          mirror-0                                ONLINE       0     0     0
            nvme-WDS100T1X0E-00AFY0_204540802523  ONLINE       0     0     0
            nvme-WDS100T1X0E-00AFY0_204540802590  ONLINE       0     0     0
          mirror-1                                ONLINE       0     0     0
            nvme-WDS100T1X0E-00AFY0_20465F800961  ONLINE       0     0     0
            nvme-WDS100T1X0E-00AFY0_204540802025  ONLINE       0     0     0

Since pveperf only checks the speed of the rpool, how can I run something similar on vmpool?
 

wigor

Member
Dec 5, 2019
40
5
8
You can pass the path:

Code:
root@pve13:~# man pveperf
PVEPERF(1)                                                                                             Proxmox VE Documentation                                                                                             PVEPERF(1)

NAME
       pveperf - the Proxmox benchmark

SYNOPSIS
       pveperf [PATH]
 

Dunuin

Famous Member
Jun 30, 2020
3,111
664
113
Germany
A 6-way mirror gives you 6x read speed but only 1x write speed. Three striped mirrors (using the same six disks) give you 3x write speed and 6x read speed.
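As a sketch of that layout (the pool name and device paths are placeholders, not taken from this thread), the same six disks arranged as three striped 2-way mirrors would be created roughly like this:

Code:
```shell
# Hypothetical RAID10-style layout: ZFS stripes across three 2-way mirrors,
# so writes spread over 3 vdevs while reads can use all 6 disks.
# "fastpool" and the nvme-diskN paths are placeholder names.
zpool create fastpool \
    mirror /dev/disk/by-id/nvme-disk1 /dev/disk/by-id/nvme-disk2 \
    mirror /dev/disk/by-id/nvme-disk3 /dev/disk/by-id/nvme-disk4 \
    mirror /dev/disk/by-id/nvme-disk5 /dev/disk/by-id/nvme-disk6
```

Listing multiple `mirror` groups in one `zpool create` is what makes ZFS stripe across them.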

You can also try fio for such stuff. Something like this:

Code:
fio --ioengine=psync --filename=/Pool/test.file --size=10G --time_based --name=psync_random_write --runtime=600 --direct=1 --sync=1 --iodepth=1 --rw=randwrite --bs=4k --numjobs=1
 

diversity

Member
Feb 19, 2020
118
4
18
51
!! WOW !!

Code:
pveperf /vmpool
CPU BOGOMIPS:      742462.72
REGEX/SECOND:      4013371
HD SIZE:           1725.80 GB (vmpool)
FSYNCS/SECOND:     3501.27

3501.27 sweeeeeeeeeeeeeeeeeeeeeeet
 

diversity

Member
Feb 19, 2020
118
4
18
51
Thanks all for the in-depth explanations, patience, and whatnot.

I am ecstatic at the moment.

I am considering getting an additional two WD Blacks for an extra striped mirror, if you think I could get even more speed with that.
 

diversity

Member
Feb 19, 2020
118
4
18
51
I'm just wondering.... have you tested and got GPU passthrough running successfully on multiple VM's before?
@bobmc, can you please elaborate? I.e., did you mean SR-IOV? Multiple VMs on the same host with the same card passed through but not powered on at the same time? Or, more generally, multiple successes over several Proxmox setups?
 

Dunuin

Famous Member
Jun 30, 2020
3,111
664
113
Germany
No GeForce can do SR-IOV, so I think he just means one GPU for each VM.
I haven't tried it myself yet, but I have heard that the GeForce driver might lock the GPU if it finds multiple GPUs that are not controlled by the same OS, so you can't use cheap GeForce cards for big virtualization servers and need to buy Tesla cards instead.
 

diversity

Member
Feb 19, 2020
118
4
18
51
I can confirm the following;

GPU passthrough with the gear I have, using Proxmox's underlying virtualization stack (I believe it is KVM/QEMU), used to be non-trivial.
With the 5.11 kernel and whatnot that you get with Proxmox 7, things got a whole lot easier.

Also, contrary to the documentation at
https://pve.proxmox.com/wiki/Pci_passthrough
("NOTE: A PCI device can only ever be attached to a single VM."),
one can have multiple VMs make use of the same PCIe device. Just don't try to run those VMs at the same time.
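For illustration (the VM IDs and PCI address are made up, not from this thread), attaching the same GPU to two VMs via the Proxmox CLI looks roughly like this; it works as long as only one of the two is started at a time:

Code:
```shell
# Attach the same (hypothetical) GPU at PCI address 01:00.0 to VMs 101 and 102.
# Only one of the two VMs may be running at any given moment.
qm set 101 --hostpci0 01:00.0,pcie=1
qm set 102 --hostpci0 01:00.0,pcie=1
```

Note that `pcie=1` requires the q35 machine type on the VM.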

I can also confirm multiple GPUs passed to one VM, although on the latest, fully updated Proxmox I am now running into issues when adding more than two. I strongly believe those are issues inside the VM, though. A long time ago, on an older Proxmox, I had a 4-GPU setup running. But that was before I went for disk speed rather than number crunching. Grrr. Then again, I might be forgetting that I also used to have a Windows 10 machine back then; that could have been when the 4 GPUs were running. Sorry for my brittle memory.
 

Dunuin

Famous Member
Jun 30, 2020
3,111
664
113
Germany
One can have multiple VMs make use of the same PCIe device. Just don't try to run those VMs at the same time.
I also have a single GT710 that two Win10 VMs are using. Of course, only one VM at a time. I also thought about buying a second GT710, but I have no empty PCIe slots, and my RAM is always full anyway, so it would be hard to run both VMs at the same time: Win10 is so resource-hungry, and passthrough pins the VM's complete RAM.
But one really annoying thing about sharing a GPU is that automated backups won't work. If one VM is running when the backup task starts, the VM that wasn't running can't be backed up, because the GPU is already in use and PVE needs to briefly start the VM to take the backup.
 

diversity

Member
Feb 19, 2020
118
4
18
51
I also have a single GT710 that two Win10 VMs are using. Of course, only one VM at a time. I also thought about buying a second GT710, but I have no empty PCIe slots, and my RAM is always full anyway, so it would be hard to run both VMs at the same time: Win10 is so resource-hungry, and passthrough pins the VM's complete RAM.
But one really annoying thing about sharing a GPU is that automated backups won't work. If one VM is running when the backup task starts, the VM that wasn't running can't be backed up, because the GPU is already in use and PVE needs to briefly start the VM to take the backup.
You can try taming your Windows VMs' hungry memory commit by using
https://github.com/lowleveldesign/process-governor

I am running into memory issues on another Proxmox setup I have and will try that as soon as time allows. Currently far too busy with stuff, this forum being one of them ;)
 

Dunuin

Famous Member
Jun 30, 2020
3,111
664
113
Germany
Windows just has way too much bloatware (at least, that's what I call all the Microsoft services and programs that I don't want but that Windows reinstalls with every big update).
My Debian works fine with 512 MB-1 GB of RAM. For Win10 I need at least 5 GB or it gets way too slow.
 

diversity

Member
Feb 19, 2020
118
4
18
51
One should indeed never run Windows 10, a realization I had many months ago.
One could run Windows Server 2019+, which does not ship crapware that I am aware of.

But even the server edition is not free from calling home/telemetry, so one could run
https://www.oo-software.com/en/shutup10
or something similar to shut down the chattiness.

I for one am loving linux.
But never let it be said that there are no reasons for running windows.
 

diversity

Member
Feb 19, 2020
118
4
18
51
At the risk of going greatly off topic on a thread that has already been solved: I do want to respond, as you have been such a great help to me, so returning the favor is the least I can do.

Windows Server 2019 can run fine with 2 cores and 2 GB. A bare-bones install (including desktop and audio) does not even need the suggested 32 GB of disk space. My clean, secure, and silenced Win 2k19 template is 12 GB and has 2 cores/threads and 2 GB of memory.

Depending on the use case, one will probably need to add disk space / memory / CPUs.
All I still need to figure out is how to tone down the greedy memory allocation a notch or two. I am considering Process Governor later on.
 

diversity

Member
Feb 19, 2020
118
4
18
51
But one really annoying thing about sharing a GPU is that automated backups won't work. If one VM is running when the backup task starts, the VM that wasn't running can't be backed up, because the GPU is already in use and PVE needs to briefly start the VM to take the backup.
Perhaps automated snapshots can help, if one has automated zfs send to a backup location.
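A rough sketch of that idea (the dataset name and target host are placeholders, not from this thread): snapshot the VM's dataset, then replicate it with zfs send/receive:

Code:
```shell
# Placeholder names: vmpool/vm-101-disk-0 and backuphost.
zfs snapshot vmpool/vm-101-disk-0@nightly
zfs send vmpool/vm-101-disk-0@nightly | ssh backuphost zfs receive backup/vm-101-disk-0
# Later runs can send only the delta since the previous snapshot:
# zfs send -i @nightly vmpool/vm-101-disk-0@tonight | ssh backuphost zfs receive backup/vm-101-disk-0
```

Unlike a vzdump backup, this never needs to start the VM, so a GPU being in use doesn't matter.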
 

Dunuin

Famous Member
Jun 30, 2020
3,111
664
113
Germany
Perhaps automated snapshots can help, if one has automated zfs send to a backup location.
I stopped using ZFS snapshots because they consume more space than the same number of PBS backups. And snapshots can't replace a backup, so everything would consume at least double the space.
 

diversity

Member
Feb 19, 2020
118
4
18
51
Adding more than 2 GPUs is working now. I can't really be sure what I did wrong earlier, but I really believe it was a configuration issue inside the VM itself.
 
