proxmox performance

mmx64

New Member
Jun 30, 2021
Hello, first I want to say, I'm new to Proxmox.

I have Proxmox installed on a Ryzen 9 3900X, 16GB DDR4-3600 CL16 RAM, a 512GB M.2 SATA SSD and a GTX 1060 6GB, and as guest VMs I have installed Pop!_OS 20.10, Win7 and Win10.

Configs pop os:
bios: ovmf
boot: order=scsi0;net0
cores: 4
cpu: max
efidisk0: local-lvm:vm-101-disk-1,size=4M
memory: 4096
name: popos-gpu
net0: e1000=FA:85:3C:8C:D3:0D,bridge=vmbr0,firewall=1
numa: 0
ostype: l26
scsi0: local-lvm:vm-101-disk-0,cache=writeback,size=60G
scsihw: virtio-scsi-single
smbios1: uuid=355a3f8b-621a-4fb3-b3ed-7b90043a18ba
sockets: 1
vmgenid: 61f5bf5d-d8ae-4d94-931e-14010aeeedfe

Configs win7:

agent: 1
boot: order=sata1
cores: 2
cpu: host,hidden=1
machine: pc-q35-5.1
memory: 3000
name: win7
net1: e1000=1A:89:FD:4B:42:3B,bridge=vmbr0,firewall=1
numa: 0
ostype: win7
sata1: local-lvm:vm-103-disk-1,size=32G
sata2: local:iso/virtio-win-0.1.185.iso,media=cdrom,size=402812K
smbios1: uuid=981fe65d-078f-4e05-9659-ef947055ce57
sockets: 1
usb0: host=0471:485d
vga: virtio
vmgenid: cab0f68a-4373-43fb-975f-93c93533e52b


Configs win10:

agent: 1
balloon: 3000
bios: ovmf
boot: order=scsi0;ide2;net0
cores: 8
cpuunits: 4096
efidisk0: local-lvm:vm-100-disk-1,size=4M
hostpci0: 0000:2b:00,pcie=1,x-vga=1
ide2: local:iso/Windows.iso,media=cdrom
kvm: 1
machine: pc-q35-5.2
memory: 4096
name: win10
net0: virtio=6A:74:C3:03:1A:09,bridge=vmbr0,firewall=1
numa: 0
ostype: win10
sata0: local:iso/virtio-win-0.1.185.iso,media=cdrom,size=402812K
scsi0: local-lvm:vm-100-disk-0,size=60G
scsihw: virtio-scsi-pci
smbios1: uuid=dfdc0750-442c-409a-91a1-3a18e2f8ec98
sockets: 1
usb0: host=045e:0719,usb3=1
vmgenid: b7748343-2b99-431b-a8ea-e32c2f2c1551


As I said, I'm new to Proxmox and I don't know if this is normal or not.
I'm experiencing really slow performance of Pop!_OS (I didn't test Windows performance).
I use Pop!_OS as an nginx/PHP/SQL server running a webapp.

The first thing I noticed is that if I attach the GPU to Pop!_OS, my webapp's performance drops by half; for example, a script that runs in 8s takes 16s with the GPU attached.

If the Pop!_OS VM is the only one running, it runs decently (5-8s), yet nowhere near this PC bare metal (around 4-5s) running the same Pop!_OS with the exact same webapp.

Now if I start any other VM, or a VM install is in progress, that script's execution time goes up to 60-150s.
All VMs are installed on the same SSD as Proxmox.

Does anyone have an idea what the problem could be here? Thanks!

[Edit]
I want to add that I installed another Pop!_OS VM running the same app, running simultaneously in Proxmox. When I access them separately, the script finishes in 7-8s.
But if I send the request to both VMs at the same time, one VM finishes in 24.99s and the other in 14.61s.
I checked and the IO delay goes up to 34%.
 
Last edited:
Hello, first I want to say, I'm new to Proxmox.

I have Proxmox installed on a Ryzen 9 3900X, 16GB DDR4-3600 CL16 RAM, a 512GB M.2 SATA SSD and a GTX 1060 6GB, and as guest VMs I have installed Pop!_OS 20.10, Win7 and Win10.

Configs pop os:
bios: ovmf
boot: order=scsi0;net0
cores: 4
cpu: max
efidisk0: local-lvm:vm-101-disk-1,size=4M
memory: 4096
name: popos-gpu
net0: e1000=FA:85:3C:8C:D3:0D,bridge=vmbr0,firewall=1
numa: 0
ostype: l26
scsi0: local-lvm:vm-101-disk-0,cache=writeback,size=60G
scsihw: virtio-scsi-single
smbios1: uuid=355a3f8b-621a-4fb3-b3ed-7b90043a18ba
sockets: 1
vmgenid: 61f5bf5d-d8ae-4d94-931e-14010aeeedfe
E1000 is slow. You might want to switch to VirtIO as the NIC.
You might want to switch your CPU type from the default "kvm64" to "host" for best performance, unless you have multiple nodes and need to migrate VMs between them.
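For example, a rough sketch of both changes from the host shell (assuming VMID 101 from the config above; Linux has the VirtIO NIC driver in the kernel, so no guest-side setup should be needed):

# switch the NIC of VM 101 to VirtIO (keeps the existing MAC and bridge)
qm set 101 --net0 virtio=FA:85:3C:8C:D3:0D,bridge=vmbr0,firewall=1
# use the host CPU type instead of the default
qm set 101 --cpu host
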
Configs win7:

agent: 1
boot: order=sata1
cores: 2
cpu: host,hidden=1
machine: pc-q35-5.1
memory: 3000
name: win7
net1: e1000=1A:89:FD:4B:42:3B,bridge=vmbr0,firewall=1
numa: 0
ostype: win7
sata1: local-lvm:vm-103-disk-1,size=32G
sata2: local:iso/virtio-win-0.1.185.iso,media=cdrom,size=402812K
smbios1: uuid=981fe65d-078f-4e05-9659-ef947055ce57
sockets: 1
usb0: host=0471:485d
vga: virtio
vmgenid: cab0f68a-4373-43fb-975f-93c93533e52b
Here you already set the CPU type to "host", but you are still using the E1000 NIC. And you are using the VirtIO SCSI controller (which is good), but your disk is attached as "sata" instead of "scsi".
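A rough sketch of how the disk could be moved from SATA to SCSI (assuming VMID 103 from the config above, and that the VirtIO storage driver from the attached virtio-win ISO has already been installed in the guest, since Windows will not boot from a VirtIO SCSI disk without it):

# make sure the VirtIO SCSI controller is configured
qm set 103 --scsihw virtio-scsi-pci
# detach the disk from the SATA bus (it shows up as an unused disk)
qm set 103 --delete sata1
# re-attach the same volume on the SCSI bus and boot from it
qm set 103 --scsi0 local-lvm:vm-103-disk-1
qm set 103 --boot order=scsi0
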
Configs win10:

agent: 1
balloon: 3000
bios: ovmf
boot: order=scsi0;ide2;net0
cores: 8
cpuunits: 4096
efidisk0: local-lvm:vm-100-disk-1,size=4M
hostpci0: 0000:2b:00,pcie=1,x-vga=1
ide2: local:iso/Windows.iso,media=cdrom
kvm: 1
machine: pc-q35-5.2
memory: 4096
name: win10
net0: virtio=6A:74:C3:03:1A:09,bridge=vmbr0,firewall=1
numa: 0
ostype: win10
sata0: local:iso/virtio-win-0.1.185.iso,media=cdrom,size=402812K
scsi0: local-lvm:vm-100-disk-0,size=60G
scsihw: virtio-scsi-pci
smbios1: uuid=dfdc0750-442c-409a-91a1-3a18e2f8ec98
sockets: 1
usb0: host=045e:0719,usb3=1
vmgenid: b7748343-2b99-431b-a8ea-e32c2f2c1551
Here you are already using the VirtIO NIC and a SCSI disk, but the CPU type isn't set to "host".
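A one-line sketch, assuming VMID 100 from the config above; since this VM also passes through the GTX 1060, adding hidden=1 (as in the Win7 config) may be needed for older NVIDIA drivers:

# use the host CPU type; hidden=1 hides the KVM flag from the guest
qm set 100 --cpu host,hidden=1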

What SSD model are you using? Consumer SSDs are really crappy at sync writes, random 4K writes and unparallelized IOPS. Enterprise/datacenter SSDs can reach IOPS orders of magnitude higher for such workloads. So it's not unusual to see high IO delay even on an NVMe drive when running server workloads like DBs and so on.
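One way to check this on the host (a sketch, assuming fio is installed and there is room for a 1G test file under /tmp) is to measure random 4K writes with an fsync after each write, which is roughly what a database workload looks like:

# random 4K sync writes, queue depth 1, single job
fio --name=synctest --filename=/tmp/fio-test --size=1G \
    --rw=randwrite --bs=4k --iodepth=1 --numjobs=1 \
    --fsync=1 --runtime=60 --time_based --group_reporting
# clean up the test file afterwards
rm /tmp/fio-test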
 
Last edited:
Hello, thanks for the reply.
I found out that the Intel E1000 works way better than the VirtIO NIC on my system: I get a constant 900-1000 Mbit/s transfer, while the VirtIO NIC was fluctuating between 300 and 800 Mbit/s and never reached 1 Gbit/s.
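For reference, a quick way to reproduce this kind of throughput comparison is iperf3 (a sketch, assuming it is installed on both ends; the IP below is just a placeholder for the VM's address):

# inside the VM (server side)
iperf3 -s
# on another machine on the LAN (client side)
iperf3 -c 192.168.1.50 -t 30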

The Win7 machine was a VirtualBox disk image and I had to change the disk to SATA for it to start.

I have a consumer Plextor SATA SSD, I know it's not the fastest. So you think my problems could come from the SSD?

In Win10 and Win7 I don't really need high performance, they will have small workloads, but the Linux VM needs to deliver the best possible performance.
 
I have a consumer Plextor SATA SSD, I know it's not the fastest. So you think my problems could come from the SSD?
You didn't show us any benchmarks or graphs of the host's hardware utilization. If the SSD is the problem you should see a high IO delay on the host.
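Besides the graphs on the host's summary page, disk load can also be watched live from the shell; a small sketch, assuming the sysstat package for iostat:

# install iostat if it is not available
apt install sysstat
# per-device utilization and wait times, refreshed every 2 seconds
iostat -x 2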
 
I did some more tests; I'm attaching some screenshots and will try to explain.

[Screenshot: win10.png]

This is the Win10 VM running alongside two VMs with Pop!_OS installed.

[Screenshot: proxmox.png]

And the Proxmox graphs: as you can see, when memory usage is around 11G, Windows 10 is running; otherwise only the two Pop!_OS VMs run (these VMs have the same app installed and are triggered at the same time doing the same operation). When Win10 is on I have high IO delay, even when the Win10 machine is practically idle (the very last portion of the graph).

If Win10 is turned off, I have no IO delay and one of the VMs performs the task in around 6s, the other in 8s; depending on which VM I send the first request to, it finishes faster.

Turning off one of the Linux VMs but leaving Win10 running, the task finishes in 7-8s.

I have installed another 256GB SATA SSD and will move one VM to it and test.


[Update]
I have installed the other SSD, so:
SSD-a has the Proxmox install, one Linux VM and the Win10 VM
SSD-b has Win7 and one Linux VM

I start both Linux VMs, run the script, and get:
SSD-a: 7.485s
SSD-b: 7.478s
I start any of the Windows VMs and the script executes in:
SSD-a: 17.335s
SSD-b: 14.713s
 
Last edited:
You didn't show us any benchmarks or graphs of the host's hardware utilization. If the SSD is the problem you should see a high IO delay on the host.

root@pve:~# pveperf
CPU BOGOMIPS: 182406.24
REGEX/SECOND: 4124798
HD SIZE: 93.99 GB (/dev/mapper/pve-root)
BUFFERED READS: 477.62 MB/sec
AVERAGE SEEK TIME: 0.13 ms
FSYNCS/SECOND: 741.06
DNS EXT: 34.38 ms
DNS INT: 0.66 ms (lan)
 
