Input lag using NoVNC Console

Hello,

For a long time now, when I access my Windows 10 VM (and all the other ones like macOS, Ubuntu, Windows 11...), I have had tremendous input lag, to the point of frustration. However, when I access the same VM via RDP, the input lag is normal or nonexistent. I have already tried changing the CPU type to host as other posts suggested years ago, but this does not solve the problem.

The VM has 8 GB RAM and 2 sockets x 4 cores assigned, which I think is more than enough to run that machine without lag.

I have a graphics card, but I have never figured out how to make it work in Proxmox, for example assigning some of its resources to a VM.

I have configured the VM like this:

[screenshot: VM hardware configuration]

The x86-64-v2 CPU type with the q35-6.2 machine is the combination with which I have noticed the least lag.
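
For reference, in the VM config file this corresponds to something like the following (VMID 100 and the exact values are just an example):

Code:
# /etc/pve/qemu-server/100.conf (excerpt)
cpu: x86-64-v2
machine: pc-q35-6.2
memory: 8192
sockets: 2
cores: 4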

Any suggestions on how I can reduce or eliminate the input lag of the NoVNC console?

If anyone needs the output of a command, I will be watching this topic closely and will respond as soon as possible.

Thank you
 
SPICE is not an option for you? It performs way better than VNC and also offers way more features. It also comes integrated with PVE, but client software is required.
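
To use SPICE, the VM's display also has to be set to SPICE (qxl); a minimal sketch, assuming VMID 100:

Code:
qm set 100 --vga qxl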
 
SPICE is not an option for you? It performs way better than VNC and also offers way more features. It also comes integrated with PVE, but client software is required.
I tried that option some time ago, but the performance was similar.

Anyway, I can't access SPICE:
[screenshot: SPICE console option locked/greyed out]

I consider noVNC more efficient because I don't have to install anything; I just double-click the machine and that's it.

Is there a way to change the default from noVNC to SPICE when you double-click the virtual machine?

If there is no way to achieve this, SPICE would not be an option.
Anyway, I want to test the current performance of SPICE. I'm going to see how I can fix what's in the previous image, try SPICE, and see if it's worth it.

EDIT: I solved SPICE being locked by changing the Display option.
 
Is there a way to change the default from noVNC to SPICE when you double-click the virtual machine?
When you tell that VM to use SPICE and you double-click that VM, it will start a SPICE session using Virt-Viewer, provided it is installed on your client. Only noVNC can run in the browser without any client.
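
On the client side, installing Virt-Viewer looks roughly like this (assuming a Debian/Ubuntu client; on Windows there is an installer from the virt-manager project):

Code:
sudo apt install virt-viewer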

If you want RDP in a browser you could set up a Guacamole VM and install an RDP server in each VM.

And no VM will feel snappy without a passed-through GPU. Without it you are basically back in the '80s with no graphics acceleration and everything rendered by the CPU.

The best remote desktop latency and image quality you get with something like Parsec, which supports H.264/H.265 GPU-accelerated encoding of the video stream sent to your client machine.
 
When you tell that VM to use SPICE and you double-click that VM, it will start a SPICE session using Virt-Viewer, provided it is installed on your client. Only noVNC can run in the browser without any client.
That's not happening to me. I have configured SPICE with 128 MB, and when I double-click, noVNC still opens automatically. Anyway, when I choose SPICE manually, I get an error on the client:
[screenshot: Virt-Viewer error dialog]

"Cant connect to the graphic server"

And no VM will feel snappy without a passed-through GPU. Without it you are basically back in the '80s with no graphics acceleration and everything rendered by the CPU.
How do I do that? I have an NVIDIA GPU installed, but I don't know how to "activate" it.
 
How do I do that? I have an NVIDIA GPU installed, but I don't know how to "activate" it.
You would also need one GPU for each VM that should have a snappy desktop (and not the cheapest or low-power ones, as those have GPU-accelerated video encoding disabled... so something like a GT 710 or GT 1030 won't work), unless you pay for some expensive enterprise solutions.

That's not happening to me. I have configured SPICE with 128 MB, and when I double-click, noVNC still opens automatically. Anyway, when I choose SPICE manually, I get an error on the client:
[screenshot: Virt-Viewer error dialog]


"Cant connect to the graphic server"
You will have to open that vv-file within a few seconds or it becomes invalid. Best to tell your browser not to download the file but to open it directly using Virt-Viewer.
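
If the browser association doesn't cooperate, you could also launch the file by hand right after the download; a sketch, assuming the browser saved it under this name:

Code:
remote-viewer ~/Downloads/pve-spice.vv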
 
You would also need one GPU for each VM that should have a snappy desktop (and not the cheapest or low-power ones, as those have GPU-accelerated video encoding disabled... so something like a GT 710 or GT 1030 won't work), unless you pay for some expensive enterprise solutions.
As you can see, I have a Tesla K20Xm, so I think it could work for my needs. Is that right?

[screenshot: Tesla K20Xm shown in the hardware list]

You will have to open that vv-file within a few seconds or it becomes invalid. Best to tell your browser not to download the file but to open it directly using Virt-Viewer.
I open it as soon as I see it downloaded; it probably doesn't take more than 2 seconds until I open the file. Anyway, it still doesn't work, and the same message appears.

I see you are on a dual-socket CPU board. Have you enabled NUMA for the VM?
I have tested it, and enabling it or not doesn't solve the performance problem. What is NUMA for?
 
What is NUMA for?
See here.

Just wondering: you set the VM to 2 sockets; what performance gain (if at all) do you find?

I have a Tesla K20Xm, so I think it could work for my needs. Is that right?
IIRC, I don't think that has video acceleration enabled (it uses GK110, I think). It's also 12 years old.

EDIT: I solved SPICE being locked by changing the Display option.
Also try this setting for SPICE (in the GUI): VM (left pane) > Hardware > Display > Edit & set Memory to 64 MB. I found a huge gain on various VMs.
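
On the CLI the equivalent would be something like this (VMID 100 as an example):

Code:
qm set 100 --vga qxl,memory=64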
 
As you can see, I have a Tesla K20Xm, so I think it could work for my needs. Is that right?

[screenshot: Tesla K20Xm shown in the hardware list]
Yes, it has NVENC v1 and NVDEC v1. It's more a question of what codecs it supports. But you can only pass it through to a single VM, and then neither the host nor any other VM/LXC can use it anymore.

I open it as soon as I see it downloaded; it probably doesn't take more than 2 seconds until I open the file. Anyway, it still doesn't work, and the same message appears.
2 seconds should be fine. A few more seconds could be too much.

I have tested it, and enabling it or not doesn't solve the performance problem. What is NUMA for?
So that the hypervisor knows about your two sockets and can handle resources better. Think of a dual-socket system like two single-CPU PCs connected via Ethernet: when you access local hardware it is fast, but accessing hardware over the network on the other machine is slow. Except that here it isn't Ethernet but the inter-socket link connecting both sockets.
Let's say you have a VM with 2 cores and 2 GB of RAM, with CPU1 connected to RAM A+B and CPU2 connected to RAM C+D. With NUMA enabled, PVE will try to assign both vCPUs and the 2 GB of RAM to a single socket, so either CPU1 + RAM A+B or CPU2 + RAM C+D, and all resources can be accessed directly without going over the slow link between the sockets. With NUMA disabled, PVE might not care about that and put one vCPU on CPU1 and one on CPU2 with all 2 GB on RAM C. The vCPU running on CPU2 will then be fast, as the RAM is directly connected to CPU2, but the other vCPU on CPU1 will be slow, as it can't access the RAM directly. There is a similar problem with PCIe slots and onboard hardware: some devices are connected to socket 1 and some to socket 2, and only one socket can access a given device at full performance.
That's why a dual-socket platform will never be as fast as two single-socket machines.
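
Enabling it for a VM is a single flag; a sketch, assuming VMID 100:

Code:
qm set 100 --numa 1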
 
Just wondering: you set the VM to 2 sockets; what performance gain (if at all) do you find?
I usually set up all virtual machines with 2 sockets and 2/4 cores so that the load is divided. It is not a production server; I suppose I do it so as not to "use up" either of the CPUs excessively. This is probably nonsense but, there it is, done.

Also try this setting for SPICE (in the GUI): VM (left pane) > Hardware > Display > Edit & set Memory to 64 MB. I found a huge gain on various VMs.
I have tested it too, but with the same result.

Yes, it has NVENC v1 and NVDEC v1. It's more a question of what codecs it supports. But you can only pass it through to a single VM, and then neither the host nor any other VM/LXC can use it anymore.
You mean that the graphics card can only be used in one virtual machine at a time, right? If I configure passthrough on several machines but only use one at a time, it will work, right?

2 seconds should be fine. A few more seconds could be too much.
I can't understand it then... I also installed the SPICE guest tools in the VM just in case, but the same message still appears...

So that the hypervisor knows about your two sockets and can handle resources better. Think of a dual-socket system like two single-CPU PCs connected via Ethernet: when you access local hardware it is fast, but accessing hardware over the network on the other machine is slow. Except that here it isn't Ethernet but the inter-socket link connecting both sockets.
Let's say you have a VM with 2 cores and 2 GB of RAM, with CPU1 connected to RAM A+B and CPU2 connected to RAM C+D. With NUMA enabled, PVE will try to assign both vCPUs and the 2 GB of RAM to a single socket, so either CPU1 + RAM A+B or CPU2 + RAM C+D, and all resources can be accessed directly without going over the slow link between the sockets. With NUMA disabled, PVE might not care about that and put one vCPU on CPU1 and one on CPU2 with all 2 GB on RAM C. The vCPU running on CPU2 will then be fast, as the RAM is directly connected to CPU2, but the other vCPU on CPU1 will be slow, as it can't access the RAM directly. There is a similar problem with PCIe slots and onboard hardware: some devices are connected to socket 1 and some to socket 2, and only one socket can access a given device at full performance.
That's why a dual-socket platform will never be as fast as two single-socket machines.
If I have understood it well... performance will increase with one physical CPU and NUMA off, and performance will be, more or less, the same with two physical CPUs and NUMA on. Is that right?

EDIT: I think I have enabled PCI passthrough following some guides:
[screenshot: hostpci device added to the VM]
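
Roughly, the steps from those guides boil down to this (the PCI address and VMID are examples, not necessarily mine):

Code:
# 1. Enable the IOMMU (Intel example) in /etc/default/grub, then apply:
#    GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on iommu=pt"
update-grub

# 2. Load the VFIO modules by adding them to /etc/modules:
#    vfio
#    vfio_iommu_type1
#    vfio_pci

# 3. Reboot, find the GPU's PCI address and attach it to the VM:
lspci -nn | grep -i nvidia
qm set 100 --hostpci0 0000:01:00.0,pcie=1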

I have installed the driver in the VM:
[screenshot: NVIDIA driver installed in the guest]

But noVNC is still slow and, well, SPICE throws me that message...
 

You mean that the graphics card can only be used in one virtual machine at a time, right? If I configure passthrough on several machines but only use one at a time, it will work, right?
Yes, but backups, for example, will fail if one of those VMs is running, as a VM needs to be started to be backed up, and you can't start the other VMs if the hardware is already in use.

and performance will be, more or less, the same with two physical CPUs and NUMA on. Is that right?
A single socket with twice-as-powerful hardware will be faster than two sockets with half-as-fast hardware, as you can't fully avoid the link between the sockets. But yes, it should be faster with NUMA enabled than without, as the hypervisor can better optimize the workloads. And then you also don't want to set "sockets: 2" for your VMs unless a VM needs more vCPUs or RAM than a single socket can handle.
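
For example, switching a VM from 2 x 4 to 1 x 8 would be (VMID 100 as an example):

Code:
qm set 100 --sockets 1 --cores 8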
 
AFAIK:

NUMA is a HW-level system control (sometimes available as a BIOS setting) between the OS & the MB/CPU controller(s).

NUMA in PVE is a VM setting that exposes NUMA awareness to the VM & to PVE's resource selection for that VM.

So, for example, if you have NUMA disabled at the HW level, AFAIK it will make no difference at the PVE level.

To check your NUMA status, enter numactl --hardware on the PVE host (or any Linux system), as per this.
 
Yes, but backups, for example, will fail if one of those VMs is running, as a VM needs to be started to be backed up, and you can't start the other VMs if the hardware is already in use.
I understand. Well, I don't use Proxmox for production purposes, but it's good to know.

A single socket with twice-as-powerful hardware will be faster than two sockets with half-as-fast hardware, as you can't fully avoid the link between the sockets. But yes, it should be faster with NUMA enabled than without, as the hypervisor can better optimize the workloads. And then you also don't want to set "sockets: 2" for your VMs unless a VM needs more vCPUs or RAM than a single socket can handle.
Yeah, it seems like 8 vCPUs on one socket works better than 2 sockets with 4 cores each.

To check your NUMA status, enter numactl --hardware on the PVE host (or any Linux system), as per this.
I got this:

Code:
root@proxmox:~# numactl --hardware
available: 2 nodes (0-1)
node 0 cpus: 0 1 2 3 4 5 6 7 16 17 18 19 20 21 22 23
node 0 size: 32096 MB
node 0 free: 30966 MB
node 1 cpus: 8 9 10 11 12 13 14 15 24 25 26 27 28 29 30 31
node 1 size: 32244 MB
node 1 free: 30862 MB
node distances:
node   0   1
  0:  10  20
  1:  20  10

Any suggestions for the SPICE problem?
 
So your SPICE is all up to date.
Some items (from memory) that could hinder SPICE:

1. Antivirus blocking the port. Try disabling the AV & try SPICE again.
2. Certificate issues. SPICE can be finicky about this. Try shutting down the VM & entering pvecm updatecerts --force on the host (see the sketch below). Then restart the VM & try SPICE.
3. Try a different browser. With SPICE, the download/opening procedure of the browser/OS plays a role.
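
For item 2, the sequence would be roughly this (VMID 100 as an example; restarting pveproxy afterwards is often suggested as well, treat that as an assumption):

Code:
qm shutdown 100
pvecm updatecerts --force
systemctl restart pveproxy
qm start 100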
 
So your SPICE is all up to date.
Some items (from memory) that could hinder SPICE:

1. Antivirus blocking the port. Try disabling the AV & try SPICE again.
2. Certificate issues. SPICE can be finicky about this. Try shutting down the VM & entering pvecm updatecerts --force on the host. Then restart the VM & try SPICE.
3. Try a different browser. With SPICE, the download/opening procedure of the browser/OS plays a role.
none of the options worked :/
 
Maybe something is blocking the port used for SPICE.
What does this output on the PVE host:
Code:
ss -tlpn | grep spice
 
Maybe something is blocking the port used for SPICE.
What does this output on the PVE host:
Code:
ss -tlpn | grep spice
Code:
root@proxmox:~# ss -tlpn | grep spice
LISTEN 0      4096               *:3128             *:*    users:(("spiceproxy work",pid=1501,fd=6),("spiceproxy",pid=1500,fd=6))                               
root@proxmox:~#
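
So spiceproxy seems to be listening on TCP 3128. From my client I can at least check whether that port is reachable (hostname is a placeholder):

Code:
nc -zv proxmox.example.com 3128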
 
