Using vncwebsocket via API - LXC works while QEMU doesn't

yswery

Well-Known Member
May 6, 2018
Hi all

Been butting my head against the wall over this issue for the last few days. I was hoping someone might have hit the same problem and found a solution.

Summary:

1) We use the API (tokens only, no cookies) to get the VNC credentials from the vncproxy endpoint, with the websocket flag set to 1 (this works for both LXC and QEMU machines)
2) We then start the VNC WebSocket stream via the vncwebsocket endpoint, passing the returned VNC port and ticket (the ticket also serves as the VNC password). This works flawlessly every time on all LXC machines, but fails on every QEMU machine (all on the same node, same setup); see the sketch just below.
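For reference, here is a minimal sketch of the two calls in Python, using the requests and websocket-client libraries. The host, node, VMID, and token values are placeholders, not our real setup:

Code:
# Minimal sketch of the two-step flow (placeholder host/node/vmid/token).
import urllib.parse

import requests
import websocket  # pip install websocket-client

HOST = "my_name.domain.net"
TOKEN = "PVEAPIToken=root@pam!my-api-user=XXXXXXX-XXXXXXX-XXXXXXX"
NODE, KIND, VMID = "my_node", "lxc", 105  # "qemu" is the failing case

# 1) POST .../vncproxy with websocket=1 to obtain a port and a one-time ticket
resp = requests.post(
    f"https://{HOST}:8006/api2/json/nodes/{NODE}/{KIND}/{VMID}/vncproxy",
    headers={"Authorization": TOKEN},
    data={"websocket": 1},
    verify=False,  # self-signed PVE certificate in this sketch
)
vnc = resp.json()["data"]  # contains "port" and "ticket"

# 2) GET .../vncwebsocket with the returned port and URL-encoded ticket;
#    pveproxy should answer "101 Switching Protocols" on success
query = urllib.parse.urlencode({"port": vnc["port"], "vncticket": vnc["ticket"]})
ws = websocket.create_connection(
    f"wss://{HOST}:8006/api2/json/nodes/{NODE}/{KIND}/{VMID}/vncwebsocket?{query}",
    header=[f"Authorization: {TOKEN}"],
    sslopt={"cert_reqs": 0},  # ssl.CERT_NONE, again for self-signed certs
)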

I can see that the Proxmox codebase for vncproxy and vncwebsocket is completely different between LXC and QEMU, which is why I was wondering whether anyone else has had the same issue, or whether this is a bug.


============

Example of a successful LXC connection:

Snippet from /var/log/pveproxy/access.log:


Code:
::ffff:138.84.555.555 - root@pam!zmy-api-user [03/06/2023:14:08:04 -0400] "POST /api2/json/nodes/my_node/lxc/105/vncproxy HTTP/1.1" 200 2639
::ffff:185.121.168.9 - root@pam!my-api-user [03/06/2023:14:08:09 -0400] "GET /api2/json/nodes/my_node/lxc/105/vncwebsocket?port=5903&vncticket=PVEVNC%3A647B8182%3A%3AiELyrwBfMqydIzb1wi4%2BCc1pr9rk8NYK8HGT5HMIfFFcqBfUaDpv3LnB8Mcokl%2FB46mleFl8u5LhN3seVYTIEL%2Flc6p%2FV9BJX1vvm8XKRMaZFvVD0eKZa9j%2BsscDltuqwFWgfGNBG%2F7gXQrBxa7D9WuoebpTHTacK9n0SirGFqvHugjh0jEN4OttKNci3970ViD6q86lt2tigZPMb8ZngO0IvHylAw5vNy%2BkZPcQrCJ43tHHHETUBAD0%2BvKRO4Can%2B7CSg9U8crubiDPj3TCBI2mN%2B1mdHPo1gzI6F3cT3rVvX7L1bBDr3%2BaA6aSaLSCB%2FW8Dck%2BAOyWY26sve%2BGyQ%3D%3D HTTP/1.1" 101 -

Logs from our nginx reverse proxy (where all API tokens are injected):

JSON:
{
  "time_local": "04/Jun/2023:06:13:41 +1200",
  "http_host": "vnc-proxy.domain.com",
  "remote_addr": "10.52.0.2",
  "proxy_host": "my_name.domain.net:8006",
  "upstream_addr": "10.12.13.14:8006",
  "upstream_uri": "/api2/json/nodes/my_node/lxc/105/vncwebsocket?port=5903&vncticket=PVEVNC%3A647B8182%3A%3AiELyrwBfMqydIzb1wi4%2BCc1pr9rk8NYK8HGT5HMIfFFcqBfUaDpv3LnB8Mcokl%2FB46mleFl8u5LhN3seVYTIEL%2Flc6p%2FV9BJX1vvm8XKRMaZFvVD0eKZa9j%2BsscDltuqwFWgfGNBG%2F7gXQrBxa7D9WuoebpTHTacK9n0SirGFqvHugjh0jEN4OttKNci3970ViD6q86lt2tigZPMb8ZngO0IvHylAw5vNy%2BkZPcQrCJ43tHHHETUBAD0%2BvKRO4Can%2B7CSg9U8crubiDPj3TCBI2mN%2B1mdHPo1gzI6F3cT3rVvX7L1bBDr3%2BaA6aSaLSCB%2FW8Dck%2BAOyWY26sve%2BGyQ%3D%3D",
  "upstream_status": "101"
}

As you can see, the WebSocket upgrade (101) succeeds and everything works great. With the same code, same setup, same node, same everything really, but on a QEMU machine, we get the following:

Example of an unsuccessful QEMU connection:

Code:
::ffff:138.84.555.555 - root@pam!zmy-api-user [03/06/2023:14:00:04 -0400] "POST /api2/json/nodes/my_node/qemu/101/vncproxy HTTP/1.1" 200 2661

Note 1: Only the vncproxy call shows up in /var/log/pveproxy/access.log; the actual vncwebsocket request never appears (I assume because it errors out somewhere).
Note 2: There are no errors or log entries at all in syslog during the failed QEMU vncwebsocket attempts.


Logs from our nginx reverse proxy (where all API tokens are injected), showing a 502 instead of the 101 we get on the LXC machines:

JSON:
{
  "time_local": "04/Jun/2023:02:06:09 +1200",
  "http_host": "vnc-proxy.domain.com",
  "remote_addr": "10.52.0.2",
  "proxy_host": "my_name.domain.net:8006",
  "upstream_addr": "10.12.13.14:8006",
  "upstream_uri": "/api2/json/nodes/my_node/qemu/100/vncwebsocket?port=5902&vncticket=PVEVNC%3A647B48C6%3A%3AflrdYsN7yymtllNtzAtXFQajNZFHc%2BnnAF9S8GpcVh0xxGJLRyvGz6thxEbJq26iWkQtpYWdqCfMJ1v7wx4N6iPgsPEKd06nj0%2BwBafoMHVnEABJCshXIMzhVd%2FXvEu7E35SWivfVxTJXTID7jz17EyAHecS8PGhLKrQJ2zFm3igFDpIEYmlkjYVRyZO68fIlGRGejyT0caXbkP8I7Q7bVsz14LIzdACOidkJyPX%2Beb8U5pyG7R3ElASbmFAZc4xOHNdZvmwncjK%2BcZxTHiuwp07RPxZvhOzY4qUReaJVIEDRD8J8AKOK8dC3lWYU%2BXXwXkBc%2BNdUXeBhgTLeZOjYw%3D%3D",
  "upstream_status": "502"
}


=======

Does anyone know where I might be going wrong, and whether there is something different I need to do for QEMU vs. LXC?


Edit:
Thought it might be useful to also add our (slightly edited) nginx reverse proxy configuration snippet here. This is how we avoid cookie-based auth: we simply inject the API token into the proxied requests and WebSockets (again, this works perfectly with LXC but not with QEMU machines).

Code:
location / {
      # Forward everything to pveproxy on the PVE node (port 8006)
      proxy_pass https://pve_upstream_node:8006;
      # Inject the API token so clients never need cookie/ticket auth
      proxy_set_header "Authorization" "PVEAPIToken=root@pam!zmy-api-user=XXXXXXX-XXXXXXX-XXXXXXX-XXXXXXX-XXXXXXX";

      # Required for the WebSocket upgrade on the vncwebsocket endpoint
      proxy_http_version 1.1;
      proxy_set_header Upgrade $http_upgrade;
      proxy_set_header Connection "upgrade";
}
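With the token injected upstream by nginx, the client connecting through the proxy needs no Authorization header of its own. A hypothetical client call through the proxy (hostname and ticket are placeholders) would look like:

Code:
# Hypothetical client going through the nginx proxy above; nginx injects
# the API token, so no Authorization header is sent from the client side.
import websocket  # pip install websocket-client

ws = websocket.create_connection(
    "wss://vnc-proxy.domain.com/api2/json/nodes/my_node/lxc/105/"
    "vncwebsocket?port=5903&vncticket=<url-encoded-ticket>"
)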
 
Have you solved this problem yet? I have encountered the same problem.
Nginx says: "upstream prematurely closed connection while reading response header from upstream".
No log appears in the PVE access.log.
 
So this was over a year ago and I can't FULLY recall the solution, but I do think it was the following:

Proxmox really didn't like the VNC ticket being created on any node in the cluster other than the one that hosts the VM/CT you are trying to run vncproxy for.

So long story short, forcing the vncticket creation and the vncproxy call to happen on the specific cluster node that the CT/VM lives on solved the issue for me.

Try hard-coding the API calls to the specific node and see if that solves the issue, and let us know. If not, I can try digging deeper into my (working) VNC creation code and get back to you; a sketch of resolving the right node first is below.

EDIT: I see in your other posts you wrote "Sometimes it can work for lxc vncwebsocket, but more often it raise 502.", which leads me to think you are selecting your PVE nodes from the cluster at random. That is in line with why your vncwebsocket creation is failing. Let us know how you get on when hard-coding the node + vmid (as a test).
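For illustration, a sketch of that node pinning, assuming the standard /cluster/resources endpoint (host and token are placeholders):

Code:
# Resolve which cluster node actually hosts a guest before calling
# vncproxy there; /cluster/resources lists guests with their current node.
import requests

def node_for_vmid(host: str, token: str, vmid: int):
    resp = requests.get(
        f"https://{host}:8006/api2/json/cluster/resources?type=vm",
        headers={"Authorization": token},
        verify=False,  # self-signed PVE certificate in this sketch
    )
    for res in resp.json()["data"]:
        if res["vmid"] == vmid:
            return res["node"], res["type"]  # e.g. ("my_node", "qemu")
    raise LookupError(f"vmid {vmid} not found in cluster")

# Then POST .../nodes/<that node>/<type>/<vmid>/vncproxy on that node only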
 
Thanks. I only have one PVE node.
After I restarted the pveproxy service, the vncwebsocket worked, but then I met another problem...

If you are interested, you can check https://forum.proxmox.com/threads/vncwebsocket-502-by-proxy.153834/
 
