Proxmox WebUI not working

NaysKutzu

New Member
Aug 30, 2024
So I have a web UI for Proxmox, and I added 3 nodes. The main node is a Raspberry Pi that is there just for the web UI and nothing else, since I use that Pi with a Cloudflare tunnel. The problem is that one of the nodes shows as if it's loading, but it never loads!
[Attachment: 1725265301798.png]

This happens only on this single PVE node; the others work just fine:

[Attachment: 1725265320942.png]

On the pve1 node there is nothing installed besides Proxmox itself and the VMs and CTs.

I don't really know what to do to fix it. I've tried everything, and managing it from the web UI is pretty much a requirement!
 
Since all the VMs show as online, it looks like the Pi itself can reach the host, but I suspect you just aren't able to connect directly via the VPN tunnel (and haven't tested it from an in-network machine), whereas you can/could for pve2.
In that case static routes or other gateways are the most probable cause.
Open the shell on either of the two nodes and run ssh root@pve to see if you can reach that server. If you can, then we have at least double-checked that the connection is good; next, run ip r and cat /etc/network/interfaces and post the results here.
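As a concrete sketch of those checks, you can also probe the relevant ports directly from the Pi (192.168.0.3 for pve is an assumption here; substitute the node's actual address):

```shell
# Probe SSH (22) and the PVE web/API port (8006) on the pve node.
# 192.168.0.3 is assumed; replace it with your node's address.
for port in 22 8006; do
    if timeout 3 bash -c "</dev/tcp/192.168.0.3/$port" 2>/dev/null; then
        echo "port $port open"
    else
        echo "port $port unreachable"
    fi
done
```

If port 22 answers but 8006 does not, the problem is more likely pveproxy on that node than the network path.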
 
Hi, so yeah, I don't use a VPN; it's just a tunnel to expose port 8006 to the internet so we can manage the servers when we're not on location, so we can use the gateway:

# interfaces(5) file used by ifup(8) and ifdown(8)
# Include files from /etc/network/interfaces.d:
#source /etc/network/interfaces.d/*

auto lo
iface lo inet loopback

iface eth0 inet static

auto vmbr0
iface vmbr0 inet static
        address 192.168.0.2/24
        gateway 192.168.0.1
        bridge-ports eth0
        bridge-stp off
        bridge-fd 0
 
I suppose that interfaces configuration is for the Raspberry Pi. It would be helpful if you could test whether you're able to connect to the other nodes through SSH and whether there is any error, as @sw-omit already mentioned. Also check whether your network has any IP address collisions, especially with any of the nodes (e.g. a VM or another device using the same IP address in the network). Lastly, check the health of your cluster from the Raspberry Pi's shell with pvecm status and journalctl -b 0 -u pveproxy -u pve-cluster.
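When reading that journal output, the crit lines are the ones to look at first. A quick filter (a hypothetical one-liner, same units as above) could be:

```shell
# Show only critical/error lines from the last boot's pve-cluster and
# pveproxy logs. A few "quorum_initialize failed" lines right at boot
# are often just pmxcfs starting before corosync is up; repeated ones
# later point at a real cluster-communication problem.
journalctl -b 0 -u pveproxy -u pve-cluster | grep -E 'crit|error'
```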
 
Aug 19 11:24:21 raspberrypi systemd[1]: Starting pve-cluster.service - The Proxmox VE cluster filesystem...
Aug 19 11:24:21 raspberrypi pmxcfs[1169]: [main] notice: resolved node name 'raspberrypi' to '192.168.0.2' for default node IP address
Aug 19 11:24:21 raspberrypi pmxcfs[1169]: [main] notice: resolved node name 'raspberrypi' to '192.168.0.2' for default node IP address
Aug 19 11:24:21 raspberrypi pmxcfs[1189]: [quorum] crit: quorum_initialize failed: 2
Aug 19 11:24:21 raspberrypi pmxcfs[1189]: [quorum] crit: can't initialize service
Aug 19 11:24:21 raspberrypi pmxcfs[1189]: [confdb] crit: cmap_initialize failed: 2
Aug 19 11:24:21 raspberrypi pmxcfs[1189]: [confdb] crit: can't initialize service
Aug 19 11:24:21 raspberrypi pmxcfs[1189]: [dcdb] crit: cpg_initialize failed: 2
Aug 19 11:24:21 raspberrypi pmxcfs[1189]: [dcdb] crit: can't initialize service
Aug 19 11:24:21 raspberrypi pmxcfs[1189]: [status] crit: cpg_initialize failed: 2
Aug 19 11:24:21 raspberrypi pmxcfs[1189]: [status] crit: can't initialize service
Aug 19 11:24:22 raspberrypi systemd[1]: Started pve-cluster.service - The Proxmox VE cluster filesystem.
Aug 19 11:24:25 raspberrypi systemd[1]: Starting pveproxy.service - PVE API Proxy Server...
Aug 19 11:24:27 raspberrypi pmxcfs[1189]: [status] notice: update cluster info (cluster name server01, version = 2)
Aug 19 11:24:27 raspberrypi pmxcfs[1189]: [status] notice: node has quorum
Aug 19 11:24:27 raspberrypi pmxcfs[1189]: [dcdb] notice: members: 1/1278, 2/1189
Aug 19 11:24:27 raspberrypi pmxcfs[1189]: [dcdb] notice: starting data syncronisation
Aug 19 11:24:27 raspberrypi pmxcfs[1189]: [status] notice: members: 1/1278, 2/1189
Aug 19 11:24:27 raspberrypi pmxcfs[1189]: [status] notice: starting data syncronisation
Aug 19 11:24:27 raspberrypi pmxcfs[1189]: [dcdb] notice: received sync request (epoch 1/1278/00000003)
Aug 19 11:24:27 raspberrypi pmxcfs[1189]: [status] notice: received sync request (epoch 1/1278/00000003)
Aug 19 11:24:27 raspberrypi pmxcfs[1189]: [dcdb] notice: received all states
Aug 19 11:24:27 raspberrypi pmxcfs[1189]: [dcdb] notice: leader is 1/1278
Aug 19 11:24:27 raspberrypi pmxcfs[1189]: [dcdb] notice: synced members: 1/1278
Aug 19 11:24:27 raspberrypi pmxcfs[1189]: [dcdb] notice: waiting for updates from leader
Aug 19 11:24:27 raspberrypi pmxcfs[1189]: [dcdb] notice: update complete - trying to commit (got 2 inode updates)
Aug 19 11:24:27 raspberrypi pmxcfs[1189]: [dcdb] notice: all data is up to date
Aug 19 11:24:27 raspberrypi pmxcfs[1189]: [status] notice: received all states
Aug 19 11:24:27 raspberrypi pmxcfs[1189]: [status] notice: all data is up to date
Aug 19 11:24:28 raspberrypi pveproxy[1562]: starting server
Aug 19 11:24:28 raspberrypi pveproxy[1562]: starting 3 worker(s)
Aug 19 11:24:28 raspberrypi pveproxy[1562]: worker 1563 started
Aug 19 11:24:28 raspberrypi pveproxy[1562]: worker 1564 started
Aug 19 11:24:28 raspberrypi pveproxy[1562]: worker 1565 started
Aug 19 11:24:28 raspberrypi systemd[1]: Started pveproxy.service - PVE API Proxy Server.
Aug 19 11:47:22 raspberrypi pmxcfs[1189]: [dcdb] notice: data verification successful
Aug 19 12:47:22 raspberrypi pmxcfs[1189]: [dcdb] notice: data verification successful
Aug 19 13:47:22 raspberrypi pmxcfs[1189]: [dcdb] notice: data verification successful
Aug 19 13:52:58 raspberrypi pveproxy[1564]: proxy detected vanished client connection
Aug 19 13:52:58 raspberrypi pveproxy[1565]: proxy detected vanished client connection
Aug 19 13:52:58 raspberrypi pveproxy[1565]: proxy detected vanished client connection
Aug 19 13:53:00 raspberrypi pveproxy[1563]: proxy detected vanished client connection
Aug 19 13:53:02 raspberrypi pveproxy[1564]: proxy detected vanished client connection
Aug 19 14:43:46 raspberrypi pveproxy[1563]: proxy detected vanished client connection
Aug 19 14:43:46 raspberrypi pveproxy[1564]: proxy detected vanished client connection
Aug 19 14:46:02 raspberrypi pveproxy[1564]: proxy detected vanished client connection
Aug 19 14:46:02 raspberrypi pveproxy[1564]: proxy detected vanished client connection
Aug 19 14:46:03 raspberrypi pveproxy[1564]: proxy detected vanished client connection
Aug 19 14:46:32 raspberrypi pveproxy[1564]: proxy detected vanished client connection
Aug 19 14:47:22 raspberrypi pmxcfs[1189]: [dcdb] notice: data verification successful
Aug 19 15:09:34 raspberrypi pveproxy[1565]: proxy detected vanished client connection
Aug 19 15:09:49 raspberrypi pveproxy[1563]: proxy detected vanished client connection
Aug 19 15:10:11 raspberrypi pveproxy[1563]: proxy detected vanished client connection
Aug 19 15:11:41 raspberrypi pveproxy[1564]: proxy detected vanished client connection
Aug 19 15:12:10 raspberrypi pveproxy[1565]: proxy detected vanished client connection
Aug 19 15:47:22 raspberrypi pmxcfs[1189]: [dcdb] notice: data verification successful
Aug 19 16:47:22 raspberrypi pmxcfs[1189]: [dcdb] notice: data verification successful
Aug 19 17:47:22 raspberrypi pmxcfs[1189]: [dcdb] notice: data verification successful
Aug 19 18:47:22 raspberrypi pmxcfs[1189]: [dcdb] notice: data verification successful
Aug 19 19:47:22 raspberrypi pmxcfs[1189]: [dcdb] notice: data verification successful
Aug 19 20:47:22 raspberrypi pmxcfs[1189]: [dcdb] notice: data verification successful
Aug 19 21:47:22 raspberrypi pmxcfs[1189]: [dcdb] notice: data verification successful
 
Cluster information
-------------------
Name: server01
Config Version: 3
Transport: knet
Secure auth: on

Quorum information
------------------
Date: Tue Sep 3 10:16:27 2024
Quorum provider: corosync_votequorum
Nodes: 3
Node ID: 0x00000002
Ring ID: 1.838
Quorate: Yes

Votequorum information
----------------------
Expected votes: 3
Highest expected: 3
Total votes: 3
Quorum: 2
Flags: Quorate

Membership information
----------------------
Nodeid Votes Name
0x00000001 1 192.168.0.3
0x00000002 1 192.168.0.2 (local)
0x00000003 1 192.168.0.80
 
I take it the .80 is the Pi that you're using for your tunnel, and the .2 is the "pve" node in this cluster?
If so, you haven't shown the result of ip r, but I suspect you might be missing a route that sends VPN traffic to the .80 instead of the default gateway (other traffic of course SHOULD go to the .1).
The ip r results of pve2 / 192.168.0.3 might also be useful, since it looks like it does work for that node.
 
Wait, in your logs I now notice that it says (or at least thinks) your Pi is on the .2 IP?
Please tell us which device has which (internal) IP, and show the ip r results of both the "pve" and "pve2" nodes.
 
PVE:
Code:
[root@pve nayskutzu]$ ip r
default via 192.168.0.1 dev vmbr0 proto kernel onlink
192.168.0.0/24 dev vmbr0 proto kernel scope link src 192.168.0.3
[root@pve nayskutzu]$ ^C
[root@pve nayskutzu]$

PVE2:
Code:
nayskutzu@pve2:~$ ip r
default via 192.168.0.1 dev vmbr0 proto kernel onlink
192.168.0.0/24 dev vmbr0 proto kernel scope link src 192.168.0.80
nayskutzu@pve2:~$

PI:
Code:
nayskutzu@raspberrypi:~ $ ip r
default via 192.168.0.1 dev vmbr0 proto kernel onlink
172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1 linkdown
172.18.0.0/16 dev br-e590bf63ec6e proto kernel scope link src 172.18.0.1 linkdown
192.168.0.0/24 dev vmbr0 proto kernel scope link src 192.168.0.2
nayskutzu@raspberrypi:~ $ ^C
 
Very strange that PVE2 works while PVE does not with the same config...
You could try adding a (temporary) static route to check whether that is the issue here, by running the following on PVE(1):
Code:
ip route add 172.17.0.0/16 via 192.168.0.2 dev vmbr0
ip route add 172.18.0.0/16 via 192.168.0.2 dev vmbr0
If you want to revert, just run the same commands with "delete" instead of "add".
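Spelled out, the revert would be:

```shell
# Remove the temporary routes again (same commands, verb swapped):
ip route delete 172.17.0.0/16 via 192.168.0.2 dev vmbr0
ip route delete 172.18.0.0/16 via 192.168.0.2 dev vmbr0
```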
 
Have you checked that you have no IPv4 address collisions in your network, i.e. that 192.168.0.3 is used only by your pve node? Have you also checked the output of pvecm status from the pve node itself? Have you tried restarting the following services on your nodes?

Code:
systemctl restart pve-cluster
systemctl restart pveproxy
systemctl restart pvedaemon
systemctl restart pvestatd
 
Am I missing something here? The Raspberry Pi is ARM architecture, a platform not supported by Proxmox itself.
 
Yeah, it's not officially supported, but you can still install it. The issue is not related to the Pi or the ARM architecture itself, though: pve runs on an AM4 CPU, which is AMD, not ARM, so that should not be a problem. I'm looking for support for the pve node, which is an AMD machine with native Proxmox installed from the ISO, same as pve2. For some reason pve broke, and it's been like this for about a month!
 
but the issue is not related to the PI or the arm architecture itself
I can't completely agree with that, since you have all three nodes clustered together, so the impact can be cluster-wide.
Try removing the Pi from the cluster and replacing it with a natively installed Proxmox node.
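If you want to test that, the removal is done with pvecm; roughly (assuming the Pi's node name is "raspberrypi" — run this from a node that stays in the cluster, after powering the Pi off):

```shell
# Confirm current membership first:
pvecm status

# Then remove the powered-off Pi from the cluster:
pvecm delnode raspberrypi
```

Note that a node removed this way should not be powered back on with its old cluster config; it needs a reinstall (or manual cleanup) before it can ever rejoin.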
 
I tried that and it does not help. Besides, if the Pi were at fault, the pve2 node should not work either!
 
