Question Marks on Nodes and VMs

Jarvar

Well-Known Member
Aug 27, 2019
317
10
58
I am getting question marks and greyed-out nodes and VMs in the GUI.
The servers keep running, but I am not sure what is really happening.
It happened yesterday to another node as well; when I restarted it, it seemed to work again.
Any help, or at least pointers in the right direction to find out what is happening, would be appreciated.
These nodes are independent of each other, but they are both connected to a PBS server for backups.
Using qm list shows the VM is still running, and I can access the VM through the GUI.
Thank you.
 
Are these nodes running within a cluster?
The issue could be related to pvestatd. This daemon collects the status of nodes, VMs, and storages and makes that information available to the GUI. Try checking the output of pvestatd status the next time the problem occurs, to see if the daemon is still running. You can also try pvestatd restart to see if the problem gets fixed without needing to reboot.
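For convenience, the checks would look roughly like this on the node's shell (the systemctl calls are the equivalent of the built-in daemon commands on a standard installation):
Bash:
# check whether the status daemon is running
pvestatd status
systemctl status pvestatd.service

# restart it without rebooting the node
pvestatd restart
systemctl restart pvestatd.service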
 
Thank you @harry700
It is not part of a cluster; these are stand-alone nodes.

I tried pvestatd status, which shows it is running.
I also did a pvestatd restart, but the same question marks remain.
 
Other than the question marks appearing in the GUI, does everything else still behave as normal (i.e., backups still run, VMs carry out their functions, internet is accessible)?
Do your nodes/VMs happen to depend on any network-connected storage?
Also, could you post your Proxmox version and package versions (pveversion -v)?
 
I thought the server VM was running normally, but I got a call from the office that it was not serving up its SQL Express database. I had to restart the node, since I could no longer start the VM itself.
Here is my pveversion -v

Code:
proxmox-ve: 6.2-1 (running kernel: 5.4.44-2-pve)
pve-manager: 6.2-9 (running version: 6.2-9/4d363c5b)
pve-kernel-5.4: 6.2-4
pve-kernel-helper: 6.2-4
pve-kernel-5.3: 6.1-6
pve-kernel-5.0: 6.0-11
pve-kernel-5.4.44-2-pve: 5.4.44-2
pve-kernel-5.3.18-3-pve: 5.3.18-3
pve-kernel-5.3.13-3-pve: 5.3.13-3
pve-kernel-5.3.13-1-pve: 5.3.13-1
pve-kernel-5.0.21-5-pve: 5.0.21-10
pve-kernel-5.0.21-2-pve: 5.0.21-7
pve-kernel-5.0.15-1-pve: 5.0.15-1
ceph-fuse: 12.2.11+dfsg1-2.1+b1
corosync: 3.0.4-pve1
criu: 3.11-3
glusterfs-client: 5.5-3
ifupdown: 0.8.35+pve1
ksm-control-daemon: 1.3-1
libjs-extjs: 6.0.1-10
libknet1: 1.16-pve1
libproxmox-acme-perl: 1.0.4
libpve-access-control: 6.1-2
libpve-apiclient-perl: 3.0-3
libpve-common-perl: 6.1-5
libpve-guest-common-perl: 3.0-11
libpve-http-server-perl: 3.0-6
libpve-storage-perl: 6.2-3
libqb0: 1.0.5-1
libspice-server1: 0.14.2-4~pve6+1
lvm2: 2.03.02-pve4
lxc-pve: 4.0.2-1
lxcfs: 4.0.3-pve3
novnc-pve: 1.1.0-1
proxmox-mini-journalreader: 1.1-1
proxmox-widget-toolkit: 2.2-9
pve-cluster: 6.1-8
pve-container: 3.1-10
pve-docs: 6.2-4
pve-edk2-firmware: 2.20200531-1
pve-firewall: 4.1-2
pve-ha-manager: 3.0-9
pve-i18n: 2.1-3
pve-qemu-kvm: 5.0.0-9
pve-xtermjs: 4.3.0-1
qemu-server: 6.2-8
smartmontools: 7.1-pve2
spiceterm: 3.1-1
vncterm: 1.6-1
zfsutils-linux: 0.8.4-pve1

The other node is running PVE 6.2-10, but I wasn't able to grab its pveversion output right now as it is offline running some diagnostic tests.
This particular node is not attached to a NAS; it has two USB drives which are passed through to the Windows 2016 Essentials VM.
The node running 6.2-10, however, is attached to NFS storage on a Synology. The NFS share is used for backup purposes only; both nodes store their VMs on local storage on a ZFS pool.
Both are connected to a PBS server for remote backups.
Thanks.
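A quick way to check whether one of the network storages (the Synology NFS share or the PBS datastore) is hanging is to query the storage layer directly; the storage ID below is a placeholder for whatever is defined in /etc/pve/storage.cfg:
Bash:
# list all configured storages and their availability;
# this will hang or show storages as inactive if a network storage is unreachable
pvesm status

# check a single storage with a timeout (replace 'synology-nfs' with your storage ID)
timeout 10 pvesm status --storage synology-nfs || echo "storage not responding"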
 
Hello! Is anyone else experiencing the same problem? I suspect that when PBS goes offline, PVE becomes unstable and the GUI does not respond correctly.
 
Hi,
Hello! Is anyone else experiencing the same problem? I suspect that when PBS goes offline, PVE becomes unstable and the GUI does not respond correctly.
which version are you running (pveversion -v)? With libpve-storage-perl >= 6.2-8 there were changes to improve/fix this. If you're already running a newer version, please provide the output of systemctl status pvestatd.service.
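For reference, the relevant checks can be gathered like this (standard PVE 6.x package and service names assumed):
Bash:
# show the installed storage library version
pveversion -v | grep libpve-storage-perl

# show the status daemon's state and recent log lines
systemctl status pvestatd.service
journalctl -u pvestatd.service --since "1 hour ago"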
 
Hi,

which version are you running (pveversion -v)? With libpve-storage-perl >= 6.2-8 there were changes to improve/fix this. If you're already running a newer version, please provide the output of systemctl status pvestatd.service.
I'm using this version:
Code:
libpve-storage-perl: 6.2-8

Full output of pveversion -v:
Code:
proxmox-ve: 6.2-2 (running kernel: 5.4.65-1-pve)
pve-manager: 6.2-12 (running version: 6.2-12/b287dd27)
pve-kernel-5.4: 6.2-7
pve-kernel-helper: 6.2-7
pve-kernel-5.4.65-1-pve: 5.4.65-1
pve-kernel-5.4.34-1-pve: 5.4.34-2
ceph-fuse: 12.2.11+dfsg1-2.1+b1
corosync: 3.0.4-pve1
criu: 3.11-3
glusterfs-client: 5.5-3
ifupdown: 0.8.35+pve1
ksm-control-daemon: 1.3-1
libjs-extjs: 6.0.1-10
libknet1: 1.16-pve1
libproxmox-acme-perl: 1.0.5
libpve-access-control: 6.1-3
libpve-apiclient-perl: 3.0-3
libpve-common-perl: 6.2-2
libpve-guest-common-perl: 3.1-3
libpve-http-server-perl: 3.0-6
libpve-storage-perl: 6.2-8
libqb0: 1.0.5-1
libspice-server1: 0.14.2-4~pve6+1
lvm2: 2.03.02-pve4
lxc-pve: 4.0.3-1
lxcfs: 4.0.3-pve3
novnc-pve: 1.1.0-1
proxmox-backup-client: 0.9.0-2
proxmox-mini-journalreader: 1.1-1
proxmox-widget-toolkit: 2.3-1
pve-cluster: 6.2-1
pve-container: 3.2-2
pve-docs: 6.2-6
pve-edk2-firmware: 2.20200531-1
pve-firewall: 4.1-3
pve-firmware: 3.1-3
pve-ha-manager: 3.1-1
pve-i18n: 2.2-1
pve-qemu-kvm: 5.1.0-3
pve-xtermjs: 4.7.0-2
qemu-server: 6.2-15
smartmontools: 7.1-pve2
spiceterm: 3.1-1
vncterm: 1.6-2
zfsutils-linux: 0.8.4-pve2

At this point I have already solved the problem on the PBS server, so I don't have the log history anymore.
At the time of the problem I ran service pveproxy restart && service pvestatd restart and the GUI worked again for some time.
I am planning to update the Proxmox servers.
Is there any chance that libpve-storage-perl 6.2-8 is still affected by the problem?
 
At this point I have already solved the problem on the PBS server, so I don't have the log history anymore.
At the time of the problem I ran service pveproxy restart && service pvestatd restart and the GUI worked again for some time.
I am planning to update the Proxmox servers.
Is there any chance that libpve-storage-perl 6.2-8 is still affected by the problem?
If restarting the services helped, maybe they were still using the old version of the storage library for some reason?

Yes, please upgrade and report back if the problem persists.
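A minimal sketch of that upgrade path on a PVE 6.x node (assuming the appropriate PVE repository is already configured):
Bash:
# pull in the latest packages from the configured PVE repositories
apt update && apt dist-upgrade

# confirm the new storage library is installed
pveversion -v | grep libpve-storage-perl

# restart the daemons so they load the updated library
systemctl restart pvestatd.service pveproxy.service pvedaemon.service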
 
Just had this issue in PVE 7.0-11. I added some SSDs with 520-byte blocks. The pvestatd service was still running, and restarting it did nothing. Once the block size was changed to 4K, the gray question mark went away several minutes later. Here are some commands I found helpful when troubleshooting this issue:
Bash:
systemctl -l status pvestatd
tail -f /var/log/syslog
tail -f /var/log/messages
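For anyone hitting the same 520-byte-sector issue: the post above doesn't say which tool was used, but reformatting such drives to a standard block size is commonly done with sg_format from the sg3-utils package. A rough sketch, where /dev/sdX is an example device and the format wipes the drive:
Bash:
# install the SCSI utilities (Debian/PVE)
apt install sg3-utils

# check the current logical block size of the drive
sg_readcap --long /dev/sdX

# low-level format to 4096-byte blocks -- THIS DESTROYS ALL DATA ON THE DRIVE
sg_format --format --size=4096 /dev/sdX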
 
