Webinterface not showing VM status

mircsicz

Hi all,

after upgrading to v6 I can't see the status of my VMs in the web interface. qm list on the console gives me the expected info, and there are no LXCs to check whether those would show...

Code:
root@pve ~ # pveversion -v
proxmox-ve: 6.0-2 (running kernel: 5.0.21-5-pve)
pve-manager: 6.0-15 (running version: 6.0-15/52b91481)
pve-kernel-helper: 6.0-12
pve-kernel-5.0: 6.0-11
pve-kernel-5.0.21-5-pve: 5.0.21-10
ceph-fuse: 12.2.11+dfsg1-2.1+b1
corosync: 3.0.2-pve4
criu: 3.11-3
glusterfs-client: 5.5-3
ksm-control-daemon: 1.3-1
libjs-extjs: 6.0.1-10
libknet1: 1.13-pve1
libpve-access-control: 6.0-4
libpve-apiclient-perl: 3.0-2
libpve-common-perl: 6.0-8
libpve-guest-common-perl: 3.0-3
libpve-http-server-perl: 3.0-3
libpve-storage-perl: 6.0-11
libqb0: 1.0.5-1
libspice-server1: 0.14.2-4~pve6+1
lvm2: 2.03.02-pve3
lxc-pve: 3.2.1-1
lxcfs: 3.0.3-pve60
novnc-pve: 1.1.0-1
proxmox-mini-journalreader: 1.1-1
proxmox-widget-toolkit: 2.0-9
pve-cluster: 6.0-9
pve-container: 3.0-13
pve-docs: 6.0-9
pve-edk2-firmware: 2.20190614-1
pve-firewall: 4.0-8
pve-firmware: 3.0-4
pve-ha-manager: 3.0-5
pve-i18n: 2.0-3
pve-qemu-kvm: 4.0.1-5
pve-xtermjs: 3.13.2-1
qemu-server: 6.0-16
smartmontools: 7.0-pve2
spiceterm: 3.1-1
vncterm: 1.6-1
zfsutils-linux: 0.8.2-pve2

Code:
root@pve ~ # pveproxy status
running

Code:
root@pve ~ # qm list
      VMID NAME                 STATUS     MEM(MB)    BOOTDISK(GB) PID
       100 pbx                  running    512               16.00 3799
       101 mgr                  running    1024               7.00 1308
       102 dc                   running    12288             55.00 13486

hoping to get a hint on how to get back to a working web interface ;-)
 
what's the output of systemctl status pvestatd ?
 
Hi @fabian

thx for your reply:
Code:
root@pve ~ # systemctl status pvestatd
● pvestatd.service - PVE Status Daemon
   Loaded: loaded (/lib/systemd/system/pvestatd.service; enabled; vendor preset: enabled)
   Active: active (running) since Thu 2019-11-21 11:59:22 CET; 3 days ago
 Main PID: 36694 (pvestatd)
    Tasks: 3 (limit: 9830)
   Memory: 36.2M
   CGroup: /system.slice/pvestatd.service
           ├─24289 /sbin/vgs --separator : --noheadings --units b --unbuffered --nosuffix --options vg_name,vg_size,vg_free,lv_count
           ├─36694 pvestatd
           └─36817 /sbin/vgs --separator : --noheadings --units b --unbuffered --nosuffix --options vg_name,vg_size,vg_free,lv_count

Nov 21 11:59:32 pve pvestatd[36694]: VM 103 qmp command failed - VM 103 qmp command 'balloon' failed - Invalid parameter type for 'value', expected: integer
Nov 21 11:59:32 pve pvestatd[36694]: VM 103 qmp command 'balloon' failed - Invalid parameter type for 'value', expected: integer
Nov 21 11:59:32 pve pvestatd[36694]: VM 104 qmp command failed - VM 104 qmp command 'balloon' failed - Invalid parameter type for 'value', expected: integer
Nov 21 11:59:32 pve pvestatd[36694]: VM 104 qmp command 'balloon' failed - Invalid parameter type for 'value', expected: integer
Nov 21 11:59:32 pve pvestatd[36694]: VM 105 qmp command failed - VM 105 qmp command 'balloon' failed - Invalid parameter type for 'value', expected: integer
Nov 21 11:59:32 pve pvestatd[36694]: VM 105 qmp command 'balloon' failed - Invalid parameter type for 'value', expected: integer
Nov 25 08:15:27 pve systemd[1]: Reloading PVE Status Daemon.
Nov 25 08:15:28 pve pvestatd[19630]: send HUP to 36694
Nov 25 08:15:28 pve systemd[1]: Reloaded PVE Status Daemon.
Nov 25 08:15:28 pve pvestatd[36694]: received signal HUP

BTW: the ballooning issue is already being dealt with in that thread
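
For anyone hitting the same balloon errors: a quick way to look at the configured memory/balloon targets of the affected guests (VMIDs taken from the log above) is something like the sketch below. This only inspects the config, it's not a fix for the bug handled in the other thread.
Code:
# inspect the memory/balloon settings of the guests named in the pvestatd log
for id in 103 104 105; do echo "== VM $id =="; qm config $id | grep -iE 'memory|balloon'; done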
 
can you try force-refreshing the browser window? I assume by "not showing status" you mean some variant of them being greyed out in the tree on the left side? or do you get an error somewhere when viewing VM details?
 
can you try restarting pvestatd (systemctl restart pvestatd)?
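for example (a minimal sketch of the restart plus a quick sanity check):
Code:
# full restart of the status daemon, then check that it came up cleanly
systemctl restart pvestatd
systemctl status pvestatd --no-pager
# the last log lines should contain "starting server" / "Started PVE Status Daemon"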
 
the status output seems to indicate that there was no successful restart. there should be messages like the following in the status/log output for a regular 'reload', like we do on upgrades:
Code:
Nov 25 12:33:59 nora systemd[1]: Reloading PVE Status Daemon.
Nov 25 12:34:00 nora pvestatd[598616]: send HUP to 3867
Nov 25 12:34:00 nora pvestatd[3867]: received signal HUP
Nov 25 12:34:00 nora pvestatd[3867]: server shutdown (restart)
Nov 25 12:34:00 nora systemd[1]: Reloaded PVE Status Daemon.
Nov 25 12:34:00 nora pvestatd[3867]: restarting server

or the following for a full restart/stop, then start:
Code:
Nov 25 12:35:08 nora systemd[1]: Stopping PVE Status Daemon...
Nov 25 12:35:08 nora pvestatd[3867]: received signal TERM
Nov 25 12:35:08 nora pvestatd[3867]: server closing
Nov 25 12:35:08 nora pvestatd[3867]: server stopped
Nov 25 12:35:09 nora systemd[1]: pvestatd.service: Succeeded.
Nov 25 12:35:09 nora systemd[1]: Stopped PVE Status Daemon.
Nov 25 12:35:09 nora systemd[1]: Starting PVE Status Daemon...
Nov 25 12:35:09 nora pvestatd[600521]: starting server
Nov 25 12:35:09 nora systemd[1]: Started PVE Status Daemon.
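
a hedged way to pull those lines from the journal afterwards and see which of the two patterns actually happened:
Code:
# unit log of pvestatd; adjust the time window as needed
journalctl -u pvestatd --since "1 hour ago" --no-pager
# or just the most recent entries
journalctl -u pvestatd -n 20 --no-pager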
 
It's been some time, but I'll try again with a "restart"... updating this post in a minute ;-)

Code:
root@pve ~ # systemctl status pvestatd
● pvestatd.service - PVE Status Daemon
   Loaded: loaded (/lib/systemd/system/pvestatd.service; enabled; vendor preset: enabled)
   Active: active (running) since Mon 2019-11-25 13:24:26 CET; 4min 26s ago
  Process: 56925 ExecStart=/usr/bin/pvestatd start (code=exited, status=0/SUCCESS)
 Main PID: 56926 (pvestatd)
    Tasks: 4 (limit: 9830)
   Memory: 99.0M
   CGroup: /system.slice/pvestatd.service
           ├─24289 /sbin/vgs --separator : --noheadings --units b --unbuffered --nosuffix --options vg_name,vg_size,vg_free,lv_count
           ├─36817 /sbin/vgs --separator : --noheadings --units b --unbuffered --nosuffix --options vg_name,vg_size,vg_free,lv_count
           ├─56926 pvestatd
           └─56981 /sbin/vgs --separator : --noheadings --units b --unbuffered --nosuffix --options vg_name,vg_size,vg_free,lv_count

Nov 25 13:24:25 pve systemd[1]: Stopped PVE Status Daemon.
Nov 25 13:24:25 pve systemd[1]: pvestatd.service: Found left-over process 24289 (vgs) in control group while starting unit. Ignoring.
Nov 25 13:24:25 pve systemd[1]: This usually indicates unclean termination of a previous run, or service implementation deficiencies.
Nov 25 13:24:25 pve systemd[1]: pvestatd.service: Found left-over process 36817 (vgs) in control group while starting unit. Ignoring.
Nov 25 13:24:25 pve systemd[1]: This usually indicates unclean termination of a previous run, or service implementation deficiencies.
Nov 25 13:24:25 pve systemd[1]: Starting PVE Status Daemon...
Nov 25 13:24:26 pve pvestatd[56926]: starting server
Nov 25 13:24:26 pve systemd[1]: Started PVE Status Daemon.

I'll check for those PIDs now ;-)
 
are you using iSCSI or just local LVM? vgs should not block indefinitely... what does cat /proc/24289/stack show (same for the other hanging PIDs)?
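for example, a small sketch that dumps the kernel stacks of all hanging vgs processes at once instead of hard-coding the PIDs:
Code:
# list all vgs processes with runtime and state (D = uninterruptible sleep)
ps -o pid,etime,stat,cmd -C vgs
# dump the kernel stack of each of them
for pid in $(pgrep -x vgs); do echo "== PID $pid =="; cat /proc/$pid/stack; done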
 
I'm running on LVM only...

Code:
root@pve ~ # cat /proc/24289/stack
[<0>] __flush_work+0x138/0x1f0
[<0>] __cancel_work_timer+0x115/0x190
[<0>] cancel_delayed_work_sync+0x13/0x20
[<0>] disk_block_events+0x78/0x80
[<0>] __blkdev_get+0x73/0x550
[<0>] blkdev_get+0x10c/0x330
[<0>] blkdev_open+0x92/0x100
[<0>] do_dentry_open+0x143/0x3a0
[<0>] vfs_open+0x2d/0x30
[<0>] path_openat+0x2bf/0x1570
[<0>] do_filp_open+0x93/0x100
[<0>] do_sys_open+0x177/0x280
[<0>] __x64_sys_openat+0x20/0x30
[<0>] do_syscall_64+0x5a/0x110
[<0>] entry_SYSCALL_64_after_hwframe+0x44/0xa9
[<0>] 0xffffffffffffffff

Code:
root@pve ~ # cat /proc/36817/stack
[<0>] disk_block_events+0x31/0x80
[<0>] __blkdev_get+0x73/0x550
[<0>] blkdev_get+0x10c/0x330
[<0>] blkdev_open+0x92/0x100
[<0>] do_dentry_open+0x143/0x3a0
[<0>] vfs_open+0x2d/0x30
[<0>] path_openat+0x2bf/0x1570
[<0>] do_filp_open+0x93/0x100
[<0>] do_sys_open+0x177/0x280
[<0>] __x64_sys_openat+0x20/0x30
[<0>] do_syscall_64+0x5a/0x110
[<0>] entry_SYSCALL_64_after_hwframe+0x44/0xa9
[<0>] 0xffffffffffffffff
 
any special hardware, especially disks/controllers/...? any messages in 'dmesg -k' about hanging kernel workers/threads? the next step would be a reboot. if it happens again, we can attempt to get more detailed debugging information.
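a rough sketch of that check:
Code:
# kernel messages with readable timestamps; look for hung-task / blocked warnings
dmesg -T | grep -iE 'hung|blocked for more than'
# kernel log of the current boot as an alternative view
journalctl -k -b --no-pager | tail -n 100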
 
