ProxmoxVE web shows status unknown

adamwitton

New Member
Dec 4, 2023
Hi all,

I am trying to use the clone function to create new VMs. If I execute the process only once, it works fine. However, if I run multiple clone operations simultaneously, the following error occurs:

圖片1.png
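
For reference, the clones are started from the web UI; the CLI equivalent would be roughly the following (a minimal sketch only; template ID 108 comes from the qmclone tasks in the pvedaemon log below, and the target IDs/names are just examples):

# Sketch: starting three full clones of template 108 in parallel (target IDs are examples)
qm clone 108 164 --full --name clone-test-1 &
qm clone 108 165 --full --name clone-test-2 &
qm clone 108 166 --full --name clone-test-3 &
wait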

When the error occurred, I looked through the PVE-related logs but couldn't find any warning messages indicating the cause of this issue.

pveproxy.png
root@proxmox2-2:~# journalctl -efu pveproxy.service

-- Journal begins at Fri 2023-12-01 15:04:50 CST. --
……
Dec 08 13:55:40 proxmox2-2 pveproxy[1408]: worker 679914 finished
Dec 08 13:55:40 proxmox2-2 pveproxy[1408]: starting 1 worker(s)
Dec 08 13:55:40 proxmox2-2 pveproxy[1408]: worker 682954 started
Dec 08 13:55:41 proxmox2-2 pveproxy[682953]: got inotify poll request in wrong process - disabling inotify
Dec 08 13:55:43 proxmox2-2 pveproxy[682953]: worker exit
Dec 08 13:57:57 proxmox2-2 pveproxy[680937]: worker exit
Dec 08 13:57:57 proxmox2-2 pveproxy[1408]: worker 680937 finished
Dec 08 13:57:57 proxmox2-2 pveproxy[1408]: starting 1 worker(s)
Dec 08 13:57:57 proxmox2-2 pveproxy[1408]: worker 683307 started
Dec 08 14:00:08 proxmox2-2 pveproxy[680671]: worker exit
Dec 08 14:00:08 proxmox2-2 pveproxy[1408]: worker 680671 finished
Dec 08 14:00:08 proxmox2-2 pveproxy[1408]: starting 1 worker(s)
Dec 08 14:00:08 proxmox2-2 pveproxy[1408]: worker 683640 started
Dec 08 14:05:10 proxmox2-2 pveproxy[682954]: worker exit
Dec 08 14:05:10 proxmox2-2 pveproxy[1408]: worker 682954 finished
Dec 08 14:05:10 proxmox2-2 pveproxy[1408]: starting 1 worker(s)
Dec 08 14:05:10 proxmox2-2 pveproxy[1408]: worker 684406 started
Dec 08 14:09:57 proxmox2-2 pveproxy[683640]: worker exit
Dec 08 14:09:57 proxmox2-2 pveproxy[1408]: worker 683640 finished
Dec 08 14:09:57 proxmox2-2 pveproxy[1408]: starting 1 worker(s)
Dec 08 14:09:57 proxmox2-2 pveproxy[1408]: worker 684936 started
Dec 08 14:11:02 proxmox2-2 pveproxy[684406]: proxy detected vanished client connection
Dec 08 14:11:02 proxmox2-2 pveproxy[684406]: proxy detected vanished client connection
Dec 08 14:11:03 proxmox2-2 pveproxy[683307]: proxy detected vanished client connection
Dec 08 14:11:20 proxmox2-2 pveproxy[684406]: proxy detected vanished client connection
Dec 08 14:11:20 proxmox2-2 pveproxy[683307]: proxy detected vanished client connection
Dec 08 14:12:35 proxmox2-2 pveproxy[683307]: proxy detected vanished client connection
Dec 08 14:13:34 proxmox2-2 pveproxy[684936]: proxy detected vanished client connection
Dec 08 14:14:09 proxmox2-2 pveproxy[684406]: proxy detected vanished client connection
pvestatd.png
root@proxmox2-2:~# journalctl -efu pvestatd.service

-- Journal begins at Fri 2023-12-01 15:04:50 CST. --
……
Dec 08 13:51:44 proxmox2-2 pvestatd[681158]: status update time (8.053 seconds)
Dec 08 14:03:49 proxmox2-2 pvestatd[681158]: status update time (13.025 seconds)
pvedaemon.png
root@proxmox2-2:~# journalctl -efu pvedaemon.service

-- Journal begins at Fri 2023-12-01 15:04:50 CST. --
……
Dec 08 13:51:18 proxmox2-2 pvedaemon[681054]: <ethan.kao@cns-ldap> starting task UPID:proxmox2-2:000A68D2:0175D2C0:6572AED6:qmclone:108:ethan.kao@cns-ldap:
Dec 08 13:51:25 proxmox2-2 pvedaemon[681052]: <ethan.kao@cns-ldap> starting task UPID:proxmox2-2:000A6919:0175D534:6572AEDD:qmclone:108:ethan.kao@cns-ldap:
Dec 08 14:03:17 proxmox2-2 pvedaemon[681052]: worker exit
Dec 08 14:03:17 proxmox2-2 pvedaemon[681051]: worker 681052 finished
Dec 08 14:03:17 proxmox2-2 pvedaemon[681051]: starting 1 worker(s)
Dec 08 14:03:17 proxmox2-2 pvedaemon[681051]: worker 684142 started
Dec 08 14:03:32 proxmox2-2 pvedaemon[681054]: <ethan.kao@cns-ldap> starting task UPID:proxmox2-2:000A709F:0176F165:6572B1B4:qmclone:108:ethan.kao@cns-ldap:
Dec 08 14:06:46 proxmox2-2 pvedaemon[681054]: worker exit
Dec 08 14:06:46 proxmox2-2 pvedaemon[681051]: worker 681054 finished
Dec 08 14:06:46 proxmox2-2 pvedaemon[681051]: starting 1 worker(s)
Dec 08 14:06:46 proxmox2-2 pvedaemon[681051]: worker 684589 started
Dec 08 14:06:50 proxmox2-2 pvedaemon[684142]: <root@pam> successful auth for user 'sam.huang@cns-ldap'

Additionally, I watched the progress output of the running clone tasks and noticed that the most recently started clone (ID: 166) did not display any progress output.

Although the progress for the other virtual machines reached 100%, "TASK OK" never appeared, even after waiting for a prolonged period.

create.png
statuses.png
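
To cross-check the web UI, the running tasks can also be queried from the CLI (a sketch, assuming the node name proxmox2-2 and the UPID from the pvedaemon log above):

# List the tasks that are still active on this node
pvesh get /nodes/proxmox2-2/tasks --source active
# Show the status of a specific clone task by its UPID
pvesh get /nodes/proxmox2-2/tasks/UPID:proxmox2-2:000A68D2:0175D2C0:6572AED6:qmclone:108:ethan.kao@cns-ldap:/status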

When the unknown state occurs, it first appears on three storages and then quickly spreads to the virtual machines.

未知產生的順序標示.png
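
Since the unknown state starts at the storages, one check I can think of is whether a storage backend is hanging and thereby blocking pvestatd, which would fit the long "status update time" messages above. This is only a sketch of such a check, not output from the actual incident:

# Query all storages; if one backend hangs, this command hangs at that storage too
timeout 30 pvesm status
echo "pvesm exit code: $?"   # 124 would mean the status query timed out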

Under these circumstances, the clone tasks keep running, but the task log view no longer displays any output.

output.png

I restarted the service with systemctl restart pvedaemon and waited about a minute for it to complete. After the web UI was restored, the three virtual machines that were being cloned were left locked.

restart pvedaemon.png

When I unlocked one of the interrupted virtual machines and attempted to delete it, the deletion task ran for a long time and ultimately the unknown state reappeared.

destroy.png
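
For reference, the unlock and delete steps were done roughly like this (a sketch; VMID 166 is just the example from above):

# Remove the leftover lock from the interrupted clone, then delete the half-created guest
qm unlock 166
qm destroy 166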
 
After systemctl restart pvedaemon proved ineffective, I switched to executing systemctl restart pvestatd. While this successfully restored the VMs' status, the storages still showed the original issue.

Error.png

I'm wondering whether this issue could be related to the storage. I attempted the same operation on different nodes and found that each node encountered the same issue:
Node   Hardware                                       Simultaneous clones at which the error occurs
1      CPU 24 Core / Memory 188.84 GB / 93.93 GB      2
2      CPU 24 Core / Memory 188.84 GB / 93.93 GB      4
3      CPU 32 Core / Memory 362.09 GB / 93.93 GB      3

Hardware.png
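
As a temporary workaround while debugging, the clones could be throttled so that no more than the per-node limit from the table above runs at once. This is only a sketch using xargs; template ID 108 and the target IDs are the same examples as before:

# Run at most 2 clone jobs in parallel instead of starting them all at once
printf '%s\n' 164 165 166 | xargs -P2 -I{} qm clone 108 {} --full --name clone-test-{}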

Below are the CPU, memory, and disk I/O metrics at the time when the error occurred:

Grafana.png

Additionally, the VMs configured as templates have the QEMU Guest Agent enabled. I would like to know whether this feature could also have a direct impact.

Agent.png
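
To rule this out, the agent option could be disabled on the template before re-running the test (a sketch; 108 is assumed to be one of the templates mentioned, based on the qmclone tasks above):

# Disable the QEMU Guest Agent option on the template, then repeat the parallel clone test
qm set 108 --agent 0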

This is my pve version.
proxmox-ve: 7.4-1 (running kernel: 5.15.102-1-pve)
pve-manager: 7.4-3 (running version: 7.4-3/9002ab8a)
pve-kernel-5.15: 7.3-3
pve-kernel-5.15.102-1-pve: 5.15.102-1
ceph-fuse: 15.2.17-pve1
corosync: 3.1.7-pve1
criu: 3.15-1+pve-1
glusterfs-client: 9.2-1
ifupdown2: 3.1.0-1+pmx3
ksm-control-daemon: 1.4-1
libjs-extjs: 7.0.0-1
libknet1: 1.24-pve2
libproxmox-acme-perl: 1.4.4
libproxmox-backup-qemu0: 1.3.1-1
libproxmox-rs-perl: 0.2.1
libpve-access-control: 7.4-1
libpve-apiclient-perl: 3.2-1
libpve-common-perl: 7.3-3
libpve-guest-common-perl: 4.2-4
libpve-http-server-perl: 4.2-1
libpve-rs-perl: 0.7.5
libpve-storage-perl: 7.4-2
libspice-server1: 0.14.3-2.1
lvm2: 2.03.11-2.1
lxc-pve: 5.0.2-2
lxcfs: 5.0.3-pve1
novnc-pve: 1.4.0-1
proxmox-backup-client: 2.3.3-1
proxmox-backup-file-restore: 2.3.3-1
proxmox-kernel-helper: 7.4-1
proxmox-mail-forward: 0.1.1-1
proxmox-mini-journalreader: 1.3-1
proxmox-widget-toolkit: 3.6.3
pve-cluster: 7.3-3
pve-container: 4.4-3
pve-docs: 7.4-2
pve-edk2-firmware: 3.20221111-1
pve-firewall: 4.3-1
pve-firmware: 3.6-4
pve-ha-manager: 3.6.0
pve-i18n: 2.11-1
pve-qemu-kvm: 7.2.0-8
pve-xtermjs: 4.16.0-1
qemu-server: 7.4-2
smartmontools: 7.2-pve3
spiceterm: 3.2-2
swtpm: 0.8.0~bpo11+3
vncterm: 1.7-1
zfsutils-linux: 2.1.9-pve1

Here is a benchmark of this server:
CPU BOGOMIPS: 166141.12
REGEX/SECOND: 1805775
HD SIZE: 93.93 GB (/dev/mapper/pve-root)
BUFFERED READS: 423.11 MB/sec
AVERAGE SEEK TIME: 5.49 ms
FSYNCS/SECOND: 69.89
DNS EXT: 179.05 ms
DNS INT: 193.01 ms (cnsdomain.com)
 
