qm command not responding

chchang

CPU: E3-1220 V3 x2
RAM: 8G
Proxmox VE: 5.1-43

This is a node in my Proxmox cluster. I don't know why or when it started, but I can't run commands from the web UI or the shell.

In the web UI, migrate/backup just doesn't work, and there is no error message.


And the task cannot be stopped.

In the console, the qm command just hangs without any error message.


Where can I find more detailed logs?

Also, I can't see the contents of /etc/pve/pven2; ls, du, and any other command hang just like qm.
 
I would guess you are out of memory and services are hanging. Check your syslog/journal for more information.
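Something along these lines should show memory pressure and any OOM activity (exact journalctl flags depend on your systemd version):

Code:
# check current memory and swap usage
free -h

# show error-level messages from the current boot
journalctl -b -p err

# look for out-of-memory kills in the kernel log
dmesg -T | grep -iE "out of memory|oom"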
 
But all the VMs still work.
syslog
Code:
Jun  5 16:08:32 pven2 pvedaemon[11120]: <root@pam> starting task UPID:pven2:00003044:07CB1A85:5B164500:vzmigrate:109:root@pam:
Jun  5 16:08:33 pven2 corosync[1527]: error   [CPG   ] *** 0x55b4db58ba10 can't mcast to group  state:0, error:12
Jun  5 16:08:33 pven2 corosync[1527]:  [CPG   ] *** 0x55b4db58ba10 can't mcast to group  state:0, error:12
Jun  5 16:16:05 pven2 pmxcfs[1511]: [status] notice: received log
Jun  5 16:17:01 pven2 CRON[13480]: (root) CMD (   cd / && run-parts --report /etc/cron.hourly)
Jun  5 16:18:17 pven2 pmxcfs[1511]: [status] notice: received log
Jun  5 16:18:17 pven2 systemd[1]: Started Session 469 of user root.
Jun  5 16:18:38 pven2 pmxcfs[1511]: [status] notice: received log
Jun  5 16:18:39 pven2 systemd[1]: Started Session 470 of user root.
Jun  5 16:21:52 pven2 pvedaemon[11120]: worker exit
Jun  5 16:21:52 pven2 pvedaemon[1627]: worker 11120 finished
Jun  5 16:21:52 pven2 pvedaemon[1627]: starting 1 worker(s)
Jun  5 16:21:52 pven2 pvedaemon[1627]: worker 14163 started
Jun  5 16:22:51 pven2 pveproxy[10930]: worker exit
Jun  5 16:22:51 pven2 pveproxy[1675]: worker 10930 finished
Jun  5 16:22:51 pven2 pveproxy[1675]: starting 1 worker(s)
Jun  5 16:22:51 pven2 pveproxy[1675]: worker 14293 started
Jun  5 16:23:49 pven2 kernel: [1308420.233789] usb 3-6: USB disconnect, device number 2
Jun  5 16:31:05 pven2 pmxcfs[1511]: [status] notice: received log
Jun  5 16:39:48 pven2 pvedaemon[14163]: worker exit
Jun  5 16:39:48 pven2 pvedaemon[1627]: worker 14163 finished
Jun  5 16:39:48 pven2 pvedaemon[1627]: starting 1 worker(s)
Jun  5 16:39:48 pven2 pvedaemon[1627]: worker 16482 started
Jun  5 16:43:58 pven2 pmxcfs[1511]: [status] notice: received log
Jun  5 16:44:08 pven2 pmxcfs[1511]: [status] notice: received log
Jun  5 16:44:29 pven2 pmxcfs[1511]: [status] notice: received log
Jun  5 16:46:06 pven2 pmxcfs[1511]: [status] notice: received log
Jun  5 16:47:11 pven2 corosync[1527]: notice  [CFG   ] Config reload requested by node 1
Jun  5 16:47:11 pven2 corosync[1527]:  [CFG   ] Config reload requested by node 1
Jun  5 16:47:14 pven2 corosync[1527]: notice  [TOTEM ] A new membership (192.168.11.210:72) was formed. Members joined: 3
Jun  5 16:47:14 pven2 corosync[1527]:  [TOTEM ] A new membership (192.168.11.210:72) was formed. Members joined: 3
Jun  5 16:47:14 pven2 corosync[1527]: notice  [QUORUM] Members[3]: 1 2 3
Jun  5 16:47:14 pven2 corosync[1527]: notice  [MAIN  ] Completed service synchronization, ready to provide service.
Jun  5 16:47:14 pven2 corosync[1527]:  [QUORUM] Members[3]: 1 2 3
Jun  5 16:47:14 pven2 corosync[1527]:  [MAIN  ] Completed service synchronization, ready to provide service.
Jun  5 16:47:19 pven2 pmxcfs[1511]: [status] notice: members: 1/2275, 2/1511, 3/2299
Jun  5 16:47:19 pven2 pmxcfs[1511]: [status] notice: starting data syncronisation
Jun  5 16:47:19 pven2 pmxcfs[1511]: [status] notice: received sync request (epoch 1/2275/00000003)
Jun  5 16:47:20 pven2 pmxcfs[1511]: [status] notice: received all states
Jun  5 16:47:20 pven2 pmxcfs[1511]: [status] notice: all data is up to date
Jun  5 16:47:21 pven2 pmxcfs[1511]: [status] notice: received log
Jun  5 16:50:04 pven2 pveproxy[14293]: worker exit
Jun  5 16:50:04 pven2 pveproxy[1675]: worker 14293 finished
Jun  5 16:50:04 pven2 pveproxy[1675]: starting 1 worker(s)
Jun  5 16:50:04 pven2 pveproxy[1675]: worker 17833 started
Jun  5 16:53:41 pven2 pmxcfs[1511]: [status] notice: received log
Jun  5 16:56:54 pven2 rrdcached[1480]: flushing old values
Jun  5 16:56:54 pven2 rrdcached[1480]: rotating journals
Jun  5 16:56:54 pven2 rrdcached[1480]: started new journal /var/lib/rrdcached/journal/rrd.journal.1528189014.077074
Jun  5 16:56:54 pven2 rrdcached[1480]: removing old journal /var/lib/rrdcached/journal/rrd.journal.1528181814.077051
Jun  5 16:59:05 pven2 pmxcfs[1511]: [status] notice: received log
Jun  5 17:01:06 pven2 pmxcfs[1511]: [status] notice: received log

I added a new node, but I can't migrate from this node to the new node.
Is there any way to stop all services to release the resources?
Or is there any other way to migrate without the qm command?

All my VM disks are stored on a NAS. Is there any way to import the VMs from the NAS? Do I have to create a new VM, copy the file from the NAS, and then rename the disk file? Something like the sketch below is what I imagine, but I'm not sure it's right.
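Here the storage name "nas", the VMID 209, and the mountpoint /mnt/pve/nas are all placeholders; the new VMID has to be unused cluster-wide, and the path depends on how the NAS storage is mounted:

Code:
# on the new node: create an empty VM (no disk yet)
qm create 209 --name recovered-vm --memory 2048 --net0 virtio,bridge=vmbr0

# copy the existing image on the NAS so its name matches the new VMID
mkdir -p /mnt/pve/nas/images/209
cp /mnt/pve/nas/images/109/vm-109-disk-1.qcow2 /mnt/pve/nas/images/209/vm-209-disk-1.qcow2

# attach the renamed disk, make it bootable, and start
qm set 209 --virtio0 nas:209/vm-209-disk-1.qcow2
qm set 209 --bootdisk virtio0
qm start 209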
 
But I can't shut down the VM from the web UI or the shell. Is there any other way to shut down a VM?
 
A normal shutdown from inside the VM, or 'kill -9 <PID>'.
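If you end up killing it: Proxmox keeps a pidfile per running VM, so you can find the PID like this (assuming the usual pidfile location, with VMID 109 as an example):

Code:
# the PID of the running kvm process is stored per VMID
cat /var/run/qemu-server/109.pid

# or find it in the process list
ps aux | grep "[k]vm -id 109"

# force-kill as a last resort
kill -9 $(cat /var/run/qemu-server/109.pid)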
 
I cannot copy the conf file in the shell. I can't even ls /etc/pve/nodes/pven2; any command run in that folder just hangs...
 
I cannot copy the conf file in the shell. I can't even ls /etc/pve/nodes/pven2; any command run in that folder just hangs...
Of course. You need to do it on one of the remaining good nodes in the cluster.
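A minimal sketch, assuming VMID 109 and a new node called pven3 (both placeholders): /etc/pve is cluster-wide, so moving the config file on any quorate node reassigns the VM to the new node. Make sure the VM process is really dead on pven2 first, or you risk corrupting the disk by running it twice:

Code:
# run on a healthy, quorate node -- NOT on pven2
mv /etc/pve/nodes/pven2/qemu-server/109.conf /etc/pve/nodes/pven3/qemu-server/109.conf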
 
