A stop job is running...

GoatMaster

Active Member
Oct 4, 2017
I am getting an error every time I want to restart my node.

First I get "A stop job is running for Monitoring of LVM2 mirrors...", and after a while it gets stuck at a watchdog error, so I have to shut the node down manually.

It only happens when I have shared storage connected over iSCSI. There are no VMs running on this node, so there are no snapshots or anything.

How can I get rid of this error?

 
I guess you are using iSCSI storage? If so, check that all VMs/CTs are shut down before doing a reboot/shutdown, as it might be a timing issue on service stop.
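A quick way to double-check that before rebooting (a sketch: `qm` and `pct` are the standard Proxmox CLI tools, but the `count_running` helper is just an illustration and assumes the word "running" only appears in the status column):

```shell
# Count lines containing the word "running" in guest listings.
# Hypothetical helper; could miscount if a guest is *named* "running".
count_running() { grep -cw running; }

# On the node, check both VMs and containers before a reboot;
# both should print 0 if nothing is left running:
#   qm list  | count_running
#   pct list | count_running
```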
 

Currently there are no VMs in use and I still get the error.
 
I'm seeing this same issue (LVM2 monitoring and device mapper) preventing a reboot even with 0 VMs/containers running.

The iSCSI target is a FreeNAS box and I'm using multipath round-robin.
 

I was still not able to fix this issue.
I am now using SMB as storage for the Proxmox cluster.
 
Hi,

Same problem here with the latest PVE 5.2.
iSCSI storages (Synology, OMV), but no VM/CT running on this node.

LVM2 and device-mapper take minutes to stop; I don't understand why...

Christophe.
 
Also confirming this is happening to me with all the VMs powered off. It sometimes takes 20 minutes, 60 minutes, or 4 hours, but it eventually restarts. I am on the latest available version and it happens EVERY SINGLE TIME; I usually just use iDRAC to warm-reboot the device. I believe this is a "bug", as previous versions didn't have this issue. Taking more than 3 minutes is overkill when all the VMs and the shared storage (I only use iSCSI for VMs and local storage for the host itself) are offline.
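The multi-minute wait is usually systemd's per-unit stop timeout. As a stopgap (a sketch, not a fix for the underlying iSCSI/LVM ordering), the global cap can be lowered in `/etc/systemd/system.conf` so a hung stop job cannot block a reboot indefinitely:

```ini
# /etc/systemd/system.conf (excerpt)
[Manager]
# Abort any stop job that takes longer than 3 minutes.
# Note: units that set their own TimeoutStopSec= still override this.
DefaultTimeoutStopSec=3min
```

Run `systemctl daemon-reexec` (or reboot once) for the change to take effect.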
 
Hi,

I had this same issue in my previous lab environment (PVE 5.4.x) when using LVM for the boot disk and running VMs on Btrfs RAID 1 on two extra disks.

The same happens if the VMs are running on a NAS and not using Btrfs at all.

After that I reinstalled everything using ZFS (PVE 5.4.x) and had no more issues.

So I guess it's not the hardware causing this issue.

Now I have replaced the hardware and installed PVE 6.x using ZFS (RAID 1), and there is no issue.

In my case it looks like it only happens with LVM.

Regards,

Ricardo Jorge
 
It seems to me that the problem may be in the iSCSI configuration. The question is how iSCSI is configured: via the web GUI, or in the console with the iscsiadm command. I have this problem with the network shutting down before the iSCSI backend syncs during shutdown.
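One workaround for that ordering problem (a sketch only; the `open-iscsi.service` unit name is the Debian/PVE default and is an assumption here) is a systemd drop-in that orders the iSCSI service after the network. Since systemd stops units in the reverse of their start order, this makes the sessions log out while the network is still up:

```ini
# /etc/systemd/system/open-iscsi.service.d/override.conf
[Unit]
# Start after the network is online, so on shutdown the iSCSI
# sessions are logged out before the network goes down.
After=network-online.target
Wants=network-online.target
```

Apply it with `systemctl daemon-reload`. Alternatively, logging all sessions out manually before a reboot (`iscsiadm -m node -U all`) avoids the hang on that shutdown.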
 
