Issues with SMB/CIFS Storage and VM Disks on It after Install/Delete via Ansible

ener

Hey guys,

so I am "playing" around in a Test-Environment with creating and deleting VMs via Ansible. My issue is the following.

Creating and deleting a VM works a couple of times, but after that the process usually gets stuck while deleting the VM. As the volume for the disk I chose a NAS system that is connected to PVE as SMB/CIFS storage. I never had any issues with it before, but I also only recently started testing Ansible against it.
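For context: as far as I understand, the Ansible module just talks to the PVE API, so the destroy step should boil down to roughly this call (doing it by hand with pvesh here only for illustration):
Code:
root@pve01:~# pvesh delete /nodes/pve01/qemu/126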

So the playbook gets stuck, I check the WebUI and see a pending task from the Ansible user: "VM 126 - Destroy".
Stopping the destroy task in the WebUI didn't work. Killing the process via CLI (ps aux | grep 126) didn't work either.
I managed to detach the disk of the VM in the WebUI, but the now "Unused Disk 0" cannot be deleted either.
I receive the following error:
Code:
can't lock file '/var/lock/qemu-server/lock-126.conf' - got timeout (500)
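I guess the next step would be to find out what is actually holding that lock, but I am not sure these are the right tools for it:
Code:
root@pve01:~# pvesh get /nodes/pve01/tasks --source active
root@pve01:~# fuser -v /var/lock/qemu-server/lock-126.conf
root@pve01:~# lsof /var/lock/qemu-server/lock-126.conf
The idea is that fuser/lsof should show a PID if some worker still has the lock file open.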

Sometimes I also receive
Code:
trying to acquire cfs lock 'storage-synology-diskstation' ...
but I am not 100% sure when exactly. I am currently trying to reproduce it.
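If I understand it correctly, that cfs lock should show up as a directory under /etc/pve/priv/lock, so my plan is to check whether it is left behind after a failed run (that is just an assumption on my side):
Code:
root@pve01:~# ls -la /etc/pve/priv/lock/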

So far only a reboot of PVE has helped, but only for a few more Ansible playbook runs... then everything starts over again.
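What I have not tried yet, instead of a full reboot, is disabling the storage and force-unmounting the share so it gets remounted cleanly; something like this, if that is even a sane approach:
Code:
root@pve01:~# pvesm set synology-diskstation --disable 1
root@pve01:~# umount -f -l /mnt/pve/synology-diskstation
root@pve01:~# pvesm set synology-diskstation --disable 0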

One side effect I already discovered is that I can't "ls" the path where the disk image is located. If I do
Code:
root@pve01:~# ls /mnt/pve/synology-diskstation/images/126/
the whole CLI gets stuck and I have to reconnect the SSH session.
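My guess is that the ls ends up in uninterruptible sleep on the CIFS mount, so next I want to check the kernel log for CIFS errors and look for processes stuck in D state (not sure if that is the right direction):
Code:
root@pve01:~# dmesg | grep -i cifs
root@pve01:~# ps -eo pid,stat,wchan:32,cmd | awk '$2 ~ /D/'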

What does work, for example, is stat:
Code:
root@pve01:~# stat /mnt/pve/synology-diskstation/images/126
  File: /mnt/pve/synology-diskstation/images/126
  Size: 0               Blocks: 0          IO Block: 1048576 directory
Device: 0,49    Inode: 368         Links: 2
Access: (0755/drwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
Access: 2024-07-01 10:24:14.378313600 +0200
Modify: 2024-07-01 10:24:13.162303000 +0200
Change: 2024-07-01 10:24:13.162303000 +0200
 Birth: 2024-07-01 09:52:53.674533900 +0200

So the directory, and presumably the disk image inside it, is there, but I can't access it.


Code:
root@pve01:~# pveversion
pve-manager/8.1.10/4b06efb5db453f29 (running kernel: 6.5.13-3-pve)

It feels like I am overwhelming or killing the connection to the NAS. Is that possible? I am not sure where to start digging. Any advice?
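In case it helps, this is what I was planning to collect next: the mount options of the CIFS share and the pvedaemon/pvestatd logs around a stuck run. Please tell me if other output would be more useful.
Code:
root@pve01:~# mount | grep cifs
root@pve01:~# journalctl -u pvedaemon -u pvestatd --since "1 hour ago"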

Do you need any other info? Just let me know and I will provide it ASAP.
 