Cannot start or delete VM

johnwhite_ca

New Member
Dec 16, 2021
I'm relatively new to Proxmox, and I'm getting ready to migrate production services to new Windows servers on 2 clustered Proxmox nodes (pve01 and pve02), both running VE 7.0-8.

I first installed Proxmox on 2 new servers in June 2021 and created this cluster a couple of months ago, and I haven't really had any issues until now. It turns out that I now have to either delete or reformat 2 VMs (700 and 750), 1 on each node, and I am unable to do either. When I try to delete or start these VMs, I get errors that I haven't been able to work around, which I've posted below.

To put these errors into context: pve01 contains the local datastore 'datastore01' and pve02 contains the local datastore 'datastore02'. Currently, pve01 cannot read the datastore on pve02 and vice versa, though that is not by design. I was certain I had this working originally, but I may be wrong about that. Regardless, all VMs use either local or shared storage (on TrueNAS devices), and no VM on one node points to the datastore on the other node.
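In case it helps with diagnosis, these are the kinds of commands I can run on each node to show what it actually sees (a sketch using standard ZFS and Proxmox tooling, run as root):

# pools this node currently has imported
zpool list

# status of every storage configured in Proxmox, as seen from this node
pvesm status

# try importing the pool the node claims is missing, to surface the underlying error
zpool import datastore02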

ERRORS:
NODE pve01, VM 700 - Destroy -- TASK ERROR: could not activate storage 'datastore02', zfs error: cannot import 'datastore02': no such pool available
NODE pve02, VM 750 - Destroy -- TASK ERROR: could not activate storage 'datastore01', zfs error: cannot import 'datastore01': no such pool available
NODE pve01, VM 700 - Start -- TASK ERROR: timeout: no zvol device link for 'vm-700-disk-0' found after 300 sec found.
NODE pve02, VM 750 - Start -- TASK ERROR: timeout: no zvol device link for 'vm-750-disk-0' found after 300 sec found.
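For the start timeouts, my assumption is that the task is waiting for a device link like /dev/zvol/datastore01/vm-700-disk-0 to appear. A sketch of how to check that by hand, assuming the default /dev/zvol layout:

# confirm the zvol dataset exists
zfs list -t volume datastore01/vm-700-disk-0

# see whether udev has created the matching device link
ls -l /dev/zvol/datastore01/

# ask udev to replay events and recreate missing links
udevadm trigger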

I've also had other errors about locked VMs, which I tried to clear with 'qm unlock', but that didn't make a difference.
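For reference, the unlock attempt was along these lines (VM 700 shown; same idea for 750):

# show whether the config still carries a lock line
qm config 700 | grep lock

# clear the lock
qm unlock 700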

- added after initial post -
Node pve01 has 2 containers (150, 151) and 4 VMs (100, 101, 500, 700). I've restarted pve01 twice, and all containers & VMs can start (and are currently running) without issue. I'm only having a problem with 700 on pve01 at this time.

Node pve02 has 2 containers (250, 251) and 5 VMs (200, 201, 202, 300, 750). I haven't been able to restart pve02 because I have to keep one important VM up and running, but all VMs & containers on pve02 can start (and are currently running) except 300 & 750.

On pve01, datastore02 has a question mark beside it, and on pve02, datastore01 has a question mark beside it. I am unable to migrate VMs between these nodes.

Any help is greatly appreciated.
 
Can you post the VM configs (qm config ID) as well as the storage config (/etc/pve/storage.cfg), please?
 
--- PVE01 ---
root@pve01:~# qm config 700
boot: order=ide0;net0
cores: 2
ide0: datastore01:vm-700-disk-0,cache=writeback,size=100G
machine: pc-i440fx-6.0
memory: 8192
name: TMPDC2012
net0: virtio=56:4E:84:EC:63:4D,bridge=vmbr1,firewall=1
numa: 0
onboot: 1
ostype: win8
scsihw: virtio-scsi-pci
smbios1: uuid=677cd2bc-bec9-4152-9ad4-4ca00fa25f83
sockets: 2
vmgenid: acac856d-76d3-4b61-95b8-a048d3f426fe


/etc/pve/storage.cfg

dir: local
        path /var/lib/vz
        content backup,vztmpl,iso

zfspool: local-zfs
        pool rpool/data
        content images,rootdir
        sparse 1

zfspool: datastore01
        pool datastore01
        content images,rootdir
        mountpoint /datastore01
        nodes pve01,pve02
        sparse 0

nfs: ISOs
        export /mnt/tank/ISOs
        path /mnt/pve/ISOs
        server 192.168.0.210
        content vztmpl,iso
        prune-backups keep-all=1

nfs: VM_Backups
        export /mnt/tank/VM_Backups
        path /mnt/pve/VM_Backups
        server 192.168.0.210
        content backup
        prune-backups keep-all=1

iscsi: tn-exchvol01
        portal 192.168.0.210
        target iqn.2021-08.org.truenas.ctl:tn-exchvol01
        content images

iscsi: tn-sqlvol01
        portal 192.168.0.210
        target iqn.2021-08.org.truenas.ctl:tn-sqlvol01
        content images

iscsi: tn-miscdbvol01
        portal 192.168.0.210
        target iqn.2021-08.org.truenas.ctl:tn-miscdb01
        content images

iscsi: tn-esetvol
        portal 192.168.0.210
        target iqn.2021-08.org.truenas.ctl:tn-esetvol
        content images

zfspool: datastore02
        pool datastore02
        content images,rootdir
        mountpoint /datastore02
        sparse 0


--- PVE02 ---

root@pve02:~# qm config 750
agent: 1
boot: order=ide0
cores: 2
ide0: datastore02:vm-750-disk-0,cache=writeback,size=100G
ide1: datastore02:vm-750-disk-1,cache=writeback,size=500G
ide2: ISOs:iso/SW_DVD9_Windows_Svr_Std_and_DataCtr_2012_R2_64Bit_English_-4_MLF_X19-82891.ISO,media=cdrom,size=5273550K
machine: pc-i440fx-6.0
memory: 16384
name: TMPMX2012
net0: virtio=8E:71:E1:71:7C:50,bridge=vmbr1,firewall=1
numa: 0
onboot: 1
ostype: win8
scsihw: virtio-scsi-pci
smbios1: uuid=d995b592-2c2c-4c3b-b1be-4cb10d3f70a3
sockets: 4
vmgenid: 5126e370-1885-4afb-9e8d-0955ebfd717e


/etc/pve/storage.cfg

dir: local
        path /var/lib/vz
        content backup,vztmpl,iso

zfspool: local-zfs
        pool rpool/data
        content images,rootdir
        sparse 1

zfspool: datastore01
        pool datastore01
        content images,rootdir
        mountpoint /datastore01
        nodes pve01,pve02
        sparse 0

nfs: ISOs
        export /mnt/tank/ISOs
        path /mnt/pve/ISOs
        server 192.168.0.210
        content vztmpl,iso
        prune-backups keep-all=1

nfs: VM_Backups
        export /mnt/tank/VM_Backups
        path /mnt/pve/VM_Backups
        server 192.168.0.210
        content backup
        prune-backups keep-all=1

iscsi: tn-exchvol01
        portal 192.168.0.210
        target iqn.2021-08.org.truenas.ctl:tn-exchvol01
        content images

iscsi: tn-sqlvol01
        portal 192.168.0.210
        target iqn.2021-08.org.truenas.ctl:tn-sqlvol01
        content images

iscsi: tn-miscdbvol01
        portal 192.168.0.210
        target iqn.2021-08.org.truenas.ctl:tn-miscdb01
        content images

iscsi: tn-esetvol
        portal 192.168.0.210
        target iqn.2021-08.org.truenas.ctl:tn-esetvol
        content images

zfspool: datastore02
        pool datastore02
        content images,rootdir
        mountpoint /datastore02
        sparse 0
 
One thing I didn't realize until the end of the day yesterday, after further research, is that the storage pools on both nodes need to be named the same for migration purposes. I'm not sure how this affects local storage and the tasks for VMs 700 & 750, but I definitely want the ability to migrate servers between the two nodes.

At this time, I don't know whether the naming of the storage pools is causing all the issues, or whether I have two separate issues: one affecting migration and one affecting the starting or deletion of select VMs. If the latter is the case, then I need to address the local starting/deletion issue first and foremost.
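If differently named local stores do turn out to be the migration blocker, my understanding (untested, so treat this as an assumption) is that a migration can be pointed at a different storage on the target node, roughly like this:

# offline-migrate VM 700 to pve02, placing its disks on datastore02
qm migrate 700 pve02 --targetstorage datastore02

# for a running VM with local disks, as I understand it, you also need:
qm migrate 700 pve02 --online --with-local-disks --targetstorage datastore02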
 
OK, the error on destruction comes from your storage configuration. We try to activate all storages available on the node. You configured both datastore01 and datastore02 to be available on both nodes, but that's probably not true. As a solution, you can limit which storage is available on which nodes in the GUI: Datacenter -> Storage -> Edit -> select the nodes where it's available.
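If you prefer the CLI, the equivalent would be roughly this (a sketch; adjust the storage and node names to your setup):

# restrict each zfspool storage to the node that actually has the pool
pvesm set datastore01 --nodes pve01
pvesm set datastore02 --nodes pve02

Afterwards the zfspool entries in /etc/pve/storage.cfg should look something like:

zfspool: datastore01
        pool datastore01
        content images,rootdir
        mountpoint /datastore01
        nodes pve01
        sparse 0

zfspool: datastore02
        pool datastore02
        content images,rootdir
        mountpoint /datastore02
        nodes pve02
        sparse 0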
 

Well that was easy! Consider this to be resolved.

Thank you.
 
