I'm relatively new to Proxmox, and I'm getting ready to migrate production services to new Windows servers on 2 clustered Proxmox nodes (pve01 and pve02), both running Proxmox VE 7.0-8.
I first installed Proxmox on 2 new servers in June 2021 and created this cluster a couple of months ago, and I haven't really had any issues until now. It turns out that I now need to either delete or reformat 2 VMs (700 and 750), one on each node, and I am unable to do so. When I try to either delete or start these VMs, I get errors that I haven't been able to work around; I've posted them below.
To put these errors into context, pve01 contains the local datastore 'datastore01' and pve02 contains the local datastore 'datastore02'. Currently, pve01 cannot see the datastore on pve02 and vice versa, which is not by design; I was certain I had this working originally, but I may be wrong about that. Regardless, every VM uses either its node's local storage or shared storage (on TrueNAS devices), and no VM on one node points to the datastore on the other node.
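For reference, both ZFS storages are defined cluster-wide in /etc/pve/storage.cfg. Reconstructed from memory (so the exact options may differ), the entries look roughly like this, and I don't believe either one has a 'nodes' line restricting it to its own node, which may be why each node tries to activate the other's pool:

    zfspool: datastore01
            pool datastore01
            content images,rootdir

    zfspool: datastore02
            pool datastore02
            content images,rootdir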
ERRORS:
NODE pve01, VM 700 - Destroy -- TASK ERROR: could not activate storage 'datastore02', zfs error: cannot import 'datastore02': no such pool available
NODE pve02, VM 750 - Destroy -- TASK ERROR: could not activate storage 'datastore01', zfs error: cannot import 'datastore01': no such pool available
NODE pve01, VM 700 - Start -- TASK ERROR: timeout: no zvol device link for 'vm-700-disk-0' found after 300 sec found.
NODE pve02, VM 750 - Start -- TASK ERROR: timeout: no zvol device link for 'vm-750-disk-0' found after 300 sec found.
I've also had other errors about locked VMs, which I tried to clear with 'qm unlock' (commands below), but it didn't make a difference.
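The unlock attempts were just the standard command run against each VM ID on its own node:

    qm unlock 700    # run on pve01
    qm unlock 750    # run on pve02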
- added after initial post -
Node pve01 has 2 containers (150, 151) and 4 VMs (100, 101, 500, 700). I've restarted pve01 twice, and all containers & VMs start (and are currently running) without issue, except 700, which is the only one giving me problems on pve01 at this time.
Node pve02 has 2 containers (250, 251) and 5 VMs (200, 201, 202, 300, 750). I haven't been able to restart pve02 because I have to keep one important VM up and running, but all VMs & containers on pve02 start (and are currently running) except 300 & 750.
In the web UI, datastore02 has a question mark beside it on pve01, and datastore01 has a question mark beside it on pve02. I am also unable to migrate VMs between the two nodes.
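If it helps, I can post the output of the following from both nodes:

    cat /etc/pve/storage.cfg
    pvesm status
    zpool list
    zpool status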
Any help is greatly appreciated.