Adding existing hdd from another vm

andi03
New Member · Jun 10, 2022
I had installed openmediavault in a VM on my PVE host. The VM was a bit buggy, so I removed it, but its 3 disks still exist on another ZFS pool.
Now I have installed a new OMV VM under a new VM ID and want to rename the existing disks to that new VM. After that I plan to reuse the disks by adding them back to the OMV VM.
How can I do that?
 
You can reassign the disk to the new VM via the web UI.

Imagine, your old omv VMID was 100, so the disks are named local-zfs:vm-100-disk-0
Your new omv VMID is 101.

You need a VM with VMID 100 to exist (create a temporary one if necessary).
Run qm rescan on the PVE node.

The orphaned disks will then show up on VM 100 as unused disks; the matching works simply by VMID.
Select Disk Action > Reassign Disk in the menu and select the new OMV VM.
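The same flow can also be sketched on the CLI; this is a hedged sketch assuming VMIDs 100/101 and storage local-zfs, and the move-disk reassign syntax is from PVE 7.2+, so check `qm help move-disk` on your version first:

```shell
#!/bin/sh
# Hedged CLI sketch of the reassignment flow described above.
# Assumes old VMID 100 and new VMID 101; adjust to your setup.
OLD=100
NEW=101
DISK="vm-${OLD}-disk-0"     # ZFS zvols follow the vm-<VMID>-disk-<N> naming scheme
echo "orphaned volume to recover: local-zfs:${DISK}"

# qm only exists on a PVE node; the guard keeps the sketch runnable elsewhere.
if command -v qm >/dev/null 2>&1; then
    qm create "$OLD" --name tmp-disk-holder --memory 512   # temp owner, if VMID 100 is gone
    qm rescan --vmid "$OLD"                                # orphan appears as unused0
    qm move-disk "$OLD" unused0 --target-vmid "$NEW"       # PVE 7.2+ reassign
    qm destroy "$OLD"                                      # remove the temp VM afterwards
fi
```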
 
Thanks @stfl,
nice process, but it doesn't work for me. My VMID 121 has a little secret... This situation has been ongoing since the last OMV problem; I don't think it's an OMV issue.

I created a new temporary VM with ID 121 (as before).
Error 1: can't start the VM now
Code:
trying to acquire lock...
TASK ERROR: can't lock file '/var/lock/qemu-server/lock-121.conf' - got timeout
and
Code:
TASK ERROR: timeout waiting on systemd
When I restart the host I have to hard power it off, but afterwards the VM comes up.

Error 2:
Can't stop the VM
Timeouts in the VM console

Code:
root@pm:~# pveversion
pve-manager/7.4-16/0f39f621 (running kernel: 5.15.116-1-pve)

I've installed two other VMs. They are working fine.

Is there another solution for adding the disks back?
 
You don't need to start the VMs to manipulate the disks.
Run qm rescan on the host and look at the Hardware tab of the VM.

For your locking issue, please run the following and upload the file.

Code:
journalctl -b > /tmp/journal.log
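Separately, if the lock from Error 1 turns out to be stale (no task actually still running against the VM), it can usually be cleared. A hedged sketch, only safe once you have confirmed nothing is still operating on VMID 121:

```shell
#!/bin/sh
# Clear a stale qemu-server lock for VMID 121. Only do this after confirming
# that no backup/migration/disk task is still running against the VM.
VMID=121
LOCKFILE="/var/lock/qemu-server/lock-${VMID}.conf"
echo "lock file: $LOCKFILE"

# qm only exists on a PVE node; skip the calls elsewhere.
if command -v qm >/dev/null 2>&1; then
    qm unlock "$VMID"        # removes the 'lock:' entry from the VM config
    qm status "$VMID"
fi
```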
 
@stfl I created a temporary VM without any OS. The rescan added the four unused disks to the Hardware tab of VMID 121. But when I reassign a disk to owner 122, it has now been stuck for two hours.
My journal is attached.
 

Attachments

  • journal2.log (14.1 KB)
It looks like your underlying ZFS storage hangs while trying to move the disk.
To get more detail on your system please post the results of the following:

Code:
pveversion -v

Code:
zpool status

Code:
zfs list

Code:
ls -l /dev/disk/by-id
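If it helps, the four commands above can be gathered into one file for attaching; a small sketch (the output path is just a suggestion):

```shell
#!/bin/sh
# Collect the requested diagnostics into a single file for the forum post.
# Commands missing on a non-PVE machine are skipped with a note.
OUT=/tmp/pveinfo.txt
: > "$OUT"
for cmd in "pveversion -v" "zpool status" "zfs list" "ls -l /dev/disk/by-id"; do
    printf '### %s\n' "$cmd" >> "$OUT"
    $cmd >> "$OUT" 2>&1 || printf '(command failed or not available)\n' >> "$OUT"
done
echo "wrote $OUT"
```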
 
@stfl Oh shit, something went wrong with my 500 GB SSD.
I can restore the data from backup. Is the SSD now broken, or can I repair it?
 

Attachments

  • pveinfo.txt (9.4 KB)
You are referring to the zpool state of zfs_datastore, right?
Something went wrong with the filesystem on the SSD, possibly due to the hard power-off of the host.
ZFS is trying to evaluate the situation.

Please also post the output of

Code:
zpool status -v
tail -n +1 /etc/pve/qemu-server/12[12].conf /etc/pve/storage.cfg
 
Please also post the output of

Code:
ps auxwf
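In that ps output, processes stuck in uninterruptible sleep (state "D") are the ones that point at hanging ZFS I/O; a quick, generic way to filter for them:

```shell
#!/bin/sh
# Show the ps header plus any processes in uninterruptible sleep ("D" state);
# a zfs/zvol worker stuck in D state indicates hung pool I/O.
ps auxw | awk 'NR == 1 || $8 ~ /^D/'
```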

Please note that your WD Red HDD is an SMR disk, which is known to perform poorly with ZFS.

The scrub process on your SSD is running very slowly, which also indicates an issue with the SSD. Please check if there is a firmware update available for the SSD.
 
The other disks seem to have been reassigned successfully already.
The ZFS process trying to rename (reassign) zfs_datastore/vm-121-disk-0 is hanging.

If you have a backup, please restore that disk.
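Restoring from a vzdump backup can be done with qmrestore; a hedged sketch, where the archive path is purely illustrative (substitute the actual file from your backup storage):

```shell
#!/bin/sh
# Hedged sketch: restore the VM from a vzdump archive onto the ZFS storage.
# The archive path below is illustrative only; use your real backup file.
VMID=122
ARCHIVE="/var/lib/vz/dump/vzdump-qemu-122.vma.zst"
echo "restoring $ARCHIVE to VMID $VMID"

# qmrestore only exists on a PVE node; skip the call elsewhere.
if command -v qmrestore >/dev/null 2>&1; then
    qmrestore "$ARCHIVE" "$VMID" --storage local-zfs --force
fi
```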
 
