Hello,
Proxmox VE version 5.3-11
I have two local RAID arrays on my server. Each one is RAID 1 (mirror), and each is its own volume group (pve and vmdata). I had created a VM on the pve set of disks but decided I needed to move it to the vmdata set, so I shut down the VM and performed a storage migration from the GUI. Everything seemed fine, but now it appears the migration has left two separate LVM thin volumes for the VM:
root@pve1-gkh8ww1:~# lvs
  LV            VG     Attr       LSize  Pool    Origin Data%  Meta%  Move Log Cpy%Sync Convert
  data          pve    twi-aotz-- 75.87g                 0.00   0.04
  root          pve    -wi-ao---- 33.75g
  swap          pve    -wi-ao----  8.00g
  vm-100-disk-0 vmdata Vwi-a-tz-- 32.00g vmstore         4.29
  vm-100-disk-1 vmdata Vwi-a-tz-- 32.00g vmstore         6.09
  vmstore       vmdata twi-aotz-- 80.00g                 4.15   2.54
Does anyone know why there would be both vm-100-disk-0 and vm-100-disk-1 after performing a storage migration? I would like to delete the unused logical volume, but I can't tell which one is unused. Has anyone else run into this before?
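In case it helps, this is roughly what I was planning to run to figure out which logical volume the VM actually references before deleting anything (assuming qm config / qm rescan are the right tools here; VMID 100 matches my setup, and the lvremove target is just a placeholder until I know which disk is unused):

root@pve1-gkh8ww1:~# qm config 100 | grep -E 'ide|sata|scsi|virtio|unused'   # show which disks the VM config references
root@pve1-gkh8ww1:~# qm rescan --vmid 100                                    # pick up any unreferenced volumes as unusedN entries in the config
root@pve1-gkh8ww1:~# lvremove vmdata/vm-100-disk-N                           # placeholder; only after confirming disk N is the unused one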