I think I screwed up the cluster. Below are the details about my cluster storage.
pve-dellr330:
$ lvs
  LV   VG  Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  data pve twi-aotz-- <63.38g             40.32  2.64
  root pve -wi-ao----  29.00g
  swap pve -wi-ao----   8.00g
...
...
pve-dellr530:
$ lvs
  LV                VG      Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  dellr530-proxthin lvmthin twi-aotz-- 196.47g             54.27  2.27
  root              pve     -wi-ao----   9.75g
  swap              pve     -wi-ao----  <4.88g
...
...
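
For reference, I think the matching entries in /etc/pve/storage.cfg look roughly like this. I am writing this from memory based on the lvs output above, so the exact option lines may be a bit off:

lvmthin: local-lvm
        thinpool data
        vgname pve
        content rootdir,images

lvmthin: dellr530-proxthin
        thinpool dellr530-proxthin
        vgname lvmthin
        content rootdir,images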
So now whenever I try to migrate a VM or CT from one node to the other, I get an error that storage 'xxx' is not available on the target node. If I instead mark both local-lvm and dellr530-proxthin as available on all nodes, I get an error like this:
2022-06-23 11:10:55 starting migration of CT 100 to node 'pve-dellr330' (192.168.0.141)
2022-06-23 11:10:55 found local volume 'dellr530-proxthin:vm-100-disk-0' (in current VM config)
2022-06-23 11:10:56 Volume group "lvmthin" not found
2022-06-23 11:10:56 Cannot process volume group lvmthin
2022-06-23 11:10:57 command '/sbin/lvs --separator : --noheadings --units b --unbuffered --nosuffix --config 'report/time_format="%s"' --options vg_name,lv_name,lv_size,lv_attr,pool_lv,data_percent,metadata_percent,snap_percent,uuid,tags,metadata_size,time lvmthin' failed: exit code 5
send/receive failed, cleaning up snapshot(s)..
2022-06-23 11:10:58 ERROR: storage migration for 'dellr530-proxthin:vm-100-disk-0' to storage 'dellr530-proxthin' failed - command 'set -o pipefail && pvesm export dellr530-proxthin:vm-100-disk-0 raw+size - -with-snapshots 0 | /usr/bin/ssh -e none -o 'BatchMode=yes' -o 'HostKeyAlias=pve-dellr330' root@192.168.0.141 -- pvesm import dellr530-proxthin:vm-100-disk-0 raw+size - -with-snapshots 0 -allow-rename 1' failed: exit code 5
2022-06-23 11:10:58 aborting phase 1 - cleanup resources
2022-06-23 11:10:58 ERROR: found stale volume copy 'dellr530-proxthin:vm-100-disk-0' on node 'pve-dellr330'
2022-06-23 11:10:58 start final cleanup
2022-06-23 11:10:58 ERROR: migration aborted (duration 00:00:03): storage migration for 'dellr530-proxthin:vm-100-disk-0' to storage 'dellr530-proxthin' failed - command 'set -o pipefail && pvesm export dellr530-proxthin:vm-100-disk-0 raw+size - -with-snapshots 0 | /usr/bin/ssh -e none -o 'BatchMode=yes' -o 'HostKeyAlias=pve-dellr330' root@192.168.0.141 -- pvesm import dellr530-proxthin:vm-100-disk-0 raw+size - -with-snapshots 0 -allow-rename 1' failed: exit code 5
TASK ERROR: migration aborted
Is there any way to fix this? Or once the cluster nodes have been created, can nothing about the storage be changed anymore?
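
In case it matters, this is what I was thinking of trying next, but I have not run it yet, so please treat it as a sketch. The node and storage names are just the ones from above, and I am not sure my pct version actually supports the target-storage option:

# restrict each LVM-thin storage to the node that really has its volume group
pvesm set local-lvm --nodes pve-dellr330
pvesm set dellr530-proxthin --nodes pve-dellr530

# then migrate the container and move its disk onto the other node's local storage
# (restart mode, since it is a CT; --target-storage may not exist in older pct versions)
pct migrate 100 pve-dellr330 --restart --target-storage local-lvm

Would that be the right direction, or is there a better way?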
Thanks.