Hoping for some help with this; I'm very new to Proxmox and trying to learn the platform quickly so I can evaluate deploying it across a few clusters.
Right now I have a test cluster set up at my desk, connected via iSCSI to a Synology LUN. I attempted to migrate a VM from one host to another for the first time, and it failed because the VM had snapshots on it. No problem, I was only testing snapshots on this VM anyway; it had two, named Test8 and Test9.
I selected Test8 and told it to delete, and the task produced the output below. The end result is that the VM's config still thinks Test8 exists, but no further snapshot-related operations work because the Test8 qcow2 volume no longer exists. Would love some advice!
(If it matters, Test9 had the running memory state saved while Test8 did not.)
delete qemu external snapshot
delete first snapshot Test8
block-commit Test9 to base:Test8
commit-drive-scsi0: transferred 4.2 MiB of 100.0 GiB (0.00%) in 0s
commit-drive-scsi0: transferred 3.3 GiB of 100.0 GiB (3.34%) in 1s
commit-drive-scsi0: transferred 22.3 GiB of 100.0 GiB (22.32%) in 2s
commit-drive-scsi0: transferred 49.3 GiB of 100.0 GiB (49.34%) in 3s
commit-drive-scsi0: commit-job finished
delete old /dev/Synology-VG/snap_vm-100-disk-0_Test9.qcow2
Logical volume "snap_vm-100-disk-0_Test9.qcow2" successfully removed.
Renamed "snap_vm-100-disk-0_Test8.qcow2" to "snap_vm-100-disk-0_Test9.qcow2" in volume group "Synology-VG"
blockdev replace Test8 by Test9
delete qemu external snapshot
delete first snapshot Test8
block-commit Test9 to base:Test8
commit-drive-efidisk0: transferred 0.0 B of 528.0 KiB (0.00%) in 0s
commit-drive-efidisk0: commit-job finished
delete old /dev/Synology-VG/snap_vm-100-disk-1_Test9.qcow2
Logical volume "snap_vm-100-disk-1_Test9.qcow2" successfully removed.
Renamed "snap_vm-100-disk-1_Test8.qcow2" to "snap_vm-100-disk-1_Test9.qcow2" in volume group "Synology-VG"
blockdev replace Test8 by Test9
TASK ERROR: VM 100 qmp command 'blockdev-reopen' failed - Cannot change the option 'aio'