I have one VM: a Windows DC server, and the server is working correctly.
I'd like to swap the array in this node (SSD for SAS), so the current array will be destroyed. One disk has already failed in the current SAS array (RAID6 of 8 disks).
But... I can do nothing with this machine. I can't back it up, I can't migrate it, and I can't copy the disk file to another array.
Examples:
Of course there was no stale copy on pve2 before, but the migration task creates a 127 dir and a 5 MB vm-127-disk-0.qcow2 file on pve2. If I delete this dir and run it again, the result is the same: a 5 MB qcow2 file instead of 32 GB.

2022-09-20 18:41:35 starting migration of VM 127 to node 'pve2' (192.168.3.78)
2022-09-20 18:41:36 found local disk 'backup:127/vm-127-disk-0.qcow2' (via storage)
2022-09-20 18:41:36 found local disk 'local:127/vm-127-disk-0.qcow2' (in current VM config)
2022-09-20 18:41:36 copying disk images
cannot import format raw+size into a file of format qcow2
send/receive failed, cleaning up snapshot(s)..
2022-09-20 18:41:37 ERROR: Failed to sync data - command 'set -o pipefail && pvesm export backup:127/vm-127-disk-0.qcow2 raw+size - -with-snapshots 0 | /usr/bin/ssh -e none -o 'BatchMode=yes' -o 'HostKeyAlias=pve2' root@192.168.3.78 -- pvesm import backup:127/vm-127-disk-0.qcow2 raw+size - -with-snapshots 0' failed: exit code 255
2022-09-20 18:41:37 aborting phase 1 - cleanup resources
2022-09-20 18:41:37 ERROR: found stale volume copy 'backup:127/vm-127-disk-0.qcow2' on node 'pve2'
2022-09-20 18:41:37 ERROR: migration aborted (duration 00:00:03): Failed to sync data - command 'set -o pipefail && pvesm export backup:127/vm-127-disk-0.qcow2 raw+size - -with-snapshots 0 | /usr/bin/ssh -e none -o 'BatchMode=yes' -o 'HostKeyAlias=pve2' root@192.168.3.78 -- pvesm import backup:127/vm-127-disk-0.qcow2 raw+size - -with-snapshots 0' failed: exit code 255
TASK ERROR: migration aborted
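By the way, instead of deleting that directory by hand, I assume the leftover volume could also be removed through the storage layer on pve2 before retrying the migration (the volume ID is the one from the "stale volume copy" line above; please correct me if this is wrong):

# run on pve2: free the partial 5 MB volume left behind by the failed migration
pvesm free backup:127/vm-127-disk-0.qcow2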
It seems the original qcow2 file on pve4 is corrupted, but the Windows server can still run from it.
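To tell whether the qcow2 metadata itself is damaged, or whether it is only the underlying array throwing read errors, I assume something like qemu-img check could be run against the image (path taken from the local storage shown below; I believe -U is needed while VM 127 is still running):

# read-only consistency check; -U (force share) because the VM is still using the image
qemu-img check -U /var/lib/vz/images/127/vm-127-disk-0.qcow2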
Backup example.
Again only 5 MB are transferred.

INFO: starting new backup job: vzdump 127 --mode snapshot --storage backup --node pve4 --remove 0 --compress lzo
INFO: Starting Backup of VM 127 (qemu)
INFO: Backup started at 2022-09-20 18:46:06
INFO: status = running
INFO: update VM 127: -lock backup
INFO: VM Name: DC03
INFO: include disk 'ide0' 'local:127/vm-127-disk-0.qcow2' 32G
INFO: backup mode: snapshot
INFO: ionice priority: 7
INFO: creating archive '/mnt/pve/backup/dump/vzdump-qemu-127-2022_09_20-18_46_06.vma.lzo'
INFO: started backup task 'cdb62a0c-1114-455e-a6eb-bf9cb6b7b0c4'
INFO: status: 0% (5898240/34359738368), sparse 0% (5193728), duration 1, read/write 5/0 MB/s
ERROR: job failed with err -5 - Input/output error
INFO: aborting backup job
ERROR: Backup of VM 127 failed - job failed with err -5 - Input/output error
INFO: Failed at 2022-09-20 18:46:09
INFO: Backup job finished with errors
TASK ERROR: job errors
Or the copy error:
root@pve4:/var/lib/vz/images/127# cp vm-127-disk-0.qcow2 /ssd/images/127
cp: error reading 'vm-127-disk-0.qcow2': Input/output error
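Would a copy tool that tolerates read errors be an option for getting the file off the failing array? A rough sketch of what I mean (gddrescue has to be installed first, the VM would be shut down, the map file path is just an example, and anything that cannot be read would of course stay damaged in the copy):

apt install gddrescue
# copy the image, skipping unreadable areas and recording them in a map file
ddrescue -v /var/lib/vz/images/127/vm-127-disk-0.qcow2 /ssd/images/127/vm-127-disk-0.qcow2 /root/vm-127.map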
I think the VM disk file must be corrupted, but VM 127 is running correctly.
Is there a way to fix it?
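Or, since Windows can apparently still read all of its data, would it make sense to attach a second, empty disk from the new storage and clone the system inside the guest? Something like the following, where "ssd" is only my guess at the storage ID behind /ssd:

# attach a new empty 32G qcow2 disk to VM 127 on the SSD-backed storage
qm set 127 --sata1 ssd:32,format=qcow2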