Hi there
I'm not sure whether this is known or is supposed to work this way; I'd say it's a bug. Earlier today I had a storage failure: the pool was still online, but something went wrong and reading and writing were no longer possible (Input/output errors).
This meant that a container using that storage was also no longer available.
I wanted to get that container back up, so I went to the Proxmox web interface, selected a backup, changed the target storage to a working pool and clicked restore.
However, instead of actually restoring the container, it deleted it because the task failed. Here is the log:
Code:
recovering backed-up configuration from 'BACKUP-1:backup/vzdump-lxc-103-2021_06_20-01_19_28.tar.zst'
Failed to parse thin params: Error.
Failed to parse thin params: Error.
Failed to parse thin params: Error.
Failed to parse thin params: Error.
device-mapper: message ioctl on (253:4) failed: Input/output error
Failed to process message "delete 2".
Failed to suspend SSD/SSD with queued messages.
TASK ERROR: unable to restore CT 103 - lvremove 'SSD/vm-103-disk-0' error: Failed to update pool SSD/SSD.
I assume it tried to write some changes to the SSD storage pool (which was no longer available), failed, and that caused the container to be removed from the Proxmox interface.
I manually restored it via the CLI, but it seems odd that a failed restore removes the container from the interface entirely. I would expect it to stop, or to ignore the drive error and just restore the container to the other specified storage anyway.
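For reference, this is roughly how I did the manual restore from the CLI; a minimal sketch, assuming the backup storage is called BACKUP-1 (as in the log above) and the working target pool is called, say, local-lvm (adjust names to your setup):
Code:
# Restore CT 103 from the backup archive onto a working pool instead of the failed SSD pool
pct restore 103 BACKUP-1:backup/vzdump-lxc-103-2021_06_20-01_19_28.tar.zst --storage local-lvm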
Is this really how it should be or is this a bug?
Regards
Sigfried