So, this disk was used by a VM running Ubuntu 20.04.3 Server. It was not the primary disk, but a secondary one.
It ran full, and ever since, the VM has shown as suspended in the Proxmox console, with a "Status: io-error" label when hovering over the QEMU-ID icon.
Proxmox is version 7.1-7.
From the console of the PVE node, I can confirm that the ZFS pool is healthy:
root@pve2:~# zpool status
  pool: zblock01
 state: ONLINE
  scan: scrub repaired 0B in 01:04:27 with 0 errors on Sun Jul 10 01:28:28 2022
config:

        NAME          STATE     READ WRITE CKSUM
        zblock01      ONLINE       0     0     0
          raidz1-0    ONLINE       0     0     0
            sdb       ONLINE       0     0     0
            sdc       ONLINE       0     0     0
            sdd       ONLINE       0     0     0
            sde       ONLINE       0     0     0
            sdf       ONLINE       0     0     0
            sdg       ONLINE       0     0     0
            sdh       ONLINE       0     0     0
            sdi       ONLINE       0     0     0

errors: No known data errors
I am also able to list the device:
root@pve2:~# zfs list
NAME                     USED  AVAIL     REFER  MOUNTPOINT
zblock01                2.81T     0B      307K  /zblock01
zblock01/vm-104-disk-0  2.81T     0B     2.81T  -
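If I read that output correctly, AVAIL being 0B on the pool root means the pool itself has no free space left, which would also explain the io-error status. I can post the output of these as well if it helps:
zpool list zblock01
zfs list -o space zblock01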
But I am unable to mount it to recover the data.
I am able to stop the VM (qm stop <vm-id>), boot it from a mounted CD ISO, and then list the SCSI devices.
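For reference, the sequence was roughly the following (the VM ID 104 comes from the zvol name above, and the ISO volume path is just an example, not necessarily the exact one I used):
qm stop 104
qm set 104 --cdrom local:iso/ubuntu-20.04.3-live-server-amd64.iso
qm set 104 --boot order=ide2
qm start 104
# inside the live environment, to find the disk:
lsblk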
Unfortunately, I made the mistake of increasing the SCSI disk size from the Proxmox GUI, so now, when I list the corresponding SCSI device, I get:
root@ubuntu-server:/# fdisk /dev/sdb -l
The backup GPT table is not on the end of the device. This problem will be corrected by write.
Disk /dev/sdb: 3.31 TiB, 3633542332416 bytes, 7096762368 sectors
Disk model: QEMU HARDDISK
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: 2ADFAF5E-2D1C-4C80-BAEC-38355D586CFA
Device     Start        End    Sectors Size Type
/dev/sdb1   2048 6291453951 6291451904   3T Linux filesystem
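If I understand the GPT layout correctly, the warning is just a side effect of the resize: the backup GPT header should now sit at the new last sector (7096762368 - 1 = 7096762367), but it is presumably still located where the old, smaller disk ended, just past the partition that stops at sector 6291453951. The partition itself is unchanged at 6291451904 * 512 bytes, roughly 2.93 TiB (shown as 3T by fdisk), so as far as I can tell the data area was never touched by the resize; only the virtual disk got bigger underneath it.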
Now, when I try to mount it with just mount /dev/sdb1 /mnt, the system hangs.
Any help on how I can recover those 2.8 TB of data?
Thanks