[SOLVED] How to recover files from a VM disk on a ZFS pool

tedxuyuan

Jun 24, 2018
Hi,
Please help!
My Proxmox can't boot; it says: no such device: xxxxxxxxxx
I tried a lot of things to repair this issue, but I failed.

Now the important thing is to recover the files in my VM, but since the storage is a ZFS pool, I can't see the files on the VM disk (in my case vm-100-disk-1).

How can I mount this VM disk? I am new to block storage, please help!

Code:
root@ubuntu:/# zpool list
NAME    SIZE  ALLOC   FREE  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
rpool  3.62T  1.43T  2.20T         -     9%    39%  1.00x  ONLINE  /mmm

Code:
root@ubuntu:/# zpool status
  pool: rpool
 state: ONLINE
  scan: scrub repaired 0B in 6h12m with 0 errors on Mon Feb 25 00:04:56 2019
config:

    NAME        STATE     READ WRITE CKSUM
    rpool       ONLINE       0     0     0
      mirror-0  ONLINE       0     0     0
        sdc2    ONLINE       0     0     0
        sdb2    ONLINE       0     0     0
    logs
      sda1      ONLINE       0     0     0
    cache
      sda2      ONLINE       0     0     0

errors: No known data errors

Code:
root@ubuntu:/# zfs list
NAME                       USED  AVAIL  REFER  MOUNTPOINT
rpool                     1.43T  2.08T    96K  /mmm/rpool
rpool/ROOT                11.3G  2.08T    96K  /mmm/rpool/ROOT
rpool/ROOT/pve-1          11.3G  2.08T  11.3G  /mmm
rpool/data                1.41T  2.08T    96K  /mmm/rpool/data
rpool/data/vm-100-disk-0  13.7G  2.08T  13.7G  -
rpool/data/vm-100-disk-1   617G  2.08T   617G  -
rpool/data/vm-100-disk-2   806G  2.08T   806G  -
rpool/data/vm-101-disk-0  11.6G  2.08T  11.6G  -
rpool/data/vm-101-disk-1    88K  2.08T    88K  -
rpool/swap                8.50G  2.08T  2.41G  -

Code:
root@ubuntu:/# zfs get all rpool
NAME   PROPERTY              VALUE                  SOURCE
rpool  type                  filesystem             -
rpool  creation              Sun Nov 25 11:51 2018  -
rpool  used                  1.43T                  -
rpool  available             2.08T                  -
rpool  referenced            96K                    -
rpool  compressratio         1.11x                  -
rpool  mounted               yes                    -
rpool  quota                 none                   default
rpool  reservation           none                   default
rpool  recordsize            128K                   default
rpool  mountpoint            /mmm/rpool             default
rpool  sharenfs              off                    default
rpool  checksum              on                     default
rpool  compression           on                     local
rpool  atime                 off                    local
rpool  devices               on                     default
rpool  exec                  on                     default
rpool  setuid                on                     default
rpool  readonly              off                    default
rpool  zoned                 off                    default
rpool  snapdir               hidden                 default
rpool  aclinherit            restricted             default
rpool  createtxg             1                      -
rpool  canmount              on                     default
rpool  xattr                 sa                     local
rpool  copies                1                      default
rpool  version               5                      -
rpool  utf8only              off                    -
rpool  normalization         none                   -
rpool  casesensitivity       sensitive              -
rpool  vscan                 off                    default
rpool  nbmand                off                    default
rpool  sharesmb              off                    default
rpool  refquota              none                   default
rpool  refreservation        none                   default
rpool  guid                  7271136817003582855    -
rpool  primarycache          all                    default
rpool  secondarycache        all                    default
rpool  usedbysnapshots       0B                     -
rpool  usedbydataset         96K                    -
rpool  usedbychildren        1.43T                  -
rpool  usedbyrefreservation  0B                     -
rpool  logbias               latency                default
rpool  dedup                 off                    default
rpool  mlslabel              none                   default
rpool  sync                  standard               local
rpool  dnodesize             auto                   local
rpool  refcompressratio      1.00x                  -
rpool  written               96K                    -
rpool  logicalused           1.59T                  -
rpool  logicalreferenced     40K                    -
rpool  volmode               default                default
rpool  filesystem_limit      none                   default
rpool  snapshot_limit        none                   default
rpool  filesystem_count      none                   default
rpool  snapshot_count        none                   default
rpool  snapdev               hidden                 default
rpool  acltype               off                    default
rpool  context               none                   default
rpool  fscontext             none                   default
rpool  defcontext            none                   default
rpool  rootcontext           none                   default
rpool  relatime              off                    default
rpool  redundant_metadata    all                    default
rpool  overlay               off                    default
 
Hi, each zvol (e.g. rpool/data/vm-100-disk-0) contains everything your VM has, including the MBR, partitions, etc., so you cannot mount it directly.

You can try using the loopback method, like:
Code:
mount -o loop,offset=32256 /dev/rpool/data/vm-100-disk-0 /mnt/vm-100-disk-0-part0
If the offset is correct, it will mount the first partition.
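(For reference, assuming a 512-byte sector size, offset=32256 corresponds to the legacy MS-DOS layout where the first partition starts at sector 63; newer images usually start the first partition at sector 2048, and the exact value should be read from the partition table, as done further down in this thread.)
Code:
# 32256 assumes the legacy layout: first partition at sector 63
#   63 sectors * 512 bytes/sector = 32256 bytes
# a first partition starting at sector 2048 would instead need
#   2048 * 512 = 1048576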

But it might not work; in that case, I would try something like this:
Code:
partprobe  rpool/data/vm-100-disk-0
or
partprobe  /dev/rpool/data/vm-100-disk-0
and if new partitions show up, mount those:
Code:
mount /dev/rpool/data/vm-100-disk-0..partnumber


But in any case it should work with qemu-nbd, like so:
Code:
modprobe nbd max_part=16
qemu-nbd -c /dev/nbd0 rpool/data/vm-100-disk-0   # might have to be /dev/rpool/..-0
mount -o loop /dev/nbd0p1 /mnt/partition1
# do stuff
umount /mnt
qemu-nbd -d /dev/nbd0
rmmod nbd
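Depending on the setup, qemu-nbd may not even be needed: ZFS on Linux also exposes the zvol under /dev/zvol/, and once the kernel has re-read the partition table (e.g. after partprobe) the partitions usually show up as separate -part device nodes. A minimal sketch, assuming those nodes exist on your system:
Code:
ls -l /dev/zvol/rpool/data/
# if vm-100-disk-0-part1, -part2, ... are listed, they can be mounted directly
mount /dev/zvol/rpool/data/vm-100-disk-0-part1 /mnt/partition1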
 
Thanks a lot.

I can access the files now.

# mount -o loop, offset=576716800 /dev/rpool/data/vm-100-disk-0 /mnt/vm-100-disk-0-part2
did not run correctly for me (mount answers "bad usage"; the space after the comma splits the option list).

I paste my commands below, hope they are useful. Thanks again.

Code:
root@ubuntu:/# fdisk -lu rpool/data/vm-100-disk-0

fdisk: cannot open rpool/data/vm-100-disk-0: No such file or directory
root@ubuntu:/# fdisk -lu /dev/rpool/data/vm-100-disk-0
Disk /dev/rpool/data/vm-100-disk-0: 200 GiB, 214748364800 bytes, 419430400 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 8192 bytes
I/O size (minimum/optimal): 8192 bytes / 8192 bytes
Disklabel type: dos
Disk identifier: 0x2e6769f0

Device                          Boot   Start       End   Sectors   Size Id Type
/dev/rpool/data/vm-100-disk-0p1 *       2048   1126399   1124352   549M  7 HPFS/NTFS/exFAT
/dev/rpool/data/vm-100-disk-0p2      1126400 419428351 418301952 199.5G  7 HPFS/NTFS/exFAT

root@ubuntu:/# mount -o loop, offset=576716800 /dev/rpool/data/vm-100-disk-0 /mnt/vm-100-disk-0-part2
mount: bad usage
Try 'mount --help' for more information.

Code:
Display loop devices.
# losetup --list

Get the partition layout of the image
# fdisk -lu /dev/rpool/data/vm-100-disk-0

Calculate the offset from the start of the image to the start of the partition
Sector size * Start = (in this case) 512 * 1126400 = 576716800

Mount it on /dev/loop99 using the offset
# losetup -o 576716800 /dev/loop99 /dev/rpool/data/vm-100-disk-0
# mkdir /mnt2
# mkdir /mnt2/vm-100-disk-0p2
# mount /dev/loop99 /mnt2/vm-100-disk-0p2
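For completeness, a rough sketch of the same idea using losetup's partition scanning (-P), which avoids calculating the offset by hand, including the cleanup afterwards; the loop device name and mount point are just examples:
Code:
# create a loop device over the whole zvol and let the kernel scan its partition table
losetup -f --show -P /dev/rpool/data/vm-100-disk-0
# prints the allocated device, e.g. /dev/loop0; partitions then appear as /dev/loop0p1, /dev/loop0p2
mkdir -p /mnt2/vm-100-disk-0p2
mount /dev/loop0p2 /mnt2/vm-100-disk-0p2
# ... copy files out ...
umount /mnt2/vm-100-disk-0p2
losetup -d /dev/loop0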
 
It's a lot easier to just use kpartx to activate the partitions and mount them directly. Internally it does essentially what the commands you mentioned do, just automatically:

Code:
$ kpartx -av /dev/zvol/zpool/proxmox/vm-108-disk-0
add map vm-108-disk-0p1 (253:0): 0 2014 linear 230:0 34
add map vm-108-disk-0p2 (253:1): 0 1048576 linear 230:0 2048
add map vm-108-disk-0p3 (253:2): 0 66058207 linear 230:0 1050624

$ mount /dev/mapper/vm-108-disk-0p2 /mnt

$ df -h /mnt
Filesystem                   Size  Used Avail Use% Mounted on
/dev/mapper/vm-108-disk-0p2  511M  304K  511M   1% /mnt

$ umount /mnt
$ kpartx -dv /dev/zvol/zpool/proxmox/vm-108-disk-0
del devmap : vm-108-disk-0p3
del devmap : vm-108-disk-0p2
del devmap : vm-108-disk-0p1
 
@LnxBil
Are there any commands or tools that can reinstall Proxmox onto the old hard disks holding a ZFS RAID pool?
When a Proxmox system goes down without any hardware or ZFS pool errors, I think reinstalling Proxmox is a faster way to get back online. But when Proxmox is installed on a ZFS pool as root, I don't know how to do it. The Proxmox installer automatically rebuilds the rpool and overwrites the old rpool data, right?
I hope for a tutorial, thanks.
 

The easiest way is to continuously create snapshots of your PVE install itself; then you'll be able to just roll back to a previous state. This is recommended either way, and I'd do it before every upgrade. You can use this to roll back to a previous state or to extract files from a snapshot if needed.
Normally you can recover from almost 99.99% of all errors a user makes, so a reinstall is only necessary if you want to change some storage-related setup that cannot be changed without data loss.
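A minimal sketch of what that could look like with the rpool layout from the first post (the snapshot name is only an example):
Code:
# before an upgrade, snapshot the root dataset of the PVE install
zfs snapshot rpool/ROOT/pve-1@pre-upgrade
# list the snapshots that exist
zfs list -t snapshot -r rpool/ROOT
# if the upgrade goes wrong, roll the root dataset back
zfs rollback rpool/ROOT/pve-1@pre-upgrade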
 
LnxBil is right, snapshots make a terrible situation a non-issue through rollback. I consider this tool mandatory on all ZFS systems:
https://github.com/zfsonlinux/zfs-auto-snapshot

If you have snapshots and end up with a system that won't boot, you can use a ZFS-enabled rescue CD to do the rollback. You can get one here (or use any distro's live CD and install ZFS):
http://list.zfsonlinux.org/zfs-iso/
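From such a rescue environment, the rollback itself could look roughly like this (the snapshot name is again just an example; -R keeps the pool's datasets mounted under /mnt instead of over the rescue system):
Code:
# import the pool under an alternate root
zpool import -f -R /mnt rpool
# roll the PVE root dataset back to a known-good snapshot
zfs rollback rpool/ROOT/pve-1@known-good
# export cleanly and reboot into the restored system
zpool export rpool
reboot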
 

Yes, that is one way to go, and I also created a Debian-based live image (via livecd) for exactly this purpose.
Another way is the ZFS init scripts, which also include support for booting from snapshots: if you boot a snapshot, a new filesystem is cloned from it (snapshots are read-only) and booted. You have to enable this manually and create the GRUB entries yourself, but it is possible.
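Done by hand, the clone step described there boils down to something like the following (the clone name is made up; the matching GRUB entry still has to be created separately, as mentioned above):
Code:
# create a writable clone of the read-only snapshot; the boot entry then has to point at this clone
zfs clone rpool/ROOT/pve-1@pre-upgrade rpool/ROOT/pve-1-rescue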
 
Hello, I have a question: where did you get the offset number ("offset=576716800") in the losetup post above?
 

About

The Proxmox community has been around for many years and offers help and support for Proxmox VE, Proxmox Backup Server, and Proxmox Mail Gateway.
We think our community is one of the best thanks to people like you!

Get your subscription!

The Proxmox team works very hard to make sure you are running the best software and getting stable updates and security enhancements, as well as quick enterprise support. Tens of thousands of happy customers have a Proxmox subscription. Get yours easily in our online shop.

Buy now!