[SOLVED] Restoring from backup to new server, LXC works, VM can't boot

pcmofo

I have an old Proxmox server (v5.3-8) with a mirrored ZFS boot/VM pool running a few LXCs and VMs. I moved to a totally new server which boots from a single ext4-formatted SSD and has a separate ZFS mirrored pool just for the VMs.

To migrate everything, I backed them all up to a separate drive, mounted that drive in the new server, and then used Proxmox to restore them to the new ZFS mirrored VM pool.
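
For reference, the workflow was roughly the following (a sketch; the storage names match my config further down, and the exact dump filename will differ):
Code:
# On the old server: back up each guest to the attached backup drive
vzdump 101 --storage vmbackup --mode stop

# On the new server, after mounting the drive and adding it as storage,
# restore the VM onto the ZFS pool (containers use pct restore instead)
qmrestore /vmpool/vmbackup/dump/vzdump-qemu-101-<timestamp>.vma.lzo 101 --storage vmpool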

So far so good: all the LXCs boot up fine. None of the Windows/Linux VMs will boot.

"error: no such device: xxxxxx-xxxxx-xxxx-xxxx"

error: unknown file system, then a Grub prompt.

I checked the new vmpool: the VM disks are there as ZFS datasets, and they also show up as "Disk images" under "Storage" in the UI.

I assume something isn't linked up right. Any idea what's going on?
 
Anyone have any suggestions on this? I tried creating a cluster and transferring between the two servers, but that doesn't work between v5 and v6. I've tried editing various other VM settings and configs, re-backing up and restoring, etc. Googling this issue just turns up general solutions for when the Proxmox HOST has trouble booting, not for when a VM fails to boot. It happens on both the Windows and Linux VMs, so it's clearly a Proxmox issue and not a guest issue.
 
Please post the config of a VM which shows the error, and your /etc/pve/storage.cfg.
 
I deleted all my LXCs and VMs and tried restoring just the single VM. Here is what I get when it tries to start:
[Screenshot: VM console at boot, showing the GRUB errors described above]

/etc/pve/storage.cfg
Code:
dir: local
    path /var/lib/vz
    content iso,backup,vztmpl

lvmthin: local-lvm
    thinpool data
    vgname pve
    content images,rootdir

zfspool: vmpool
    pool vmpool/vm
    content rootdir,images
    mountpoint /vmpool/vm
    sparse 0

dir: vmbackup
    path /vmpool/vmbackup
    content backup
    maxfiles 1
    shared 0

zfspool: nvr
    pool nvr
    content images,rootdir
    mountpoint /nvr
    sparse 0

101.conf
Code:
balloon: 4096
bootdisk: ide0
cores: 6
ide0: vmpool:vm-101-disk-0,size=32G
ide2: none,media=cdrom
memory: 8192
name: NVR
net0: e1000=C6:ED:86:DB:72:5D,bridge=vmbr0
sata0: nvr:vm-101-disk-0,size=6T,backup=0
numa: 0
onboot: 1
ostype: win8
smbios1: uuid=64cea3f7-56a0-41d3-851d-7263b843b3be
sockets: 1
 
* hmm - why is the ostype set to win8 if it's a Linux guest (guessing this from the GRUB rescue prompt)?
* what's the output of `ls` in the grub-rescue prompt?
* what's the output of `zfs get all vmpool/vm/vm-101-disk-0`
* what's the output of `zfs get all nvr/vm-101-disk-0`

* how did you back up the guests? (I assume vzdump? )
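
* you can also inspect the restored zvol directly from the host - a sketch, with dataset names taken from your storage.cfg above:
Code:
# confirm the zvol exists and has a device node
zfs list -t volume -r vmpool/vm
ls -l /dev/zvol/vmpool/vm/

# check whether the guest disk still has a partition table (read-only)
fdisk -l /dev/zvol/vmpool/vm/vm-101-disk-0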
 
* hmm - why is the ostype set to win8 if it's a Linux guest (guessing this from the GRUB rescue prompt)?
Not sure, but this is set the same on the working server, and it happens with the Windows 10 VM as well.
* what's the output of `ls` in the grub-rescue prompt?
(hd0)
* what's the output of `zfs get all vmpool/vm/vm-101-disk-0`
Code:
NAME                     PROPERTY              VALUE                  SOURCE
vmpool/vm/vm-101-disk-0  type                  volume                 -
vmpool/vm/vm-101-disk-0  creation              Wed Apr 22  9:46 2020  -
vmpool/vm/vm-101-disk-0  used                  33.0G                  -
vmpool/vm/vm-101-disk-0  available             292G                   -
vmpool/vm/vm-101-disk-0  referenced            32.0G                  -
vmpool/vm/vm-101-disk-0  compressratio         1.00x                  -
vmpool/vm/vm-101-disk-0  reservation           none                   default
vmpool/vm/vm-101-disk-0  volsize               32G                    local
vmpool/vm/vm-101-disk-0  volblocksize          8K                     default
vmpool/vm/vm-101-disk-0  checksum              on                     default
vmpool/vm/vm-101-disk-0  compression           off                    default
vmpool/vm/vm-101-disk-0  readonly              off                    default
vmpool/vm/vm-101-disk-0  createtxg             95318                  -
vmpool/vm/vm-101-disk-0  copies                1                      default
vmpool/vm/vm-101-disk-0  refreservation        33.0G                  local
vmpool/vm/vm-101-disk-0  guid                  4809347907636114356    -
vmpool/vm/vm-101-disk-0  primarycache          all                    default
vmpool/vm/vm-101-disk-0  secondarycache        all                    default
vmpool/vm/vm-101-disk-0  usedbysnapshots       0B                     -
vmpool/vm/vm-101-disk-0  usedbydataset         32.0G                  -
vmpool/vm/vm-101-disk-0  usedbychildren        0B                     -
vmpool/vm/vm-101-disk-0  usedbyrefreservation  982M                   -
vmpool/vm/vm-101-disk-0  logbias               latency                default
vmpool/vm/vm-101-disk-0  objsetid              903                    -
vmpool/vm/vm-101-disk-0  dedup                 off                    default
vmpool/vm/vm-101-disk-0  mlslabel              none                   default
vmpool/vm/vm-101-disk-0  sync                  standard               default
vmpool/vm/vm-101-disk-0  refcompressratio      1.00x                  -
vmpool/vm/vm-101-disk-0  written               32.0G                  -
vmpool/vm/vm-101-disk-0  logicalused           31.9G                  -
vmpool/vm/vm-101-disk-0  logicalreferenced     31.9G                  -
vmpool/vm/vm-101-disk-0  volmode               default                default
vmpool/vm/vm-101-disk-0  snapshot_limit        none                   default
vmpool/vm/vm-101-disk-0  snapshot_count        none                   default
vmpool/vm/vm-101-disk-0  snapdev               hidden                 default
vmpool/vm/vm-101-disk-0  context               none                   default
vmpool/vm/vm-101-disk-0  fscontext             none                   default
vmpool/vm/vm-101-disk-0  defcontext            none                   default
vmpool/vm/vm-101-disk-0  rootcontext           none                   default
vmpool/vm/vm-101-disk-0  redundant_metadata    all                    default
vmpool/vm/vm-101-disk-0  encryption            off                    default
vmpool/vm/vm-101-disk-0  keylocation           none                   default
vmpool/vm/vm-101-disk-0  keyformat             none                   default
vmpool/vm/vm-101-disk-0  pbkdf2iters           0                      default
* what's the output of `zfs get all nvr/vm-101-disk-0`
Code:
cannot open 'nvr/vm-101-disk-0': dataset does not exist
(Looks like I wiped this out when I removed the VM, as I did not detach the disk prior to removing it... I assume there is a way to recover the data, but for now I have removed it from the config so the VM still loads.)
* how did you back up the guests? (I assume vzdump? )
Yes, backups were made with vzdump to a ZFS volume mounted in the old server; the volume was then detached, mounted in the new server, added to Proxmox as storage, and I used the GUI to restore.
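
For reference, an archive like this can be sanity-checked before restoring - a sketch, assuming the default dump directory and the lzo compression PVE 5 used by default:
Code:
# test the compressed archive's integrity
lzop -t /vmpool/vmbackup/dump/vzdump-qemu-101-*.vma.lzo

# decompress a copy and verify the VMA container itself
lzop -d -o /tmp/vm-101.vma /vmpool/vmbackup/dump/vzdump-qemu-101-*.vma.lzo
vma verify /tmp/vm-101.vma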
 
Some updates. I updated the old server to the latest version of Proxmox 6, created a cluster on the old box, and then added the new box to that cluster, hoping I could migrate between the two nodes.
When I ran the previously working (until this upgrade) Linux VM, I got the same issue where the VM boots to a GRUB prompt.
When I ran the Windows 10 VM on the old box, it now gives the following errors:

Code:
Hyper-V paravirtualized IPI (hv-ipi) is not supported by kernel
kvm: kvm_init_vcpu failed: Function not implemented
TASK ERROR: start failed: QEMU exited with code 1

Not sure if this is related, but it appears the underlying issue is the Proxmox v5-to-v6 update and not my specific VMs.
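
One thing that may be worth checking here - a guess on my part, not verified: the hv-ipi enlightenment needs support from the running kernel's KVM, so an upgraded host that is still running the old v5 kernel can fail exactly like this until it is rebooted:
Code:
# kernel actually running vs. kernel packages the upgrade installed
uname -r
pveversion -v | grep -i kernel

# if uname -r is older than the newest installed pve-kernel,
# reboot the host before starting Windows guests again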
 
Hmm - seems the partition table of the VM is gone. You can boot a Linux live CD and see what the contents of the disk show.
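
For example, something like this (the ISO name is just an example; upload any live ISO to local storage first):
Code:
# attach the live ISO and boot from CD-ROM
qm set 101 --ide2 local:iso/systemrescue.iso,media=cdrom
qm set 101 --boot d
qm start 101

# inside the live system, check whether a partition table is visible
fdisk -l /dev/sda
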
Thanks for replying. I've spent too many hours on this. I built new VMs and was able to attach the old VM's HDD as a secondary disk on a new VM. From there I'm copying over the files I need.
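
For anyone else doing this, attaching the old disk to another VM can be done roughly like so (the new VM ID and the bus slot are just examples):
Code:
# attach the restored zvol from the broken VM as a second disk on the new VM
qm set 102 --sata1 vmpool:vm-101-disk-0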
 
