[SOLVED] Restore vm after new installation proxmox

wxipn

Hi,
I have a fresh installation of Proxmox. I would like to put my qcow2 back in /var/lib/vz/images/100 and attach it to the VM whose hard disk was deleted, but I can't:
Code:
qm importdisk 100 /var/lib/vz/images/100/vm-100-disk-0.qcow2 local

Message:
Code:
importing disk '/var/lib/vz/images/100/vm-100-disk-0.qcow2' to VM 100 ...
Formatting '/var/lib/vz/images/100/vm-100-disk-1.raw', fmt=raw size=52177043456
transferred: 0 bytes remaining: 52177043456 bytes total: 52177043456 bytes progression: 0.00 %
qemu-img: Could not open '/var/lib/vz/images/100/vm-100-disk-0.qcow2': Image is not in qcow2 format
copy failed: command '/usr/bin/qemu-img convert -p -n -f qcow2 -O raw /var/lib/vz/images/100/vm-100-disk-0.qcow2 zeroinit:/var/lib/vz/images/100/vm-100-disk-1.raw' failed: exit code 1
Code:
qemu-img info /var/lib/vz/images/100/vm-100-disk-0.qcow2
image: /var/lib/vz/images/100/vm-100-disk-0.qcow2
file format: raw
virtual size: 48.6 GiB (52177043456 bytes)
disk size: 48.6 GiB
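The `qemu-img info` output above explains the import failure: the file actually contains a raw image even though its name ends in .qcow2, and `qemu-img convert` was apparently invoked with `-f qcow2` based on the file name. One way out (a sketch; it matches what is tried later in this thread) is to rename the file so the extension matches the real format and re-run the import:

```shell
# The content is raw, so give the file an extension matching its real format,
# then re-run the import so the source format is detected correctly
mv /var/lib/vz/images/100/vm-100-disk-0.qcow2 /var/lib/vz/images/100/vm-100-disk-0.raw
qm importdisk 100 /var/lib/vz/images/100/vm-100-disk-0.raw local
```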
 
When I try to start it:
Code:
kvm: -drive file=/var/lib/vz/images/100/vm-100-disk-0.qcow2,if=none,id=drive-scsi0,format=qcow2,cache=none,aio=native,detect-zeroes=on: Image is not in qcow2 format
TASK ERROR: start failed: QEMU exited with code 1
 
I did this:
Code:
qm importdisk 100 /var/lib/vz/images/100/vm-100-disk-0.raw local
The system created vm-100-disk-1.raw in the same place, but the VM boots into iPXE :(
 
In my /etc/pve/storage.cfg I have only:
Code:
dir: local
    path /var/lib/vz
    content images,vztmpl,snippets,iso,rootdir
    maxfiles 0

dir: Sauvegarde
    path /dump
    content backup
    maxfiles 3
    nodes ns3368862
    shared 0
 
The system created vm-100-disk-1.raw in the same place, but the VM boots into iPXE :(

Edit your VM's Options in the web interface: there is a "Boot Order" option; edit it and set the newly imported disk as the first option.
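The same can be done from the CLI. The exact syntax depends on the PVE release (the `order=` form is the newer one), so treat this as a sketch:

```shell
# Attach the imported disk (if not already attached) and boot from it first
qm set 100 --scsi0 local:100/vm-100-disk-1.raw
# newer PVE releases:
qm set 100 --boot order=scsi0
# older releases used:
# qm set 100 --boot c --bootdisk scsi0
```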
 
The order is OK: disk 'scsi0' first, then CD-ROM and network.
It looks like I've lost my work :'(
 
Post the output of qm config VMID for your VM. Also, are you sure you recreated the same config the VM had previously?
E.g., not that it was using UEFI before but SeaBIOS now, ...

Else, I hope you made frequent backups, which you should be able to restore now.
I'm sorry for you, but plain re-installation in the face of errors isn't a good idea if the cause of the errors is still unknown...
 
Code:
bootdisk: scsi0
cores: 2
memory: 5024
name: Gsxr
net0: virtio=02:00:00:58:db:0b,bridge=vmbr0,firewall=1
numa: 0
onboot: 1
ostype: l26
scsi0: local:100/vm-100-disk-1.raw,size=50954144K
scsihw: virtio-scsi-pci
smbios1: uuid=8523c972-b7f5-457d-994b-f7f12baca61e
sockets: 1
vmgenid: e1d31fc5-3756-4eda-8631-726b358e972a
 
What was running on the VM before the reinstallation? Which operating system? Which applications?

Just trying to guess whether it might be a UEFI/BIOS boot issue?

In any case make sure to have a backup of the files before modifying anything there.
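For a dir storage this can be as simple as copying the image file; `--sparse=always` keeps the copy from ballooning to the full virtual size. A sketch (the destination path is just an example):

```shell
# Keep an untouched copy of the image before experimenting
cp --sparse=always /var/lib/vz/images/100/vm-100-disk-0.qcow2 /root/vm-100-disk-0.qcow2.bak
# verify the copy is identical
md5sum /var/lib/vz/images/100/vm-100-disk-0.qcow2 /root/vm-100-disk-0.qcow2.bak
```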
 
A Debian TurnKey VM with WordPress.
I tried to change the BIOS to UEFI, but it boots into a shell.
And I have nothing other than the 50 GB qcow2 file as a backup.
I was in the middle of setting up a backup system when this happened.
 
A Debian TurnKey VM with WordPress.
TurnKey templates in PVE are usually containers, not VMs. I just heard that they offer QEMU images as well. Just to make sure we know what we're talking about: could you show us where you originally downloaded the template from?
 
Code:
root@ns3368862:~# systemctl status -l pve-cluster
● pve-cluster.service - The Proxmox VE cluster filesystem
   Loaded: loaded (/lib/systemd/system/pve-cluster.service; enabled; vendor preset: enabled)
   Active: active (running) since Sun 2020-06-07 19:52:51 UTC; 17h ago
Main PID: 16289 (pmxcfs)
    Tasks: 7 (limit: 4915)
   Memory: 63.3M
   CGroup: /system.slice/pve-cluster.service
           └─16289 /usr/bin/pmxcfs

Jun 07 19:52:50 ns3368862 systemd[1]: Starting The Proxmox VE cluster filesystem...
Jun 07 19:52:51 ns3368862 systemd[1]: Started The Proxmox VE cluster filesystem.

Code:
root@ns3368862:~# ls -lahtr /var/lib/pve-cluster/
total 4.0M
drwxr-xr-x 38 root root 4.0K Jun  7 19:52 ..
-rw-------  1 root root    0 Jun  7 19:52 .pmxcfs.lockfile
drwxr-xr-x  2 root root 4.0K Jun  7 19:52 .
-rw-------  1 root root  28K Jun  8 13:44 config.db
-rw-------  1 root root  32K Jun  8 13:47 config.db-shm
-rw-------  1 root root 4.0M Jun  8 13:48 config.db-wal

Code:
root@ns3368862:~# find /etc/pve
/etc/pve
/etc/pve/.debug
/etc/pve/local
/etc/pve/.version
/etc/pve/.rrd
/etc/pve/.vmlist
/etc/pve/openvz
/etc/pve/lxc
/etc/pve/.clusterlog
/etc/pve/qemu-server
/etc/pve/.members
/etc/pve/ha
/etc/pve/pve-www.key
/etc/pve/priv
/etc/pve/priv/authorized_keys
/etc/pve/priv/lock
/etc/pve/priv/authkey.key
/etc/pve/priv/acme
/etc/pve/priv/known_hosts
/etc/pve/priv/pve-root-ca.key
/etc/pve/priv/pve-root-ca.srl
/etc/pve/authkey.pub
/etc/pve/sdn
/etc/pve/storage.cfg
/etc/pve/pve-root-ca.pem
/etc/pve/vzdump.cron
/etc/pve/nodes
/etc/pve/nodes/ns3368862
/etc/pve/nodes/ns3368862/priv
/etc/pve/nodes/ns3368862/openvz
/etc/pve/nodes/ns3368862/qemu-server
/etc/pve/nodes/ns3368862/qemu-server/100.conf
/etc/pve/nodes/ns3368862/qemu-server/101.conf
/etc/pve/nodes/ns3368862/pve-ssl.pem
/etc/pve/nodes/ns3368862/lrm_status
/etc/pve/nodes/ns3368862/pve-ssl.key
/etc/pve/nodes/ns3368862/lxc
/etc/pve/virtual-guest

No, it's not a template. I downloaded an ISO from TurnKey, uploaded it into the storage, and created a VM from this ISO.
https://www.turnkeylinux.org/wordpress
 
Hmm, two things you could still try:
* Change the BIOS from SeaBIOS (default) to OVMF (UEFI). If the machine was created as a UEFI machine, that could get it booting again.
* Closely watch the console of the VM during boot for hints about where it fails.
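Switching the firmware can also be done on the CLI. Note that OVMF normally wants a small EFI vars disk; the `efidisk0` line below is an assumption about what a UEFI guest needs, not something from this thread:

```shell
# Switch the VM firmware to OVMF (UEFI) and allocate an EFI vars disk
qm set 100 --bios ovmf
qm set 100 --efidisk0 local:1
# revert to the default firmware:
# qm set 100 --bios seabios
```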

If this does not help, you can still try to boot a Linux live CD (Grml, archiso, a Debian rescue CD) in the VM and then examine the contents of the SCSI disk from within the live system.

Before taking those steps, make sure you have an unchanged backup of the VM image.

I hope this helps!