Can't start a qcow2 imported disk in my VM

EliosMazer

New Member
Jan 17, 2024
Hello!

I've exported my VM using the following command:
Code:
qemu-img convert -O qcow2 /mnt/pve/SSD-TEGRA/images/108/vm-108-disk-0.qcow2 myvm108.qcow2
I've imported it as a disk of a new VM (107) using the following command:
Code:
qm importdisk 107 /mnt/pve/HDD-ENIAC/backups/myvm108.qcow2 local-lvm
Now I've attached the disk, enabled it, and set it first in the boot order, but the VM BIOS shows "No bootable device".

Here is the VM config file:

Code:
agent: 1
boot: order=virtio0
cores: 2
cpu: x86-64-v2-AES
ide2: none,media=cdrom
memory: 2048
meta: creation-qemu=8.1.2,ctime=1705504835
name: Discord-Bot
net0: virtio=BC:24:11:D4:C5:CF,bridge=vmbr0,firewall=1
numa: 0
ostype: l26
scsihw: virtio-scsi-single
smbios1: uuid=b229d401-fa9a-47c8-b2de-e18c403d1a65
sockets: 1
virtio0: local-lvm:vm-107-disk-0,size=16388M
vmgenid: c9d44839-41d8-42e1-ab81-d99a6a08fe2a

Can anyone help me with this, please?
 
Hi,
please share the output of the following:
Code:
qm config 108
pveversion -v
fdisk -l /dev/mapper/pve-vm--107--disk--0
wipefs /dev/mapper/pve-vm--107--disk--0
Note that without additional flags, wipefs will only list filesystem/partition signatures and not actually wipe anything.
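To illustrate, here is a minimal sketch on a throwaway file, which is safe to run anywhere precisely because plain wipefs is read-only:

```shell
# Create an empty 1 MiB scratch file: it carries no filesystem signatures,
# so wipefs prints nothing, just as on a blank disk.
f=$(mktemp)
truncate -s 1M "$f"
wipefs "$f"    # read-only scan: no output means no signatures found
# Only `wipefs -a "$f"` would actually erase signatures.
rm -f "$f"
```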

EDIT: removed the wrong commands for the qcow2 disk; the correct ones are below.
 
Here are the command results:

qm config 108:

Code:
agent: 1
boot: order=sata0
cores: 8
cpu: x86-64-v2-AES
ide2: none,media=cdrom
memory: 2048
meta: creation-qemu=8.1.2,ctime=1705577505
name: Discord-Bot
net0: virtio=BC:24:11:6C:63:95,bridge=vmbr0,firewall=1
numa: 0
ostype: l26
sata0: local-lvm:vm-108-disk-0,size=16388M
scsihw: virtio-scsi-single
smbios1: uuid=3c7bb3cb-1d85-44f1-ae9d-6b1af84aeddc
sockets: 1
unused1: HDD-ENIAC:108/vm-108-disk-0.qcow2
vmgenid: a7c5b71c-9f6d-454e-bfc6-6919b303ee2e

pveversion -v:

Code:
proxmox-ve: 8.1.0 (running kernel: 6.5.11-7-pve)
pve-manager: 8.1.4 (running version: 8.1.4/ec5affc9e41f1d79)
proxmox-kernel-helper: 8.1.0
proxmox-kernel-6.5: 6.5.11-7
proxmox-kernel-6.5.11-7-pve-signed: 6.5.11-7
proxmox-kernel-6.5.11-4-pve-signed: 6.5.11-4
ceph-fuse: 17.2.7-pve1
corosync: 3.1.7-pve3
criu: 3.17.1-2
glusterfs-client: 10.3-5
ifupdown2: 3.2.0-1+pmx8
ksm-control-daemon: 1.4-1
libjs-extjs: 7.0.0-4
libknet1: 1.28-pve1
libproxmox-acme-perl: 1.5.0
libproxmox-backup-qemu0: 1.4.1
libproxmox-rs-perl: 0.3.3
libpve-access-control: 8.0.7
libpve-apiclient-perl: 3.3.1
libpve-common-perl: 8.1.0
libpve-guest-common-perl: 5.0.6
libpve-http-server-perl: 5.0.5
libpve-network-perl: 0.9.5
libpve-rs-perl: 0.8.8
libpve-storage-perl: 8.0.5
libspice-server1: 0.15.1-1
lvm2: 2.03.16-2
lxc-pve: 5.0.2-4
lxcfs: 5.0.3-pve4
novnc-pve: 1.4.0-3
proxmox-backup-client: 3.1.2-1
proxmox-backup-file-restore: 3.1.2-1
proxmox-kernel-helper: 8.1.0
proxmox-mail-forward: 0.2.2
proxmox-mini-journalreader: 1.4.0
proxmox-offline-mirror-helper: 0.6.4
proxmox-widget-toolkit: 4.1.3
pve-cluster: 8.0.5
pve-container: 5.0.8
pve-docs: 8.1.3
pve-edk2-firmware: 4.2023.08-2
pve-firewall: 5.0.3
pve-firmware: 3.9-1
pve-ha-manager: 4.0.3
pve-i18n: 3.1.5
pve-qemu-kvm: 8.1.2-6
pve-xtermjs: 5.3.0-3
qemu-server: 8.0.10
smartmontools: 7.3-pve1
spiceterm: 3.3.0
swtpm: 0.8.0+pve1
vncterm: 1.8.0
zfsutils-linux: 2.2.2-pve1

fdisk -l /mnt/pve/HDD-ENIAC/backups/myvm108.qcow2:

Code:
Disk /mnt/pve/HDD-ENIAC/backups/myvm108.qcow2: 7.46 GiB, 8009744384 bytes, 15644032 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

fdisk -l /mnt/pve/SSD-TEGRA/images/108/vm-108-disk-0.qcow2:

I can't (I no longer have access to that host)

fdisk -l /dev/mapper/pve-vm--108--disk--0:

Code:
Disk /dev/mapper/pve-vm--108--disk--0: 16 GiB, 17184063488 bytes, 33562624 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 65536 bytes / 65536 bytes

wipefs /mnt/pve/HDD-ENIAC/backups/myvm108.qcow2:

No output

wipefs /mnt/pve/SSD-TEGRA/images/108/vm-108-disk-0.qcow2:

I can't (I no longer have access to that host)

wipefs /dev/mapper/pve-vm--108--disk--0:

No output
 
>virtio0: local-lvm:vm-107-disk-0,size=16388M

Attach the disk as IDE and boot. As far as I know, there are no virtio drivers inside vm-107-disk-0.
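A hedged sketch of how that could look with qm (VM ID and storage/volume names are taken from the config quoted above; double-check them before running):

```shell
# Detach the disk from the virtio bus and re-attach it on IDE, then boot
# from it. IDE needs no paravirtual drivers inside the guest.
qm set 107 --delete virtio0
qm set 107 --ide0 local-lvm:vm-107-disk-0
qm set 107 --boot order=ide0
```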
Hi, I've already attached the disk and marked it as bootable in Options > Boot Order
[Screenshot attachment: sata_attached.PNG]

EDIT: I've created a new VM (108) instead of 107; that's why the ID is different from my first post
 
So I don't understand what's going on here :)

>virtio0: local-lvm:vm-107-disk-0,size=16388M -- is this the old working config, or the one that isn't working?
Maybe the root disk UUID changed, so it won't boot
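One way to check that theory, sketched here (the fstab path and device names are assumptions; the guest filesystem would first have to be mounted on the host, e.g. from an nbd mapping):

```shell
# List the filesystem UUIDs the guest's fstab refers to. If these don't
# match what blkid reports for the actual partitions, the guest can't
# find its root filesystem and won't boot.
list_fstab_uuids() {
    grep -o 'UUID=[0-9A-Fa-f-]*' "$1" | sort -u
}
# Example usage (paths are assumptions):
#   list_fstab_uuids /mnt/guestroot/etc/fstab
#   blkid /dev/nbd0p1   # compare against the disk's real UUIDs (needs root)
```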
It's a VM (107) on the new host that has the same problem, but in order to run the commands Fiona requested earlier, I recreated a VM (108) from scratch on this same host.
 
fdisk -l /dev/mapper/pve-vm--108--disk--0:

Code:
Disk /dev/mapper/pve-vm--108--disk--0: 16 GiB, 17184063488 bytes, 33562624 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 65536 bytes / 65536 bytes
wipefs /dev/mapper/pve-vm--108--disk--0:

No output
These commands should show any partitions and filesystem labels on the disk, but it seems there are none. What filesystems/partitions do you expect on the disk? You could still try a tool like https://www.cgsecurity.org/wiki/TestDisk and see if it can detect anything.

EDIT: oh sorry. For the qcow2 disk you'll need different commands than the above, because fdisk and wipefs are not aware of that format. I'll see if I can find them.
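In the meantime, one sanity check needs no qcow2-aware tooling at all: every qcow2 file starts with the 4-byte magic QFI\xfb, so you can at least confirm the export produced a real qcow2 image rather than something else. A sketch (`qemu-img info <file>` gives the same answer more thoroughly):

```shell
# Print "qcow2" if the file starts with the qcow2 magic bytes
# 51 46 49 fb ("QFI\xfb"), otherwise "not qcow2".
is_qcow2() {
    magic=$(head -c 4 "$1" | od -An -tx1 | tr -d ' \n')
    if [ "$magic" = "514649fb" ]; then
        echo "qcow2"
    else
        echo "not qcow2"
    fi
}
# Example usage: is_qcow2 /mnt/pve/HDD-ENIAC/backups/myvm108.qcow2
```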
 
To check the qcow2 disk, you can use:
Code:
modprobe nbd max_part=63
qemu-nbd -n -r -c /dev/nbd0 /mnt/pve/HDD-ENIAC/backups/discordbot.qcow2
wipefs /dev/nbd0
fdisk -l /dev/nbd0
qemu-nbd -d /dev/nbd0
 
To check the qcow2 disk, you can use:
Code:
modprobe nbd max_part=63
qemu-nbd -n -r -c /dev/nbd0 /mnt/pve/HDD-ENIAC/backups/discordbot.qcow2
wipefs /dev/nbd0
fdisk -l /dev/nbd0
qemu-nbd -d /dev/nbd0
Here are the results:

Code:
root@hyperviseur-001:~# modprobe nbd max_part=63
root@hyperviseur-001:~# qemu-nbd -n -r -c /dev/nbd0 /mnt/pve/HDD-ENIAC/backups/discordbot.qcow2
root@hyperviseur-001:~# wipefs /dev/nbd0
root@hyperviseur-001:~# fdisk -l /dev/nbd0
Disk /dev/nbd0: 16 GiB, 17182752768 bytes, 33560064 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
root@hyperviseur-001:~# qemu-nbd -d /dev/nbd0
/dev/nbd0 disconnected
 
