Cloned VM fails to boot

I am having the following issue.

I have a working VM; it boots and runs fine. Whenever I try to clone it (or any other, for that matter), the resulting VM can't boot, with a "no bootable drive" error.

Any ideas?

Thanks
 
What is the VM configuration file of the original VM? What is the VM configuration file of the clone? Did you clone via the Proxmox web GUI or command line (qm clone)? What is the exact error message? Anything in journalctl about the cloning or failed start of the VM?
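For reference, the command-line equivalents look roughly like this (a sketch; the VM IDs are placeholders, not the ones from this thread):

Code:
# show the configuration of a VM (replace 100 with the actual VM ID)
qm config 100
# full clone of VM 100 into a new VM with ID 101
qm clone 100 101 --full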
 
I am doing everything from the GUI.

The message I get is:

SeaBIOS (version rel-1.16.2-0-gea1b7a073390-prebuilt.qemu.org)
Machine UUID 50746860-2e53-43f6-a8c2-9e1c92e1b747
iPXE (http://ipxe.org) 00:12.0 0000 PCI2.10 PnP PMM+7EFD0D20+7EF30D20 C000
Press ESC for boot menu.
Booting from Hard Disk...
Boot failed: not a bootable disk
No bootable device. Retrying in 1 seconds.

Pressing Escape gives me the following menu:

Select boot device:
1. virtio-scsi Drive QEMU QEMU HARDDISK 2.5+
2. Legacy option rom
3. DVD/CD [ata1-0: QEMU DVD-ROM ATAPI-4 DVD/CD]
4. iPXE (PCI 00:12.0)

Pressing 1 gets me back to where I started.


And here is the VM config:

Memory: 2.00 GiB
Processors: 2 (1 sockets, 2 cores)
BIOS: Default (SeaBIOS)
Display: Serial terminal 0 (serial0)
Machine: Default (i440fx)
SCSI Controller: VirtIO SCSI
Cloudinit Drive (ide2): FileZilla:vm-4051-cloudinit,media=cdrom,size=4M
Hard Disk (scsi0): FileZilla:vm-4051-disk-0,size=3584M
Network Device (net0): virtio=BC:24:11:76:D0:74,bridge=vmbr1
Serial Port (serial0): socket
 
I am doing everything from the GUI.
Please use the command line (SSH or Proxmox host console) to provide the requested information. Without concrete information, it's near impossible to answer your question.
What is the output of qm config followed by the VM number of the original VM? What is the output of qm config followed by the VM number of the cloned VM (like qm config 100 if the VM number is 100)? What is the exact error message from the Task Log? What is in journalctl (use the arrow keys to scroll) around the time of cloning the VM and around the time of the failed start of the cloned VM?
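To narrow the journal down to the relevant window, something along these lines should work (a sketch; the timestamps are placeholders to adjust to the time of the clone and of the failed start):

Code:
# show only journal entries within a given time window
journalctl --since "2024-01-13 16:50" --until "2024-01-13 17:20"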
 
Here are the qm config outputs:

Code:
root@saori:~# qm config 9000
boot: c
bootdisk: scsi0
cipassword: **********
ciuser:
cores: 2
ide2: FileZilla2:vm-9000-cloudinit,media=cdrom
memory: 2048
meta: creation-qemu=8.1.2,ctime=1701100366
name: ubuntu-2310
nameserver: 10.10.10.10
net0: virtio=BC:24:11:23:1C:6D,bridge=vmbr0
scsi0: FileZilla2:vm-9000-disk-0,size=3584M
scsihw: virtio-scsi-pci
serial0: socket
smbios1: uuid=66b89953-2d13-4d38-bc58-e497617210fa
vga: serial0
vmgenid: 61f57439-5570-4794-ba0f-7d32b508277a


root@saori:~# qm config 4051
agent: 1,fstrim_cloned_disks=1
boot: c
bootdisk: scsi0
cipassword: **********
ciuser:
cores: 2
ide2: FileZilla:vm-4051-cloudinit,media=cdrom,size=4M
ipconfig0: ip=10.40.0.50/24,gw=10.40.0.10
memory: 2048
meta: creation-qemu=8.1.2,ctime=1701100366
name: Saga
nameserver: 10.10.10.10
net0: virtio=BC:24:11:76:D0:74,bridge=vmbr1
scsi0: FileZilla:vm-4051-disk-0,size=3584M
scsihw: virtio-scsi-pci
serial0: socket
smbios1: uuid=50746860-2e53-43f6-a8c2-9e1c92e1b747
vga: serial0
vmgenid: 4e474859-e857-4b1c-8558-25a2346851a0
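
# (a sketch: a quick way to compare the two configs side by side, using the VM IDs above)
diff <(qm config 9000) <(qm config 4051)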
 
And the journalctl:

Code:
Jan 13 17:02:34 saori smartd[2673]: Device: /dev/sdb [SAT], SMART Usage Attribute: 190 Drive_Temperature changed from 74 to 75
Jan 13 17:02:34 saori smartd[2673]: Device: /dev/sde [SAT], SMART Usage Attribute: 194 Temperature_Celsius changed from 70 to 69
Jan 13 17:07:31 saori pvedaemon[849167]: <root@pam> successful auth for user 'root@pam'
Jan 13 17:07:39 saori pvedaemon[849167]: <root@pam> starting task UPID:saori:003BA395:05215732:65A2B54B:qmstart:4050:root@pam:
Jan 13 17:07:39 saori pvedaemon[3908501]: start VM 4050: UPID:saori:003BA395:05215732:65A2B54B:qmstart:4050:root@pam:
Jan 13 17:07:39 saori systemd[1]: Started 4050.scope.
Jan 13 17:07:40 saori kernel: tap4050i0: entered promiscuous mode
Jan 13 17:07:40 saori kernel: vmbr1: port 4(tap4050i0) entered blocking state
Jan 13 17:07:40 saori kernel: vmbr1: port 4(tap4050i0) entered disabled state
Jan 13 17:07:40 saori kernel: tap4050i0: entered allmulticast mode
Jan 13 17:07:40 saori kernel: vmbr1: port 4(tap4050i0) entered blocking state
Jan 13 17:07:40 saori kernel: vmbr1: port 4(tap4050i0) entered forwarding state
Jan 13 17:07:40 saori pvedaemon[849167]: <root@pam> end task UPID:saori:003BA395:05215732:65A2B54B:qmstart:4050:root@pam: OK
Jan 13 17:08:00 saori pvedaemon[3908647]: starting vnc proxy UPID:saori:003BA427:05215F9F:65A2B560:vncproxy:4050:root@pam:
Jan 13 17:08:00 saori pvedaemon[849168]: <root@pam> starting task UPID:saori:003BA427:05215F9F:65A2B560:vncproxy:4050:root@pam:
Jan 13 17:08:28 saori pvedaemon[849168]: <root@pam> starting task UPID:saori:003BA482:05216A8F:65A2B57C:qmshutdown:4050:root@pam:
Jan 13 17:08:28 saori pvedaemon[3908738]: shutdown VM 4050: UPID:saori:003BA482:05216A8F:65A2B57C:qmshutdown:4050:root@pam:
Jan 13 17:09:28 saori pvedaemon[3908738]: VM quit/powerdown failed - got timeout
Jan 13 17:09:28 saori pvedaemon[849168]: <root@pam> end task UPID:saori:003BA482:05216A8F:65A2B57C:qmshutdown:4050:root@pam: VM quit/pow>
Jan 13 17:11:38 saori pvedaemon[3909557]: stop VM 4050: UPID:saori:003BA7B5:0521B4C1:65A2B63A:qmstop:4050:root@pam:
Jan 13 17:11:38 saori pvedaemon[849168]: <root@pam> starting task UPID:saori:003BA7B5:0521B4C1:65A2B63A:qmstop:4050:root@pam:
Jan 13 17:11:38 saori kernel: tap4050i0: left allmulticast mode
Jan 13 17:11:38 saori kernel: vmbr1: port 4(tap4050i0) entered disabled state
Jan 13 17:11:38 saori qmeventd[2674]: read: Connection reset by peer
Jan 13 17:11:38 saori pvedaemon[849167]: VM 4050 qmp command failed - VM 4050 not running
Jan 13 17:11:39 saori systemd[1]: 4050.scope: Deactivated successfully.
Jan 13 17:11:39 saori systemd[1]: 4050.scope: Consumed 13.984s CPU time.
Jan 13 17:11:39 saori pvedaemon[849168]: <root@pam> end task UPID:saori:003BA427:05215F9F:65A2B560:vncproxy:4050:root@pam: OK
Jan 13 17:11:39 saori pvedaemon[849168]: <root@pam> end task UPID:saori:003BA7B5:0521B4C1:65A2B63A:qmstop:4050:root@pam: OK
Jan 13 17:11:39 saori qmeventd[3909572]: Starting cleanup for 4050
Jan 13 17:11:39 saori qmeventd[3909572]: Finished cleanup for 4050
Jan 13 17:15:40 saori pvedaemon[849167]: <root@pam> starting task UPID:saori:003BACA5:05221359:65A2B72C:qmrestore:4050:root@pam:
Jan 13 17:15:46 saori pvedaemon[849167]: <root@pam> end task UPID:saori:003BACA5:05221359:65A2B72C:qmrestore:4050:root@pam: OK
Jan 13 17:16:15 saori pvedaemon[849168]: <root@pam> starting task UPID:saori:003BAD85:052220EF:65A2B74F:qmstart:4050:root@pam:
Jan 13 17:16:15 saori pvedaemon[3911045]: start VM 4050: UPID:saori:003BAD85:052220EF:65A2B74F:qmstart:4050:root@pam:
Jan 13 17:16:15 saori systemd[1]: Started 4050.scope.
Jan 13 17:16:16 saori kernel: tap4050i0: entered promiscuous mode
Jan 13 17:16:16 saori kernel: vmbr1: port 4(tap4050i0) entered blocking state
Jan 13 17:16:16 saori kernel: vmbr1: port 4(tap4050i0) entered disabled state
Jan 13 17:16:16 saori kernel: tap4050i0: entered allmulticast mode
Jan 13 17:16:16 saori kernel: vmbr1: port 4(tap4050i0) entered blocking state
Jan 13 17:16:16 saori kernel: vmbr1: port 4(tap4050i0) entered forwarding state
Jan 13 17:16:16 saori pvedaemon[849168]: <root@pam> end task UPID:saori:003BAD85:052220EF:65A2B74F:qmstart:4050:root@pam: OK
Jan 13 17:16:21 saori pvedaemon[3911123]: starting vnc proxy UPID:saori:003BADD3:0522234F:65A2B755:vncproxy:4050:root@pam:
Jan 13 17:16:21 saori pvedaemon[849168]: <root@pam> starting task UPID:saori:003BADD3:0522234F:65A2B755:vncproxy:4050:root@pam:
Jan 13 17:16:50 saori pvedaemon[849167]: <root@pam> starting task UPID:saori:003BAE58:05222EB9:65A2B772:qmshutdown:4050:root@pam:
Jan 13 17:16:50 saori pvedaemon[3911256]: shutdown VM 4050: UPID:saori:003BAE58:05222EB9:65A2B772:qmshutdown:4050:root@pam:
Jan 13 17:16:56 saori pvedaemon[3911265]: stop VM 4050: UPID:saori:003BAE61:05223108:65A2B778:qmstop:4050:root@pam:
Jan 13 17:16:56 saori pvedaemon[849166]: <root@pam> starting task UPID:saori:003BAE61:05223108:65A2B778:qmstop:4050:root@pam:
Jan 13 17:17:01 saori CRON[3911298]: pam_unix(cron:session): session opened for user root(uid=0) by (uid=0)
Jan 13 17:17:01 saori CRON[3911299]: (root) CMD (cd / && run-parts --report /etc/cron.hourly)
Jan 13 17:17:01 saori CRON[3911298]: pam_unix(cron:session): session closed for user root
Jan 13 17:17:06 saori pvedaemon[3911265]: can't lock file '/var/lock/qemu-server/lock-4050.conf' - got timeout
Jan 13 17:17:06 saori pvedaemon[849166]: <root@pam> end task UPID:saori:003BAE61:05223108:65A2B778:qmstop:4050:root@pam: can't lock file>
Jan 13 17:17:09 saori pvedaemon[3911347]: starting vnc proxy UPID:saori:003BAEB3:052235F9:65A2B785:vncproxy:4050:root@pam:
Jan 13 17:17:09 saori pvedaemon[849166]: <root@pam> starting task UPID:saori:003BAEB3:052235F9:65A2B785:vncproxy:4050:root@pam:
Jan 13 17:17:14 saori pvedaemon[849168]: <root@pam> end task UPID:saori:003BADD3:0522234F:65A2B755:vncproxy:4050:root@pam: OK
Jan 13 17:17:22 saori pvedaemon[3911394]: stop VM 4050: UPID:saori:003BAEE2:05223B3A:65A2B792:qmstop:4050:root@pam:
Jan 13 17:17:22 saori pvedaemon[849167]: <root@pam> starting task UPID:saori:003BAEE2:05223B3A:65A2B792:qmstop:4050:root@pam:
Jan 13 17:17:32 saori pvedaemon[3911394]: can't lock file '/var/lock/qemu-server/lock-4050.conf' - got timeout
Jan 13 17:17:32 saori pvedaemon[849167]: <root@pam> end task UPID:saori:003BAEE2:05223B3A:65A2B792:qmstop:4050:root@pam: can't lock file>
Jan 13 17:17:50 saori pvedaemon[3911256]: VM quit/powerdown failed - got timeout
Jan 13 17:17:50 saori pvedaemon[849167]: <root@pam> end task UPID:saori:003BAE58:05222EB9:65A2B772:qmshutdown:4050:root@pam: VM quit/pow>
Jan 13 17:17:52 saori pvedaemon[849166]: <root@pam> starting task UPID:saori:003BAF79:052246A3:65A2B7B0:vncshell::root@pam:
Jan 13 17:17:52 saori pvedaemon[3911545]: starting termproxy UPID:saori:003BAF79:052246A3:65A2B7B0:vncshell::root@pam:
Jan 13 17:17:52 saori pvedaemon[849167]: <root@pam> successful auth for user 'root@pam'
Jan 13 17:17:52 saori login[3911548]: pam_unix(login:session): session opened for user root(uid=0) by root(uid=0)
Jan 13 17:17:52 saori systemd[1]: Created slice user-0.slice - User Slice of UID 0.
Jan 13 17:17:52 saori systemd[1]: Starting user-runtime-dir@0.service - User Runtime Directory /run/user/0...
Jan 13 17:17:52 saori systemd-logind[2683]: New session 279 of user root.
Jan 13 17:17:52 saori systemd[1]: Finished user-runtime-dir@0.service - User Runtime Directory /run/user/0.
Jan 13 17:17:52 saori systemd[1]: Starting user@0.service - User Manager for UID 0...
Jan 13 17:17:52 saori (systemd)[3911554]: pam_unix(systemd-user:session): session opened for user root(uid=0) by (uid=0)
Jan 13 17:17:52 saori systemd[3911554]: Queued start job for default target default.target.
Jan 13 17:17:52 saori systemd[3911554]: Created slice app.slice - User Application Slice.
Jan 13 17:17:52 saori systemd[3911554]: Reached target paths.target - Paths.
Jan 13 17:17:52 saori systemd[3911554]: Reached target timers.target - Timers.
Jan 13 17:17:52 saori systemd[3911554]: Starting dbus.socket - D-Bus User Message Bus Socket...
Jan 13 17:17:52 saori systemd[3911554]: Listening on dirmngr.socket - GnuPG network certificate management daemon.
Jan 13 17:17:52 saori systemd[3911554]: Listening on gpg-agent-browser.socket - GnuPG cryptographic agent and passphrase cache (access f>
Jan 13 17:17:52 saori systemd[3911554]: Listening on gpg-agent-extra.socket - GnuPG cryptographic agent and passphrase cache (restricted>
Jan 13 17:17:52 saori systemd[3911554]: Listening on gpg-agent.socket - GnuPG cryptographic agent and passphrase cache.
Jan 13 17:17:52 saori systemd[3911554]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Jan 13 17:17:52 saori systemd[3911554]: Reached target sockets.target - Sockets.
Jan 13 17:17:52 saori systemd[3911554]: Reached target basic.target - Basic System.
Jan 13 17:17:52 saori systemd[3911554]: Reached target default.target - Main User Target.
Jan 13 17:17:52 saori systemd[3911554]: Startup finished in 346ms.
Jan 13 17:17:52 saori systemd[1]: Started user@0.service - User Manager for UID 0.
Jan 13 17:17:52 saori systemd[1]: Started session-279.scope - Session 279 of User root.
Jan 13 17:17:52 saori login[3911571]: ROOT LOGIN  on '/dev/pts/0'
Jan 13 17:18:00 saori pvedaemon[849166]: <root@pam> end task UPID:saori:003BAEB3:052235F9:65A2B785:vncproxy:4050:root@pam: OK
Jan 13 17:18:18 saori systemd[1]: session-279.scope: Deactivated successfully.
Jan 13 17:18:18 saori systemd[1]: session-279.scope: Consumed 3.591s CPU time.
Jan 13 17:18:18 saori systemd-logind[2683]: Session 279 logged out. Waiting for processes to exit.
Jan 13 17:18:18 saori systemd-logind[2683]: Removed session 279.
Jan 13 17:18:18 saori pvedaemon[849166]: <root@pam> end task UPID:saori:003BAF79:052246A3:65A2B7B0:vncshell::root@pam: OK
Jan 13 17:18:18 saori pvedaemon[849167]: <root@pam> starting task UPID:saori:003BB254:0522511B:65A2B7CA:vncproxy:4050:root@pam:
Jan 13 17:18:18 saori pvedaemon[3912276]: starting vnc proxy UPID:saori:003BB254:0522511B:65A2B7CA:vncproxy:4050:root@pam:
Jan 13 17:18:23 saori pvedaemon[849166]: <root@pam> starting task UPID:saori:003BB278:052252C1:65A2B7CF:qmstop:4050:root@pam:
Jan 13 17:18:23 saori pvedaemon[3912312]: stop VM 4050: UPID:saori:003BB278:052252C1:65A2B7CF:qmstop:4050:root@pam:
Jan 13 17:18:23 saori kernel: tap4050i0: left allmulticast mode
Jan 13 17:18:23 saori kernel: vmbr1: port 4(tap4050i0) entered disabled state
Jan 13 17:18:23 saori qmeventd[2674]: read: Connection reset by peer
Jan 13 17:18:23 saori pvedaemon[849167]: VM 4050 qmp command failed - VM 4050 not running
Jan 13 17:18:23 saori systemd[1]: 4050.scope: Deactivated successfully.
Jan 13 17:18:23 saori systemd[1]: 4050.scope: Consumed 7.617s CPU time.
Jan 13 17:18:23 saori pvedaemon[849167]: <root@pam> end task UPID:saori:003BB254:0522511B:65A2B7CA:vncproxy:4050:root@pam: OK
Jan 13 17:18:24 saori pvedaemon[849166]: <root@pam> end task UPID:saori:003BB278:052252C1:65A2B7CF:qmstop:4050:root@pam: OK
Jan 13 17:18:24 saori qmeventd[3912326]: Starting cleanup for 4050
 
It is worth noting that although the cloning process does not show any errors, the disk of the cloned VM is not the same size as the original VM's.
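A quick way to double-check that would be to compare the underlying volumes on both storages (a sketch; storage names and VM IDs taken from the configs above):

Code:
# list the volumes on each storage and compare the reported sizes
pvesm list FileZilla2 | grep vm-9000
pvesm list FileZilla | grep vm-4051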
 