All VMs cannot restart: qcow2 volume 'does not exist'

Ivan Gonzalez

This all happened a few hours ago; I'm not really sure of the reason just yet.

Has anyone had this problem before?

Nov 22 11:40:42 data1 task UPID:data1:00000CCD:00001498:5470BC86:vzstart:101:root@pam:: command 'vzctl start 101' failed: exit code 62
Nov 22 11:40:42 data1 task UPID:data1:00000CCA:00001493:5470BC86:startall::root@pam:: <root@pam> starting task UPID:data1:00000D0B:0000162B:5470BC8A:vzstart:103:root@pam:
Nov 22 11:40:42 data1 task UPID:data1:00000CCA:00001493:5470BC86:startall::root@pam:: starting CT 103: UPID:data1:00000D0B:0000162B:5470BC8A:vzstart:103:root@pam:
Nov 22 11:40:42 data1 kernel: CT: 103: started
Nov 22 11:40:43 data1 kernel: venet0: no IPv6 routers present
Nov 22 11:40:45 data1 kernel: CT: 103: stopped
Nov 22 11:40:46 data1 task UPID:data1:00000CCA:00001493:5470BC86:startall::root@pam:: command 'vzctl start 103' failed: exit code 62
Nov 22 11:40:46 data1 task UPID:data1:00000CCA:00001493:5470BC86:startall::root@pam:: <root@pam> starting task UPID:data1:00000D44:000017C0:5470BC8E:vzstart:104:root@pam:
Nov 22 11:40:46 data1 task UPID:data1:00000CCA:00001493:5470BC86:startall::root@pam:: starting CT 104: UPID:data1:00000D44:000017C0:5470BC8E:vzstart:104:root@pam:
Nov 22 11:40:46 data1 kernel: CT: 104: started
Nov 22 11:40:47 data1 pvestatd[3247]: command '/usr/sbin/vzctl exec 104 /bin/cat /proc/net/dev' failed: exit code 8
Nov 22 11:40:47 data1 pvestatd[3247]: command '/usr/sbin/vzctl exec 104 /bin/cat /proc/net/dev' failed: exit code 8
Nov 22 11:40:49 data1 kernel: CT: 104: stopped
Nov 22 11:40:50 data1 task UPID:data1:00000CCA:00001493:5470BC86:startall::root@pam:: command 'vzctl start 104' failed: exit code 62
Nov 22 11:40:50 data1 task UPID:data1:00000CCA:00001493:5470BC86:startall::root@pam:: <root@pam> starting task UPID:data1:00000D84:00001952:5470BC92:qmstart:105:root@pam:
Nov 22 11:40:50 data1 task UPID:data1:00000CCA:00001493:5470BC86:startall::root@pam:: start VM 105: UPID:data1:00000D84:00001952:5470BC92:qmstart:105:root@pam:
Nov 22 11:40:50 data1 task UPID:data1:00000CCA:00001493:5470BC86:startall::root@pam:: you can't start a vm if it's a template
Nov 22 11:40:50 data1 task UPID:data1:00000CCA:00001493:5470BC86:startall::root@pam:: <root@pam> starting task UPID:data1:00000D85:00001955:5470BC92:vzstart:106:root@pam:
Nov 22 11:40:50 data1 task UPID:data1:00000CCA:00001493:5470BC86:startall::root@pam:: starting CT 106: UPID:data1:00000D85:00001955:5470BC92:vzstart:106:root@pam:
Nov 22 11:40:50 data1 kernel: CT: 106: started
Nov 22 11:40:53 data1 kernel: CT: 106: stopped
Nov 22 11:40:54 data1 task UPID:data1:00000CCA:00001493:5470BC86:startall::root@pam:: command 'vzctl start 106' failed: exit code 62
Nov 22 11:40:54 data1 task UPID:data1:00000CCA:00001493:5470BC86:startall::root@pam:: <root@pam> starting task UPID:data1:00000DBE:00001AE6:5470BC96:qmstart:109:root@pam:
Nov 22 11:40:54 data1 task UPID:data1:00000CCA:00001493:5470BC86:startall::root@pam:: start VM 109: UPID:data1:00000DBE:00001AE6:5470BC96:qmstart:109:root@pam:
Nov 22 11:40:56 data1 task UPID:data1:00000CCA:00001493:5470BC86:startall::root@pam:: volume 'local:105/base-105-disk-1.qcow2/109/vm-109-disk-1.qcow2' does not exist
Nov 22 11:40:56 data1 task UPID:data1:00000CCA:00001493:5470BC86:startall::root@pam:: <root@pam> starting task UPID:data1:00000DC1:00001BB0:5470BC98:vzstart:110:root@pam:
Nov 22 11:40:56 data1 task UPID:data1:00000CCA:00001493:5470BC86:startall::root@pam:: starting CT 110: UPID:data1:00000DC1:00001BB0:5470BC98:vzstart:110:root@pam:
Nov 22 11:40:56 data1 kernel: CT: 110: started
Nov 22 11:40:57 data1 pvestatd[3247]: command '/usr/sbin/vzctl exec 110 /bin/cat /proc/net/dev' failed: exit code 8
Nov 22 11:40:57 data1 pvestatd[3247]: command '/usr/sbin/vzctl exec 110 /bin/cat /proc/net/dev' failed: exit code 8
Nov 22 11:40:59 data1 kernel: CT: 110: stopped
Nov 22 11:41:00 data1 task UPID:data1:00000CCA:00001493:5470BC86:startall::root@pam:: command 'vzctl start 110' failed: exit code 62
Nov 22 11:41:00 data1 task UPID:data1:00000CCA:00001493:5470BC86:startall::root@pam:: <root@pam> starting task UPID:data1:00000E01:00001D43:5470BC9C:qmstart:113:root@pam:
Nov 22 11:41:00 data1 task UPID:data1:00000CCA:00001493:5470BC86:startall::root@pam:: start VM 113: UPID:data1:00000E01:00001D43:5470BC9C:qmstart:113:root@pam:
Nov 22 11:41:00 data1 task UPID:data1:00000CCA:00001493:5470BC86:startall::root@pam:: you can't start a vm if it's a template
Nov 22 11:41:00 data1 task UPID:data1:00000CCA:00001493:5470BC86:startall::root@pam:: <root@pam> starting task UPID:data1:00000E02:00001D45:5470BC9C:qmstart:118:root@pam:
Nov 22 11:41:00 data1 task UPID:data1:00000CCA:00001493:5470BC86:startall::root@pam:: start VM 118: UPID:data1:00000E02:00001D45:5470BC9C:qmstart:118:root@pam:
Nov 22 11:41:01 data1 task UPID:data1:00000CCA:00001493:5470BC86:startall::root@pam:: volume 'local:105/base-105-disk-1.qcow2/118/vm-118-disk-1.qcow2' does not exist
Nov 22 11:41:01 data1 pvesh: <root@pam> end task UPID:data1:00000CCA:00001493:5470BC86:startall::root@pam: OK
Nov 22 11:43:54 data1 pvedaemon[3242]: <root@pam> successful auth for user 'root@pam'
Nov 22 11:44:56 data1 pvedaemon[3683]: start VM 109: UPID:data1:00000E63:00007923:5470BD88:qmstart:109:root@pam:
Nov 22 11:44:56 data1 pvedaemon[3239]: <root@pam> starting task UPID:data1:00000E63:00007923:5470BD88:qmstart:109:root@pam:
Nov 22 11:44:56 data1 pvedaemon[3683]: volume 'local:105/base-105-disk-1.qcow2/109/vm-109-disk-1.qcow2' does not exist
Nov 22 11:44:56 data1 pvedaemon[3239]: <root@pam> end task UPID:data1:00000E63:00007923:5470BD88:qmstart:109:root@pam: volume 'local:105/base-105-disk-1.qcow2/109/vm-109-disk-1.qcow2' does not exist
Nov 22 11:45:56 data1 kernel: usb 3-1: USB disconnect, device number 2
Nov 22 11:49:58 data1 pmxcfs[2781]: [status] notice: received log
Nov 22 11:54:53 data1 pmxcfs[2781]: [status] notice: received log
Nov 22 12:03:41 data1 pveproxy[3252]: worker 3253 finished
Nov 22 12:03:41 data1 pveproxy[3252]: starting 1 worker(s)
Nov 22 12:03:41 data1 pveproxy[3252]: worker 3915 started
Nov 22 12:04:34 data1 pveproxy[3252]: worker 3254 finished
Nov 22 12:04:34 data1 pveproxy[3252]: starting 1 worker(s)
Nov 22 12:04:34 data1 pveproxy[3252]: worker 3927 started
Nov 22 12:04:59 data1 pmxcfs[2781]: [status] notice: received log
Nov 22 12:09:00 data1 pveproxy[3252]: worker 3255 finished
Nov 22 12:09:00 data1 pveproxy[3252]: starting 1 worker(s)
Nov 22 12:09:00 data1 pveproxy[3252]: worker 3979 started
Nov 22 12:09:53 data1 pmxcfs[2781]: [status] notice: received log
Nov 22 12:17:01 data1 /USR/SBIN/CRON[4077]: (root) CMD ( cd / && run-parts --report /etc/cron.hourly)
Nov 22 12:20:00 data1 pmxcfs[2781]: [status] notice: received log
Nov 22 12:24:35 data1 pveproxy[3252]: worker 3915 finished
Nov 22 12:24:35 data1 pveproxy[3252]: starting 1 worker(s)
Nov 22 12:24:35 data1 pveproxy[3252]: worker 4172 started
Nov 22 12:24:53 data1 pmxcfs[2781]: [status] notice: received log
Nov 22 12:28:05 data1 pveproxy[3252]: worker 3927 finished
Nov 22 12:28:05 data1 pveproxy[3252]: starting 1 worker(s)
Nov 22 12:28:05 data1 pveproxy[3252]: worker 4218 started
Nov 22 12:33:01 data1 pmxcfs[2781]: [dcdb] notice: data verification successful
Nov 22 12:33:26 data1 pvedaemon[3223]: worker 3242 finished
Nov 22 12:33:26 data1 pvedaemon[3223]: starting 1 worker(s)
Nov 22 12:33:26 data1 pvedaemon[3223]: worker 4295 started
Nov 22 12:35:00 data1 pmxcfs[2781]: [status] notice: received log
Nov 22 12:38:06 data1 pveproxy[3252]: worker 3979 finished
Nov 22 12:38:06 data1 pveproxy[3252]: starting 1 worker(s)
Nov 22 12:38:06 data1 pveproxy[3252]: worker 4354 started
Nov 22 12:39:54 data1 pmxcfs[2781]: [status] notice: received log
Nov 22 12:40:04 data1 rrdcached[2767]: flushing old values
Nov 22 12:40:04 data1 rrdcached[2767]: rotating journals
Nov 22 12:40:04 data1 rrdcached[2767]: started new journal /var/lib/rrdcached/journal/rrd.journal.1416678004.148776
Nov 22 12:50:01 data1 pmxcfs[2781]: [status] notice: received log
Nov 22 12:54:55 data1 pmxcfs[2781]: [status] notice: received log


root@data1:/var/lib/vz/dump# fsck
fsck from util-linux 2.20.1
e2fsck 1.42.5 (29-Jul-2012)
/dev/mapper/pve-root is mounted.
e2fsck: Cannot continue, aborting.
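
For reference, the failing volume IDs (e.g. 'local:105/base-105-disk-1.qcow2/109/vm-109-disk-1.qcow2') describe linked clones of template 105 on the 'local' directory storage. A quick way to check whether those images are actually still on disk, assuming the default /var/lib/vz directory layout, would be something like:

# Assumed paths for the default 'local' directory storage under /var/lib/vz
ls -l /var/lib/vz/images/105/    # should contain base-105-disk-1.qcow2 (the template's base image)
ls -l /var/lib/vz/images/109/    # should contain vm-109-disk-1.qcow2 (linked clone of the template)
ls -l /var/lib/vz/images/118/    # should contain vm-118-disk-1.qcow2 (the other failing clone)

# If a clone image is present, its backing file should point back at the base image:
qemu-img info /var/lib/vz/images/109/vm-109-disk-1.qcow2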
 
Looks like PVE cannot locate your qcow2 disks and CTs.
Maybe a problem with the underlying storage?
Did you try rebooting? Only fsck unmounted volumes, otherwise you may corrupt the data (e.g. run it from a live CD).
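
If the storage under /var/lib/vz does turn out to be missing or damaged, a rough sketch of the checks, assuming the default PVE LVM layout (volume group 'pve' with pve-root and pve-data, the latter normally mounted at /var/lib/vz), could look like this:

# From the running node: is the data volume mounted, and does PVE see its storages?
mount | grep /var/lib/vz
pvesm status

# fsck must only run on unmounted filesystems, e.g. after booting a live CD / rescue system:
vgchange -ay pve                 # activate the LVM volume group first
fsck -f /dev/mapper/pve-data     # the LV normally mounted at /var/lib/vz
fsck -f /dev/mapper/pve-root     # the root LV that refused the check above while mounted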
 
