URGENT: Can't start cloned LXC container

yena

Renowned Member
Nov 18, 2011
379
5
83
Hello,
I have cloned an LXC container (from the last snapshot of a pve-zsync backup).

My error when starting the container:
failed to get device path
lxc-start: conf.c: run_buffer: 347 Script exited with status 2
lxc-start: start.c: lxc_init: 465 failed to run pre-start hooks for container '200'.
lxc-start: start.c: __lxc_start: 1313 failed to initialize the container
lxc-start: tools/lxc_start.c: main: 344 The container failed to start.
lxc-start: tools/lxc_start.c: main: 348 Additional information can be obtained by setting the --logfile and --logpriority options.


LXC config:
--------------------------------------------------------------------
arch: amd64
cpulimit: 7
cpuunits: 1024
hostname: xxxxxxx
memory: 20480
net0: bridge=vmbr0,gw=xxx.36.72.1,hwaddr=36:37:35:33:30:31,ip=xxx.36.72.93/22,name=eth0,type=veth
net1: bridge=vmbr1,hwaddr=36:66:31:61:33:62,ip=192.168.14.2/24,name=eth1,type=veth
onboot: 1
ostype: debian
parent: vzdump
rootfs: RESTORED:vm-200-disk-1,size=600G
swap: 0
-----------------------------------------------------------------------

Now I can see:

/rpool/RESTORED/vm-200-disk-1

-----------------------------------------------------------------------------------------------------------------------------------
NAME USED AVAIL REFER MOUNTPOINT
rpool 1015G 783G 96K /rpool
rpool/BACKUP 537G 783G 96K /rpool/BACKUP
rpool/BACKUP/subvol-100-disk-1 537G 783G 420G /rpool/BACKUP/subvol-100-disk-1
rpool/BACKUP/subvol-100-disk-1@rep_DbMeteoDaily_2016-06-22_06:30:06 36.2G - 312G -
rpool/BACKUP/subvol-100-disk-1@rep_DbMeteoDaily_2016-06-24_06:30:03 16.7G - 314G -
rpool/BACKUP/subvol-100-disk-1@rep_DbMeteoDaily_2016-06-25_06:30:08 16.8G - 315G -
rpool/BACKUP/subvol-100-disk-1@rep_DbMeteoDaily_2016-09-06_10:46:19 384M - 420G -
rpool/BACKUP/subvol-100-disk-1@rep_DbMeteoDaily_2016-09-06_15:27:59 160K - 420G -
rpool/RESTORED 420G 783G 96K /rpool/RESTORED
rpool/RESTORED/subvol-200-disk-2 96K 600G 96K /rpool/RESTORED/subvol-200-disk-2
rpool/RESTORED/subvol-999-disk-1 343M 7.67G 343M /rpool/RESTORED/subvol-999-disk-1
rpool/RESTORED/vm-200-disk-1 420G 783G 420G /rpool/RESTORED/vm-200-disk-1
rpool/RESTORED/vm-200-disk-1@rep_DbMeteoDaily_2016-09-06_15:27:59 160K - 420G -
rpool/ROOT 1.73G 783G 96K /rpool/ROOT
rpool/ROOT/pve-1 1.73G 783G 1.73G /
rpool/VPS 31.1G 783G 96K /rpool/VPS
rpool/VPS/subvol-101-disk-1 31.1G 90.0G 9.99G /rpool/VPS/subvol-101-disk-1
rpool/VPS/subvol-101-disk-1@rep_WebMeteoDaily_2016-09-02_01:10:02 664M - 10.0G -
rpool/VPS/subvol-101-disk-1@rep_WebMeteoDaily_2016-09-03_01:10:02 507M - 10.1G -
rpool/VPS/subvol-101-disk-1@rep_WebMeteoDaily_2016-09-04_01:10:02 6.50G - 10.3G -
rpool/VPS/subvol-101-disk-1@rep_WebMeteoDaily_2016-09-05_01:10:02 6.59G - 10.3G -
rpool/swap 24.4G 808G 59.8M -

------------------------------------------------------------------------------------------------------------

In the web panel I see vm-200-disk-1 with SIZE 0 bytes!

And I can't start it:
lxc-start -n 200 -F -l 9
failed to get device path
lxc-start: conf.c: run_buffer: 347 Script exited with status 2
lxc-start: start.c: lxc_init: 465 failed to run pre-start hooks for container '200'.
lxc-start: start.c: __lxc_start: 1313 failed to initialize the container
lxc-start: tools/lxc_start.c: main: 344 The container failed to start.
lxc-start: tools/lxc_start.c: main: 348 Additional information can be obtained by setting the --logfile and --logpriority options.
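As the error message suggests, a debug log usually pinpoints which pre-start step failed. A sketch of how to capture one (the log path is arbitrary; the container ID 200 is from this thread):

```shell
# Start the container in the foreground with full debug logging.
# The pre-start hook output around "failed to get device path" shows
# which storage volume could not be mapped.
lxc-start -n 200 -F -l DEBUG -o /tmp/lxc-200.log

# Then look for the failing step in the log.
grep -i "error\|device path" /tmp/lxc-200.log
```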

I use the latest PVE version:

pveversion -v
proxmox-ve: 4.2-64 (running kernel: 4.4.16-1-pve)
pve-manager: 4.2-18 (running version: 4.2-18/158720b9)
pve-kernel-4.2.6-1-pve: 4.2.6-36
pve-kernel-4.4.16-1-pve: 4.4.16-64
pve-kernel-4.2.2-1-pve: 4.2.2-16
lvm2: 2.02.116-pve3
corosync-pve: 2.4.0-1
libqb0: 1.0-1
pve-cluster: 4.0-44
qemu-server: 4.0-86
pve-firmware: 1.1-9
libpve-common-perl: 4.0-72
libpve-access-control: 4.0-19
libpve-storage-perl: 4.0-57
pve-libspice-server1: 0.12.8-1
vncterm: 1.2-1
pve-qemu-kvm: 2.6.1-2
pve-container: 1.0-73
pve-firewall: 2.0-29
pve-ha-manager: 1.0-33
ksm-control-daemon: 1.2-1
glusterfs-client: 3.5.2-2+deb8u2
lxc-pve: 2.0.4-1
lxcfs: 2.0.3-pve1
cgmanager: 0.39-pve1
criu: 1.6.0-1
novnc-pve: 0.5-8
zfsutils: 0.6.5.7-pve10~bpo80
---------------------------------------------------------------
I also created a test VM on the restored volume and it is OK; I can start it.

Thanks!
 
Hi,

have you added the storage RESTORED to your storage config?
If not, you have to add a ZFS storage with the name RESTORED and the path /rpool/RESTORED.
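For reference, a minimal sketch of adding such a storage from the CLI (the storage name and pool path are the ones from this thread; the content types are an assumption, adjust to what you actually keep there):

```shell
# Register a ZFS pool storage named RESTORED backed by rpool/RESTORED.
# This is equivalent to adding a "zfspool:" section to /etc/pve/storage.cfg.
# content types (rootdir for containers, images for VMs) are an assumption.
pvesm add zfspool RESTORED --pool rpool/RESTORED --content rootdir,images
```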
 
Yes, I added it, and if I create a VM from scratch it works.
Only the cloned VM can't start.
I also see that I can't resize the VM volume.
 

I have done a full clone:

NAME USED AVAIL REFER MOUNTPOINT
rpool 1005G 793G 96K /rpool
rpool/BACKUP 521G 793G 96K /rpool/BACKUP
rpool/BACKUP/subvol-100-disk-1 521G 793G 420G /rpool/BACKUP/subvol-100-disk-1
rpool/BACKUP/subvol-100-disk-1@rep_DbMeteoDaily_2016-06-22_06:30:06 36.5G - 312G -
rpool/BACKUP/subvol-100-disk-1@rep_DbMeteoDaily_2016-06-25_06:30:08 36.0G - 315G -
rpool/BACKUP/subvol-100-disk-1@rep_DbMeteoDaily_2016-09-06_10:46:19 384M - 420G -
rpool/BACKUP/subvol-100-disk-1@rep_DbMeteoDaily_2016-09-06_15:27:59 304M - 420G -
rpool/BACKUP/subvol-100-disk-1@rep_DbMeteoDaily_2016-09-08_06:30:02 0 - 420G -
rpool/RESTORED 420G 793G 96K /rpool/RESTORED
rpool/RESTORED/subvol-200-disk-2 96K 600G 96K /rpool/RESTORED/subvol-200-disk-2
rpool/RESTORED/subvol-999-disk-1 343M 8.67G 343M /rpool/RESTORED/subvol-999-disk-1
rpool/RESTORED/vm-200-disk-1 420G 180G 420G /rpool/RESTORED/vm-200-disk-1
rpool/RESTORED/vm-200-disk-1@rep_DbMeteoDaily_2016-09-06_15:27:59 160K - 420G -
rpool/ROOT 1.75G 793G 96K /rpool/ROOT
rpool/ROOT/pve-1 1.75G 793G 1.75G /
--------------------------------------------------------------------------------------------------------
I have set the disk name as vm-200-disk-1.. is this the problem? (I think not)
You can see in the attached screenshot the "Size=0" instead of 420G.

 

Attachments

  • restored.png (19.5 KB)
I have set the disk name as vm-200-disk-1.. is this the problem? (I think not)
No, that is not a problem.
You can see in the attached screenshot the "Size=0" instead of 420G.
Yes, because you copied the dataset instead of cloning it, the refquota was not carried over. You now have an unlimited quota.

zfs set refquota=420G rpool/RESTORED/vm-200-disk-1

will set the size
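To double-check before and after setting it, you can inspect the quota-related properties of the dataset (a sketch; the dataset name is the one from this thread):

```shell
# refquota=none means "unlimited", which the Proxmox GUI reports as size 0.
# After "zfs set refquota=420G ..." this should show 420G.
zfs get refquota,used,available rpool/RESTORED/vm-200-disk-1
```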
 
