I gave up trying to figure that out. No matter what my network settings are, if I reboot the host I can start the container one time, so it seems to me it's not the network. I've also tried to add additional local drives to the host to make a bigger filesystem for containers. The issue is that I can't...
This is all after I've started the container once, logged in, run some commands, and then shut it down. Now it won't start. I can get it to start again if I reboot the host machine.
Debug
root@kvm2:~# cat /tmp/lxc.log
lxc-start 103 20201116153725.272 INFO confile -...
OK, I've tried both 802.3ad (LACP) as the link aggregation protocol and plain active-backup. It seems that the containers will start once and then won't start again. What kind of information do I need to provide to get this resolved?
You are correct. I don't know why I was thinking the bridge was the same as a bond. I have now configured a bond and put my interfaces into it, but the container still fails to start. I'm not sure what to look at now.
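For reference, what I set up is roughly along these lines (a sketch only; apart from the interface names, the mode and options here are illustrative, not my exact file):

# illustrative bond stanza in /etc/network/interfaces
auto bond0
iface bond0 inet manual
    bond-slaves ens6f0 ens6f1
    bond-miimon 100
    bond-mode 802.3ad
# vmbr0 then uses "bridge-ports bond0" instead of a physical NIC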
Maybe I'm missing something, but I see one bridge configured with the 172.16.0.31/23 address, and that is also what is showing in the GUI. Here is a picture of it.
The CIFS shares are my local Synology NAS devices. They are on the same switch as my Proxmox servers.
/etc/network/interfaces
auto lo
iface lo inet loopback
iface enp1s0f0 inet manual
iface enp1s0f1 inet manual
iface ens6f0 inet manual
iface ens6f1 inet manual
auto vmbr0
iface vmbr0 inet static...
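The rest of the vmbr0 stanza got cut off above; a typical static bridge definition looks roughly like this (the gateway and bridge-ports values are illustrative, only the address matches what I described):

auto vmbr0
iface vmbr0 inet static
    address 172.16.0.31/23
    gateway 172.16.0.1
    bridge-ports enp1s0f0
    bridge-stp off
    bridge-fd 0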
I'm not sure that is the issue, since I have two different network storage devices and both of them exhibit the same problem: containers with disks on them won't start. I am still researching whether fsck can be run on the remote network storage, but if anyone can think of any other reason I'd greatly...
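In case it helps, the checks I'm looking into are roughly these (the image path is illustrative and depends on the storage name):

# run a filesystem check on the container's volume while it is stopped
pct fsck 103
# or check the raw image on the network storage directly
e2fsck -f /mnt/pve/<storage>/images/103/vm-103-disk-0.raw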
So, based on the thought that it might be related to my storage, I created another container, put it on local storage, and it works fine. How do I determine what needs to be done to get containers to work on my remote storage?
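As a first step I'm comparing how the two storages are defined, something like this:

# show how the local and the network storage are defined (type, path, content types)
cat /etc/pve/storage.cfg
# confirm both storages are active and allow container root disks (content "rootdir")
pvesm status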
When trying to move the disk, this is what I got.
Logical volume "vm-103-disk-0" created.
mke2fs 1.44.5 (15-Dec-2018)
Discarding device blocks: 4096/2097152 done
Creating filesystem with 2097152 4k blocks and 524288 inodes
Filesystem UUID...
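For completeness, the rough CLI form of the move (as far as I can tell) would be something like this; the target storage name is illustrative:

pct move_volume 103 rootfs local-lvm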
Here is the complete log. There isn't much in it, just like before. I can mount the raw filesystem and unmount it (commands shown after the log below).
root@kvm2:/tmp# cat lxc.log
lxc-start 103 20201111142751.743 INFO confile - confile.c:set_config_idmaps:2051 - Read uid map: type u nsid 0 hostid 100000 range 65536
lxc-start 103...
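To clarify what I mean by mounting the raw filesystem, it's roughly this (the path is illustrative and depends on the storage name):

mkdir -p /mnt/cttest
mount -o loop /mnt/pve/<storage>/images/103/vm-103-disk-0.raw /mnt/cttest
ls /mnt/cttest
umount /mnt/cttest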
That was the complete log from the lxc.log I created while trying to troubleshoot it. I've rebooted since then, so that log is gone, but here is the config.
root@kvm2:~# pct config 103
arch: amd64
cores: 1
hostname: CT-Test
memory: 512
net0...
I've just installed Proxmox and tried several container images. None of them will start on either of my two clustered servers.
Here is the error I get.
root@kvm2:~# lxc-start -n 103 -F -l DEBUG -o /tmp/lxc.log
lxc-start: 103: conf.c: run_buffer: 323 Script exited with status 255...
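Besides the lxc-start debug run above, more context on the script that exited with status 255 can sometimes be found in the journal (same container ID):

journalctl -u pve-container@103
systemctl status pve-container@103.service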