Newly installed containers won't start

Your network is still misconfigured.

I would suggest using bond-mode active-backup (with the other modes your switch needs to cooperate; 802.3ad, for instance, requires LACP configured on the switch ports)

and setting your configuration similar to this:

Code:
auto lo
iface lo inet loopback

iface enp1s0f0 inet manual

iface enp1s0f1 inet manual

auto bond0
iface bond0 inet manual
        bond-slaves enp1s0f0 enp1s0f1
        bond-miimon 100
        bond-mode active-backup
        bond-primary enp1s0f0

auto vmbr0
iface vmbr0 inet static
        address 172.16.0.31/23
        gateway 172.16.0.1
        bridge-ports bond0
        bridge-stp off
        bridge-fd 0
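
If ifupdown2 is installed, the new config can be applied without a reboot, and the bond state verified afterwards; otherwise restart networking or reboot the node. A minimal sketch:

Code:
ifreload -a                      # apply /etc/network/interfaces (ifupdown2 only)
cat /proc/net/bonding/bond0      # shows bonding mode, MII status and the active slave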
 
OK, I see one problem in my network already. I'm working to fix it and will update as soon as I have that resolved.
 
OK, I've tried both 802.3ad (the LACP link aggregation protocol) and plain active-backup. It seems that the containers will start once, then they won't start again. What kind of information do I need to provide to get this resolved?
 
Container debug logs and your current network config are a good start.
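
For reference, one way to get such a log on Proxmox VE is to start the container in the foreground with debug logging (CTID 103 here, matching the container discussed below):

Code:
lxc-start -n 103 -F -l DEBUG -o /tmp/lxc.log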
 
This is all after I've started the container once, logged in, run some commands, and then shut it down. Now it won't start. I can get it to start again if I reboot the host machine.

Debug
Code:
root@kvm2:~# cat /tmp/lxc.log
lxc-start 103 20201116153725.272 INFO     confile - confile.c:set_config_idmaps:2051 - Read uid map: type u nsid 0 hostid 100000 range 65536
lxc-start 103 20201116153725.272 INFO     confile - confile.c:set_config_idmaps:2051 - Read uid map: type g nsid 0 hostid 100000 range 65536
lxc-start 103 20201116153725.272 INFO     lsm - lsm/lsm.c:lsm_init:29 - LSM security driver AppArmor
lxc-start 103 20201116153725.272 INFO     conf - conf.c:run_script_argv:340 - Executing script "/usr/share/lxc/hooks/lxc-pve-prestart-hook" for container "103", config section "lxc"
lxc-start 103 20201116153810.485 DEBUG    conf - conf.c:run_buffer:312 - Script exec /usr/share/lxc/hooks/lxc-pve-prestart-hook 103 lxc pre-start produced output: mount: /var/lib/lxc/.pve-staged-mounts/rootfs: can't read superblock on /dev/loop0.

lxc-start 103 20201116153810.258 DEBUG    conf - conf.c:run_buffer:312 - Script exec /usr/share/lxc/hooks/lxc-pve-prestart-hook 103 lxc pre-start produced output: command 'mount /dev/loop0 /var/lib/lxc/.pve-staged-mounts/rootfs' failed: exit code 32

lxc-start 103 20201116153810.370 ERROR    conf - conf.c:run_buffer:323 - Script exited with status 255
lxc-start 103 20201116153810.371 ERROR    start - start.c:lxc_init:804 - Failed to run lxc.hook.pre-start for container "103"
lxc-start 103 20201116153810.371 ERROR    start - start.c:__lxc_start:1903 - Failed to initialize container "103"
lxc-start 103 20201116153810.372 INFO     conf - conf.c:run_script_argv:340 - Executing script "/usr/share/lxcfs/lxc.reboot.hook" for container "103", config section "lxc"
lxc-start 103 20201116153810.539 INFO     conf - conf.c:run_script_argv:340 - Executing script "/usr/share/lxc/hooks/lxc-pve-poststop-hook" for container "103", config section "lxc"
lxc-start 103 20201116153811.267 DEBUG    conf - conf.c:run_buffer:312 - Script exec /usr/share/lxc/hooks/lxc-pve-poststop-hook 103 lxc post-stop produced output: umount: /var/lib/lxc/103/rootfs: not mounted

lxc-start 103 20201116153811.267 DEBUG    conf - conf.c:run_buffer:312 - Script exec /usr/share/lxc/hooks/lxc-pve-poststop-hook 103 lxc post-stop produced output: command 'umount --recursive -- /var/lib/lxc/103/rootfs' failed: exit code 1

lxc-start 103 20201116153811.278 ERROR    conf - conf.c:run_buffer:323 - Script exited with status 1
lxc-start 103 20201116153811.278 ERROR    start - start.c:lxc_end:971 - Failed to run lxc.hook.post-stop for container "103"
lxc-start 103 20201116153811.278 ERROR    lxc_start - tools/lxc_start.c:main:308 - The container failed to start
lxc-start 103 20201116153811.278 ERROR    lxc_start - tools/lxc_start.c:main:314 - Additional information can be obtained by setting the --logfile and --logpriority options
root@kvm2:~#
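
Since the prestart hook fails with "can't read superblock on /dev/loop0", the container's rootfs image apparently can't be mounted, which points at storage rather than the network. A few checks worth running while the container is stopped, assuming CTID 103 as in the log:

Code:
dmesg | tail -n 20    # kernel messages around the failed loop0 mount
losetup -a            # which images are currently attached to loop devices
pct fsck 103          # filesystem check on the container's root volume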

Network ip a:
Code:
root@kvm2:~# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: enp1s0f0: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond0 state UP group default qlen 1000
    link/ether 30:f7:0d:99:c2:70 brd ff:ff:ff:ff:ff:ff
3: enp1s0f1: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond0 state UP group default qlen 1000
    link/ether 30:f7:0d:99:c2:70 brd ff:ff:ff:ff:ff:ff
4: ens6f0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether 70:ca:9b:03:ad:b2 brd ff:ff:ff:ff:ff:ff
5: ens6f1: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether 70:ca:9b:03:ad:b4 brd ff:ff:ff:ff:ff:ff
6: bond0: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue master vmbr0 state UP group default qlen 1000
    link/ether 30:f7:0d:99:c2:70 brd ff:ff:ff:ff:ff:ff
7: vmbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 30:f7:0d:99:c2:70 brd ff:ff:ff:ff:ff:ff
    inet 172.16.0.31/23 brd 172.16.1.255 scope global vmbr0
       valid_lft forever preferred_lft forever
    inet6 fe80::32f7:dff:fe99:c270/64 scope link
       valid_lft forever preferred_lft forever
root@kvm2:~#


/etc/network/interfaces
Code:
root@kvm2:~# cat /etc/network/interfaces
# network interface settings; autogenerated
# Please do NOT modify this file directly, unless you know what
# you're doing.
#
# If you want to manage parts of the network configuration manually,
# please utilize the 'source' or 'source-directory' directives to do
# so.
# PVE will preserve these directives, but will NOT read its network
# configuration from sourced files, so do not attempt to move any of
# the PVE managed interfaces into external files!

auto lo
iface lo inet loopback

iface enp1s0f0 inet manual

iface enp1s0f1 inet manual

iface ens6f0 inet manual

iface ens6f1 inet manual

auto bond0
iface bond0 inet manual
        bond-slaves enp1s0f0 enp1s0f1
        bond-miimon 100
        bond-mode 802.3ad
        bond-xmit-hash-policy layer2+3

auto vmbr0
iface vmbr0 inet static
        address 172.16.0.31/23
        gateway 172.16.0.1
        bridge-ports bond0
        bridge-stp off
        bridge-fd 0

root@kvm2:~#
 
* does it still work normally on local storage? (if yes, you still need to debug your networking setup)

* does using active-backup make any difference?

* what do you see if you run: pvesm scan cifs <CIFS_IP>? do the shares come up normally? (see the sketch after this list)

* please also check your CIFS storage devices for any disk errors (on the Synology units themselves)
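
For reference, both the share visibility and the overall storage state can be checked from the shell; <CIFS_IP> is kept as a placeholder for your NAS address:

Code:
pvesm scan cifs <CIFS_IP>    # list the shares the NAS exports
pvesm status                 # status of all storages configured in PVE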
 
I gave up trying to figure that out. No matter what my network settings are, if I reboot the host I can start the container exactly once, so it seems to me it's not the network. I've since tried to add additional local drives to the host to make a bigger filesystem for containers, but I can't seem to add the storage for some reason: it keeps saying the only volume group is the original pve one. Not sure what's going on.
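
For what it's worth, the GUI can only offer volume groups that already exist, so a brand-new disk has to be initialized for LVM first. A minimal sketch, assuming the new disk appears as /dev/sdb (hypothetical; adjust the device name and storage ID):

Code:
pvcreate /dev/sdb            # initialize the disk as an LVM physical volume
vgcreate vgdata /dev/sdb     # create a new volume group on it
pvesm add lvm lvm-data --vgname vgdata --content rootdir,images   # register it as PVE storage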
 
