Containers lose connection and don't get it back, even after restoring from a known-working backup

dinis

Hi all.
Feeling a bit desperate.
I have a couple of Debian 9 containers. I set up the network and everything was fine.
Then, all of a sudden, both containers lost connectivity and are not even able to ping anything outside themselves.
The most awkward part: if I restore these containers from yesterday's and earlier vzdump backups (which I am 100% sure were working), they still fail.
Any idea how I can solve this? The following outputs are from one of the containers.
They have firewalld as well as apf installed, but stopping both (and iptables too) doesn't solve it; a quick sanity check for leftover rules is sketched below.
These machines were migrated from Debian 8, hence the eth0 name; might this be the problem?
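
For completeness, this is roughly what I run inside the container to make sure no leftover firewall rules are still loaded (plain systemctl/iptables commands; apf has its own stop/flush switch, check its docs):
Code:
# stop the firewall daemon for the test
systemctl stop firewalld
systemctl status firewalld

# confirm the rule tables are empty and the default policies are ACCEPT
iptables -L -n -v
ip6tables -L -n -v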

The route -n results are as follows:
Code:
Destination     Gateway         Genmask         Flags MSS Window irtt Iface
0.0.0.0         94.23.31.254    0.0.0.0         UG    0   0      0    eth0
0.0.0.0         94.23.31.254    0.0.0.0         UG    0   0      0    eth0
94.23.31.254    0.0.0.0         255.255.255.255 UH    0   0      0    eth0


The ifconfig output:
Code:
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 178.33.0.67  netmask 255.255.255.255  broadcast 178.33.0.67
        inet6 fe80::ff:fe2e:ce8f  prefixlen 64  scopeid 0x20<link>
        ether 02:00:00:2e:ce:8f  txqueuelen 1000  (Ethernet)
        RX packets 4693  bytes 485304 (473.9 KiB)
        RX errors 237  dropped 0  overruns 0  frame 237
        TX packets 16027  bytes 1329503 (1.2 MiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 1  (Local Loopback)
        RX packets 8339  bytes 588849 (575.0 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 8339  bytes 588849 (575.0 KiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0


My network config is the following (a manual test with plain ip commands is sketched right after it):
Code:
source /etc/network/interfaces.d/*
#The loopback network interface
auto lo eth0
iface lo inet loopback
iface eth0 inet static
    address 178.33.0.67
    netmask 255.255.255.255
    broadcast 178.33.0.67
    post-up route add 94.23.31.254 dev eth0
    post-up route add default gw 94.23.31.254
    pre-down route del 94.23.31.254 dev eth0
    pre-down route del default gw 94.23.31.254
    gateway 94.23.31.254

#The primary network interface
#allow-hotplug eth0
#iface eth0 inet dhcp
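
Just to rule out a parsing issue in that file, the same setup can be brought up by hand with plain ip commands, which makes it easy to see which step fails. This is only a sketch of what the stanza above does, using the same addresses:
Code:
# /32 address, so the gateway first needs an explicit on-link route
ip addr add 178.33.0.67/32 dev eth0
ip link set eth0 up
ip route add 94.23.31.254 dev eth0
ip route add default via 94.23.31.254 dev eth0
# then check reachability of the gateway
ping -c 3 94.23.31.254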



Thanks in advance for all the help
 
post these:

1- lxc config files: they are at /etc/pve/lxc/

2- pve host /etc/network/interfaces

A jessie to stretch upgrade can end up with changed network device names, but AFAIK that affects hardware / KVM, not LXC. A quick way to check is sketched below.
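
(A rough way to check whether anything got renamed; plain standard commands, nothing Proxmox-specific:)
Code:
# list the interfaces the guest actually has right now
ip link show
ls /sys/class/net/
# on hardware/KVM, the kernel logs renames at boot
dmesg | grep -i renamed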
 
Destination     Gateway         Genmask         Flags MSS Window irtt Iface
0.0.0.0         94.23.31.254    0.0.0.0         UG    0   0      0    eth0
0.0.0.0         94.23.31.254    0.0.0.0         UG    0   0      0    eth0
94.23.31.254    0.0.0.0         255.255.255.255 UH    0   0      0    eth0
The gateway shows up twice; that can't go well. Please post your container config:
Code:
pct config <id>
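
A side note on the duplicate default route: the interfaces stanza posted above sets the default route twice, once via the post-up route add default gw 94.23.31.254 line and once via the gateway directive, which would explain the two identical UG entries. Keeping only one of them (or clearing the extra route at runtime) should leave a single default route; a rough sketch:
Code:
# show the current table, then remove one of the duplicate defaults
ip route show
ip route del default via 94.23.31.254 dev eth0
ip route show
# permanent fix: keep either the 'gateway' directive or the post-up/pre-down
# route lines in /etc/network/interfaces, not both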
 
The gateway shows up twice; that can't go well. Please post your container config:
Code:
pct config <id>

pct config 167 only tells me that nodes/xxx/lxc/167.conf does not exist.
However, /etc/pve/nodes/xxx/qemu-server has it:


Code:
balloon: 1500
boot: cdn
bootdisk: ide0
cores: 4
ide0: local:167/vm-167-disk-1.qcow2,size=60G
ide2: none,media=cdrom
keyboard: pt
memory: 4000
name: NAME
net0: e1000=02:00:00:2e:ce:8f,bridge=vmbr0
numa: 0
onboot: 1
ostype: l26
smbios1: uuid=c3e1a8d0-a574-4d48-8dec-a08b6487f1c2
sockets: 2
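
Since that config lives under qemu-server rather than lxc, it looks like 167 is a KVM virtual machine, so presumably the matching command here would be qm config rather than pct config:
Code:
qm config 167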
 
post these:

1- lxc config files: they are at /etc/pve/lxc/

2- pve host /etc/network/interfaces

A jessie to stretch upgrade can end up with changed network device names, but AFAIK that affects hardware / KVM, not LXC.


/etc/pve/lxc/ has no files, but there are config files in /etc/pve/nodes/xxx/qemu-server.
I already posted the 167.conf in the previous answer (replying to fireon's question).


host interfaces:

Code:
# for Routing
auto vmbr1
iface vmbr1 inet manual
    post-up /etc/pve/kvm-networking.sh
    bridge_ports dummy0
    bridge_stp off
    bridge_fd 0


# vmbr0: Bridging. Make sure to use only MAC addresses that were assigned to you.
auto vmbr0
iface vmbr0 inet static
    address 94.23.31.143
    netmask 255.255.255.0
    network 94.23.31.0
    broadcast 94.23.31.255
    gateway 94.23.31.254
    bridge_ports eth0
    bridge_stp off
    bridge_fd 0

iface vmbr0 inet6 static
    address 2001:41d0:2:208f::
    netmask 64
    post-up /sbin/ip -f inet6 route add 2001:41d0:2:20ff:ff:ff:ff:ff dev vmbr0
    post-up /sbin/ip -f inet6 route add default via 2001:41d0:2:20ff:ff:ff:ff:ff
    pre-down /sbin/ip -f inet6 route del default via 2001:41d0:2:20ff:ff:ff:ff:ff
    pre-down /sbin/ip -f inet6 route del 2001:41d0:2:20ff:ff:ff:ff:ff dev vmbr0
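
(A rough way to check on the PVE host that the bridge and its physical port are actually up and forwarding; brctl needs the bridge-utils package, the iproute2 variants show the same information:)
Code:
# is eth0 enslaved to vmbr0, and are both up?
brctl show vmbr0
ip -d link show vmbr0
ip link show eth0
# does the bridge carry the expected address?
ip addr show vmbr0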
 
dinis wrote: "/etc/pve/lxc/ has no files. You refer to the host Proxmox machine, right?"

Container configurations are at /etc/pve/lxc/ on the PVE host.

Are you using KVM or LXC containers? (A quick way to check is sketched below.)

PS: as fireon said, if there are two gateways, fix that. Perhaps you are using KVM and the interfaces file has two gateway lines?
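
(To tell on the PVE host which type each guest is, these standard Proxmox commands list them separately:)
Code:
# KVM virtual machines
qm list
# LXC containers
pct list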
 
I believe I am using KVM.
dinis wrote: "/etc/pve/lxc/ has no files. You refer to the host Proxmox machine, right?"

Container configurations are at /etc/pve/lxc/ on the PVE host.

Are you using KVM or LXC containers?

PS: as fireon said, if there are two gateways, fix that. Perhaps you are using KVM and the interfaces file has two gateway lines?

Where can I fix the two gateways? In the /etc/network/interfaces file? OK, FIXED IT: now I only have one gateway, but the problem is still there.
I think I am using KVM, but I am not sure. How can I get that info? OK, I confirm it is KVM; I checked by creating a new VM, and that is the default option I take.
After my first answer I found the conf files in /etc/pve/nodes/chillilime/qemu-server (I edited my first answer). Does that help?
 
After fixing the interfaces, I have...
Code:
Destination     Gateway         Genmask         Flags MSS Window irtt Iface
0.0.0.0         94.23.31.254    0.0.0.0         UG    0   0      0    eth0
94.23.31.254    0.0.0.0         255.255.255.255 UH    0   0      0    eth0

Still, no network.
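
(For what it's worth, these are the checks I'd run at this point to see where the traffic stops; standard tools, the tcpdump line runs on the PVE host:)
Code:
# inside the VM: can the gateway be reached / resolved on layer 2?
ping -c 3 94.23.31.254
ip neigh show
# on the PVE host: does the VM's traffic actually reach the bridge?
tcpdump -ni vmbr0 host 178.33.0.67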
 
Might there be an optional config in the VM that is bypassing the network configuration?
 
Is there a particular reason you are using netmask 255.255.255.255? Is it a point-to-point connection? Double-check the IP configuration you are using for that VM.
 
Is there a particular reason you are using netmask 255.255.255.255? Is it a point-to-point connection? Double-check the IP configuration you are using for that VM.
Hi.
I have other VMs and they all work on that subnet just fine (these were the instructions from the server provider; well, they were for Debian 8, not sure whether they also apply to Debian 9). Anyhow, these servers worked for a few days with this config, but then suddenly broke.
 
I suggest you get the book for the latest Proxmox version; there is a thread or two about it here. Get a new LXC container running. In my opinion, using KVM for Linux makes it a lot harder to get the network correct than LXC does. The PVE interface is simple after a short while.


It is a great book. A+
 
I suggest you get the book for the latest Proxmox version; there is a thread or two about it here. Get a new LXC container running. In my opinion, using KVM for Linux makes it a lot harder to get the network correct than LXC does. The PVE interface is simple after a short while.


It is a great book. A+

Hi Rob,
Thanks for your input.
I have these containers and don't want to lose their contents. I already saw that I can migrate KVM to LXC; however, that is quite another uncharted area for me...
Right now, without network access, my data is "inside" those KVM machines.
Are you sure I am that doomed? The network was working very well, and it still works very well for the Debian 8 VMs.
The only thing I remember is an update of the host's Debian around that time; afterwards it advised me that the kernel had changed, but I don't remember whether I updated before or after this issue started.
Aren't there other config files to look at? Places where I...
 
Another idea: might this be because the host is Debian 8 while the VMs are Debian 9? It shouldn't matter, but...
I would say the problem is located on the host, because the VM backups that used to work perfectly no longer work when I restore them. Would anyone agree that this is a possibility?
 
I suggest you get the book for the latest Proxmox version; there is a thread or two about it here. Get a new LXC container running. In my opinion, using KVM for Linux makes it a lot harder to get the network correct than LXC does. The PVE interface is simple after a short while.


It is a great book. A+
Again... I searched for how to convert KVM into LXC and found https://pve.proxmox.com/wiki/Convert_OpenVZ_to_LXC. However, my backup files are of this kind: vzdump-qemu-167-2018_03_04-05_00_02.vma.gz (not tar files, which makes that method not applicable). Any ideas?
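
(For reference, the qemu backups can at least be unpacked to get at the raw disk images; this is only a rough sketch, the paths are examples, the extracted file names may differ, and guestmount needs the libguestfs-tools package:)
Code:
cd /var/lib/vz/dump
gunzip -k vzdump-qemu-167-2018_03_04-05_00_02.vma.gz
vma extract vzdump-qemu-167-2018_03_04-05_00_02.vma /tmp/vm167
ls /tmp/vm167            # raw disk image(s) plus the qemu config

# the raw image can then be inspected read-only, e.g. with libguestfs
mkdir -p /mnt/vm167
guestmount -a /tmp/vm167/disk-drive-ide0.raw -i --ro /mnt/vm167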
 
Hi all.
Solved.
In the end it was not the network configuration. Yesterday the IP was the target of a TCP SYN attack, and the provider blocked a couple of IPs in that range. They sent an email, but it landed in my spam folder, so I did not know the IP had been cut off.
Sorry for the noise, and thanks for all the help. Rob, I will look into LXC anyhow.

All the best to all