[SOLVED] No connection to the network from LXC

Ind3x
May 25, 2020
Hey!

I'm currently trying to set up an LXC with Ubuntu on Proxmox on Windows 10 (via Hyper-V). I'm having the issue that the LXC seemingly can't connect to the network.

My setup is the following:
Windows 10 with Hyper-V (IP: 192.168.178.21)
Proxmox is running on that Hyper-V (connected to the network via a Hyper-V virtual switch which is set to "External Network", and has the IP 192.168.178.20)
Ubuntu is running as LXC on that Proxmox (IP: 192.168.178.50)

- I can ping my router from the Proxmox VM (.20)
- I cannot ping my router from the Ubuntu LXC (.50)
- I can ping the Proxmox VM from other devices on the network
- I cannot ping the Ubuntu LXC from other devices on the network
- I can ping the Hyper-V host (.21) from the Proxmox VM
- I cannot ping the Hyper-V host (.21) from the Ubuntu LXC

The /etc/network/interfaces of the Proxmox VM contains:
Code:
auto lo
iface lo inet loopback

iface eth0 inet manual

auto vmbr0
iface vmbr0 inet static
        address 192.168.178.20
        netmask 255.255.255.0
        gateway 192.168.178.1
        bridge_ports eth0
        bridge_stp off
        bridge_fd 0
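(For reference: the interfaces attached to vmbr0 can be listed on the Proxmox host with standard iproute2 commands; once the container is running, its veth device should show up there next to eth0. Just a generic sanity check, not specific to my setup.)
Code:
# list all interfaces enslaved to the vmbr0 bridge
ip link show master vmbr0

# alternative view that also shows each bridge port's state
bridge link show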

The Ubuntu LXC doesn't have an /etc/network/interfaces file. Is that needed, and could that be the issue?

Network view (from Proxmox GUI) of the LXC:
(screenshot attached)

Network view (from Proxmox GUI) of the host itself:
(screenshot attached)

ip a from the LXC returns:
Code:
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0@if28: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether ca:69:cd:6d:df:c4 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 192.168.178.50/24 brd 192.168.178.255 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::c869:cdff:fe6d:dfc4/64 scope link
       valid_lft forever preferred_lft forever

ip r from the LXC returns:
Code:
default via 192.168.178.1 dev eth0 proto static
192.168.178.0/24 dev eth0 proto kernel scope link src 192.168.178.50
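For reference, a few extra checks that could narrow down where the pings die (just a sketch; the tcpdump part assumes tcpdump is installed on the Proxmox host):
Code:
# inside the LXC: ping the gateway, then check whether its MAC was learned via ARP
ping -c 3 192.168.178.1
ip neigh show

# on the Proxmox host (if tcpdump is available): watch ARP/ICMP traffic on the bridge
tcpdump -ni vmbr0 arp or icmp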

Sadly, I don't understand much about Linux networking, so I have no idea what's going wrong. In the tutorials I watched it seemed to work out of the box.

Sincerely
Ind3x
 
hi,

just a guess, but can you try setting IPv6 to static and leaving the address empty?
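if you prefer the CLI, roughly the same thing can be done by rewriting the container's net0 line without any ip6 value (just a sketch with placeholders - substitute your real CTID, MAC and addresses):
Code:
# hypothetical example - replace <CTID>, <MAC>, <address> and <gateway> with your own values
pct set <CTID> --net0 'name=eth0,bridge=vmbr0,hwaddr=<MAC>,ip=<address>/24,gw=<gateway>,type=veth'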
 
which container template are you using?

also please post the output of pveversion -v and pct config CTID
 
which container template are you using?

also please post the output of pveversion -v and pct config CTID

The template I'm using is ubuntu-20.04-standard_20.04-1_amd64.tar.gz

pveversion -v returns:
Code:
proxmox-ve: 6.2-1 (running kernel: 5.4.34-1-pve)
pve-manager: 6.2-4 (running version: 6.2-4/9824574a)
pve-kernel-5.4: 6.2-1
pve-kernel-helper: 6.2-1
pve-kernel-5.4.34-1-pve: 5.4.34-2
ceph-fuse: 12.2.11+dfsg1-2.1+b1
corosync: 3.0.3-pve1
criu: 3.11-3
glusterfs-client: 5.5-3
ifupdown: 0.8.35+pve1
ksm-control-daemon: 1.3-1
libjs-extjs: 6.0.1-10
libknet1: 1.15-pve1
libproxmox-acme-perl: 1.0.3
libpve-access-control: 6.1-1
libpve-apiclient-perl: 3.0-3
libpve-common-perl: 6.1-2
libpve-guest-common-perl: 3.0-10
libpve-http-server-perl: 3.0-5
libpve-storage-perl: 6.1-7
libqb0: 1.0.5-1
libspice-server1: 0.14.2-4~pve6+1
lvm2: 2.03.02-pve4
lxc-pve: 4.0.2-1
lxcfs: 4.0.3-pve2
novnc-pve: 1.1.0-1
proxmox-mini-journalreader: 1.1-1
proxmox-widget-toolkit: 2.2-1
pve-cluster: 6.1-8
pve-container: 3.1-5
pve-docs: 6.2-4
pve-edk2-firmware: 2.20200229-1
pve-firewall: 4.1-2
pve-firmware: 3.1-1
pve-ha-manager: 3.0-9
pve-i18n: 2.1-2
pve-qemu-kvm: 5.0.0-2
pve-xtermjs: 4.3.0-1
qemu-server: 6.2-2
smartmontools: 7.1-pve2
spiceterm: 3.1-1
vncterm: 1.6-1
zfsutils-linux: 0.8.3-pve1

pct config 100 returns:
Code:
arch: amd64
cores: 2
hostname: mc1
memory: 4096
net0: name=eth0,bridge=vmbr0,firewall=1,gw=192.168.178.1,hwaddr=CA:69:CD:6D:DF:C4,ip=192.168.178.50/24,type=veth
ostype: ubuntu
rootfs: local-lvm:vm-100-disk-0,size=16G
swap: 2048
unprivileged: 1
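In case it helps anyone reading along, the net0 line split into its fields (same values as above, just annotated) reads:
Code:
name=eth0                  # interface name inside the container
bridge=vmbr0               # host bridge the container's veth is attached to
firewall=1                 # Proxmox firewall is enabled on this NIC
gw=192.168.178.1           # default gateway
hwaddr=CA:69:CD:6D:DF:C4   # MAC address of the container's interface
ip=192.168.178.50/24       # static IPv4 address
type=veth                  # virtual ethernet pair to the host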
 
Hello Ind3x, did you use a bare-metal installation without a WiFi connection?

I see:

Proxmox on Windows 10 (via Hyper-V)

I have the same problem with VirtualBox 6.1.8.
 
Hello Ind3x, did you use a bare-metal installation without a WiFi connection?

I see:

Proxmox on Windows 10 (via Hyper-V)

I have the same problem with VirtualBox 6.1.8.

Hey,

I'm running Windows 10 on the bare metal, and Hyper-V is running on that Win10. In Hyper-V I created a VM for the Proxmox installation. Later on I want to have an actual bare-metal server with Proxmox on it, but I'm still waiting for some hardware.

The machine has both LAN and WLAN, but I'm connected via LAN.
 
The Ubuntu LXC doesn't have an /etc/network/interfaces file. Is that needed, and could that be the issue?

no, it's not needed; we mostly use systemd-networkd in containers.

have you tried turning off the firewall setting for the container's network interface?
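on the CLI that's the firewall flag in the net0 line; reusing the exact config you posted for CTID 100, it would look like the sketch below, and networkctl inside the container shows what systemd-networkd actually applied:
Code:
# on the Proxmox host: same net0 line as before, only firewall=1 changed to firewall=0
pct set 100 --net0 'name=eth0,bridge=vmbr0,firewall=0,gw=192.168.178.1,hwaddr=CA:69:CD:6D:DF:C4,ip=192.168.178.50/24,type=veth'

# inside the container: check the state systemd-networkd applied to eth0
networkctl status eth0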
 
no, it's not needed; we mostly use systemd-networkd in containers.

have you tried turning off the firewall setting for the container's network interface?

I just tried setting "Firewall" to "No" under Firewall > Options for the Datacenter, the host and the LXC. I also restarted everything, but sadly it made no difference.
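For completeness, the current state can be double-checked from the Proxmox host like this (a sketch; pve-firewall status just shows whether the firewall service is running at all):
Code:
# check whether the NIC-level flag is still set (firewall=1) or cleared
pct config 100 | grep net0

# overall firewall state on the host
pve-firewall status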