Proxmox 4 Upgrade LXC only IPv6

ZeroPoke

Oct 16, 2015
I recently upgraded to Proxmox 4 from 3.4.

After doing so, converting my OpenVZ containers to LXC, and updating all the network configs, my containers only have an IPv6 address when I view `ip addr list`.

[I had tried to put the ip addr list output here but apparently something about links or videos or images *shrugs*]

I believe this is why my containers can only talk to each other and the host, but not the rest of the network, since the network isn't running IPv6 at this time.

I've run out of google-fu, and any help would be appreciated. Sorry there isn't much info; I don't know what might be needed for this.
 
Paste the config file of one such container (pct config $ID) and its internal network config (e.g. /etc/network/interfaces on Debian), and ideally also the outputs of `ip link`, `ip addr`, and `ip route`, so we at least have something to go on ;)
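For reference, assuming the container ID is 100 (adjust to your actual ID), that information can be gathered roughly like this, using pct from the Proxmox host:

```shell
# On the Proxmox host: the container's Proxmox-side config
pct config 100

# Run the remaining commands inside the container via pct exec
pct exec 100 -- cat /etc/network/interfaces
pct exec 100 -- ip link
pct exec 100 -- ip addr
pct exec 100 -- ip route
```

`pct enter 100` also works if you'd rather poke around from a shell inside the container.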
 
Thanks for replying.

pct config 100

Code:
arch: amd64
cpulimit: 2
cpuunits: 1024
hostname: MySQL.psc.horizon.com
memory: 2048
net0: bridge=vmbr0,gw=192.168.2.244,hwaddr=C6:BD:DF:69:82:AC,ip=192.168.2.21/24,name=eth0,type=veth
ostype: ubuntu
rootfs: local:100/vm-100-disk-1.raw,size=16G
swap: 10240
/etc/network/interfaces

Code:
auto lo
iface lo inet loopback

iface eth0 inet manual

auto eth1
#iface eth1 inet auto

auto vmbr0
iface vmbr0 inet static
        address  192.168.2.20
        netmask  255.255.255.0
        gateway  192.168.2.244
        bridge_ports eth1
        bridge_stp off
        bridge_fd 0
ip link
Code:
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: eth0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN mode DEFAULT group default qlen 1000
    link/ether 00:19:b9:ee:36:92 brd ff:ff:ff:ff:ff:ff
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq master vmbr0 state UP mode DEFAULT group default qlen 1000
    link/ether 00:19:b9:ee:36:94 brd ff:ff:ff:ff:ff:ff
4: vmbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default
    link/ether 00:19:b9:ee:36:94 brd ff:ff:ff:ff:ff:ff
ip addr
Code:
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether 00:19:b9:ee:36:92 brd ff:ff:ff:ff:ff:ff
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq master vmbr0 state UP group default qlen 1000
    link/ether 00:19:b9:ee:36:94 brd ff:ff:ff:ff:ff:ff
4: vmbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
    link/ether 00:19:b9:ee:36:94 brd ff:ff:ff:ff:ff:ff
    inet 192.168.2.20/24 brd 192.168.2.255 scope global vmbr0
       valid_lft forever preferred_lft forever
    inet6 fe80::219:b9ff:feee:3694/64 scope link
       valid_lft forever preferred_lft forever
ip route
Code:
default via 192.168.2.244 dev vmbr0
192.168.2.0/24 dev vmbr0  proto kernel  scope link  src 192.168.2.20
 
Those outputs all appear to be from the host? What about from inside the container? That the host's network was working was my assumption anyway, as you'd probably report a different problem otherwise ;)
 
Yeah, that was the host. Sorry, I always answer first thing in the morning.

/etc/network/interfaces
Code:
auto lo
iface lo inet loopback

auto venet0:0
iface venet0:0 inet static
        address 192.168.2.25
        netmask 255.255.255.255

auto eth0
iface eth0 inet static
        address 192.168.2.25
        netmask 255.255.255.0
        gateway 192.168.2.244
ip link
Code:
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
7: eth0@if8: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP mode DEFAULT group default qlen 1000
    link/ether c6:e4:bc:82:47:d7 brd ff:ff:ff:ff:ff:ff

ip addr
Code:
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
7: eth0@if8: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether c6:e4:bc:82:47:d7 brd ff:ff:ff:ff:ff:ff
    inet 192.168.2.25/24 brd 192.168.2.255 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::c4e4:bcff:fe82:47d7/64 scope link
       valid_lft forever preferred_lft forever

ip route
Code:
default via 192.168.2.244 dev eth0
192.168.2.0/24 dev eth0  proto kernel  scope link  src 192.168.2.25
 
You probably want to remove the venet stanza from your interface config, as with PVE 4 there is no venet anymore.
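With that stanza dropped, the container's /etc/network/interfaces you posted would look something like this (just a sketch based on your config):

```
auto lo
iface lo inet loopback

auto eth0
iface eth0 inet static
        address 192.168.2.25
        netmask 255.255.255.0
        gateway 192.168.2.244
```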

Is the firewall enabled? Did you add any rules to it?
When you listed the host's addresses in the other post I assume the container wasn't running. Can you check that, when it is running, there's a veth${VMID}i0 line with "master vmbr0" in its ip link output?

Can you use tcpdump on the host's vmbr0 and veth${VMID}i0, and on the container's eth0, to see where the packets get lost?
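Something along these lines, again assuming container ID 100 and the 192.168.2.25 address from your config (adjust the interface names and IP to match your setup):

```shell
# On the host: watch ARP traffic for the container's IP on the bridge
tcpdump -ni vmbr0 arp and host 192.168.2.25

# On the host: the same on the container's veth endpoint
tcpdump -ni veth100i0 arp and host 192.168.2.25

# Inside the container: watch its own eth0
tcpdump -ni eth0 arp
```

If ARP requests for the container's IP show up on vmbr0 but never on veth100i0 (or vice versa), that tells us which hop is dropping them.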
 
I removed it. As far as I can tell the firewall is not running, and I did not add any rules. The veth interfaces do have master vmbr0 in them.

I noticed something while trying to figure out how to tcpdump. If I left it running on an interface, after a little bit that VM would get network access and take it away from the other VM.

Code:
root@test2:~# tcpdump -i eth0
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on eth0, link-type EN10MB (Ethernet), capture size 65535 bytes
13:48:46.839710 ARP, Request who-has 192.168.2.200 tell 192.168.2.233, length 46
13:48:46.839728 ARP, Request who-has 192.168.2.202 tell 192.168.2.233, length 46
13:48:46.839733 ARP, Request who-has 192.168.2.232 tell 192.168.2.233, length 46
13:48:46.839738 ARP, Request who-has 192.168.2.237 tell 192.168.2.233, length 46
13:48:46.839743 ARP, Request who-has 192.168.2.247 tell 192.168.2.233, length 46
13:48:47.299956 STP 802.1w, Rapid STP, Flags [Learn, Forward], bridge-id 8002.00:1f:c9:0b:e7:00.8034, length 42
13:48:47.343524 STP 802.1w, Rapid STP, Flags [Learn, Forward], bridge-id 8003.00:1f:c9:0b:e7:00.8034, length 42

All the other tcpdump captures had tons of data going by, and I didn't know what I was looking for.

After all that, I tried installing a fresh copy of Proxmox 4 with fresh LXC CTs and hit all the same issues. Then I installed XenServer on another machine, and none of my VMs have network access, so I'm starting to believe this might be an issue with my network. But that wouldn't explain why it was working before.
 
