KVM Guest IPv6 issues

Hi Guys

We have recently upgraded our cluster from 3.4 to 4.2, and we are now seeing an issue that was not present in PVE 3.4 with the v3 kernel. See below for the details of the PVE 4.2 setup.

We have never had any public IPs or IPv6 addresses on the PVE cluster itself; these are only on the VM guest side.

So here is the issue we see: I have an IPv6 range xxxx:xxxx:0:1::/64 on my VMs and an external router. From all the machines I can ping external IPs and the router, but I cannot ping any of the other VMs.

But if I disable and re-enable the interface on the guest, or do an ifdown/ifup of the interface on the Linux VMs, then I can ping between the VMs. If I check again after about 30 minutes, however, I cannot ping anymore until I bring the interfaces down and up again.
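
For reference, this is roughly how I test it from VM1 (addresses masked the same way as below; these are just the commands I run on the Debian guests):

# ping6 -c 3 xxxx:xxxx:0:1::1
# ping6 -c 3 xxxx:xxxx:0:1::5
# ip -6 neigh show

The ping to the router always answers, the ping to the other VM is the one that stops working after a while, and ip -6 neigh is just to see what state the neighbour entries are in at that point.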

We never had this issue on PVE 3.4.

Any suggestions?

Router: xxxx:xxxx:0:1::1/64 (external router)

VM1: xxxx:xxxx:0:1::4/64 (Debian 8.4)
VM2: xxxx:xxxx:0:1::5/64 (Debian 8.4)
VM3: xxxx:xxxx:0:1::8/64 (Debian 8.4)
VM4: xxxx:xxxx:0:1::14/64 (Windows 2008 R2)


# pveversion --verbose
proxmox-ve: 4.2-56 (running kernel: 4.4.13-1-pve)
pve-manager: 4.2-15 (running version: 4.2-15/6669ad2c)
pve-kernel-4.4.6-1-pve: 4.4.6-48
pve-kernel-4.4.13-1-pve: 4.4.13-56
lvm2: 2.02.116-pve2
corosync-pve: 2.3.5-2
libqb0: 1.0-1
pve-cluster: 4.0-42
qemu-server: 4.0-83
pve-firmware: 1.1-8
libpve-common-perl: 4.0-70
libpve-access-control: 4.0-16
libpve-storage-perl: 4.0-55
pve-libspice-server1: 0.12.5-2
vncterm: 1.2-1
pve-qemu-kvm: 2.5-19
pve-container: 1.0-70
pve-firewall: 2.0-29
pve-ha-manager: 1.0-32
ksm-control-daemon: 1.2-1
glusterfs-client: 3.5.2-2+deb8u2
lxc-pve: 1.1.5-7
lxcfs: 2.0.0-pve2
cgmanager: 0.39-pve1
criu: 1.6.0-1
zfsutils: 0.6.5.7-pve10~bpo80
# cat /etc/hosts
127.0.0.1 localhost.localdomain localhost
10.255.17.11 pve01.converged.local pve01 pvelocalhost
10.255.17.12 pve02.converged.local pve02
10.255.17.13 pve03.converged.local pve03
# The following lines are desirable for IPv6 capable hosts
::1 ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
ff02::3 ip6-allhosts
 
Hi Guys

Can anybody assist with this issue? I have looked at all the configs and I cannot see why this is happening.

What is strange to me is why everything works fine for about 30 minutes after an ifdown/ifup of the IPv6 interface and then simply stops working.
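
To be clear, the workaround I keep applying on the Debian guests is nothing more than this (eth0 being the guest interface name in my case):

# ifdown eth0 && ifup eth0

After that, IPv6 between the VMs works again for roughly 30 minutes and then stops.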

I would really appreciate it if someone could help. I do not want to reinstall my servers as a PVE 3.4 cluster, but at the moment that looks to me like the only solution.

Regards
 
Hi Guys

I have done a test, and it seems the issue is related to the PVE 4.2 Linux bridge.

I took the 3rd server in my cluster and reconfigured it to use Open vSwitch. I then migrated two of the VMs mentioned in the first post to the server running Open vSwitch, and the IPv6 traffic flows as expected.
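
For the record, the moves were just normal live migrations to the node now running Open vSwitch (pve03 in my case); the VMIDs here are placeholders:

# qm migrate 104 pve03 --online
# qm migrate 105 pve03 --online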

I hope we can find out whether this is a bug or whether I have misconfigured something when using Linux bridges. Please see below the config for the non-working Linux bridge and for the working Open vSwitch setup.

LINUX BRIDGE
# PVE will preserve these directives, but will NOT read its network
# configuration from sourced files, so do not attempt to move any of
# the PVE managed interfaces into external files!

auto lo
iface lo inet loopback

iface eth0 inet manual

# Storage
auto eth0.18
iface eth0.18 inet static
    address 10.255.18.3
    netmask 255.255.255.240

iface eth1 inet manual

# Corosync
auto eth2
iface eth2 inet static
    address 10.255.17.13
    netmask 255.255.255.240

iface eth3 inet manual

auto vmbr0
iface vmbr0 inet static
    address 10.254.1.53
    netmask 255.255.255.0
    gateway 10.254.1.1
    bridge_ports eth0
    bridge_stp off
    bridge_fd 0
    bridge_vlan_aware yes
    pre-up ifconfig eth0 mtu 9000
    pre-up ifconfig eth0.18 mtu 9000
    pre-up ifconfig eth1 mtu 9000
    pre-up ifconfig eth2 mtu 9000
    pre-up ifconfig eth3 mtu 9000
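
For completeness, these are the checks I run on the PVE host to verify the Linux bridge setup (output omitted). The multicast snooping entry is just something I looked at in case it is relevant to IPv6 neighbour discovery:

# brctl show vmbr0
# bridge vlan show
# cat /sys/class/net/vmbr0/bridge/multicast_snooping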

OPEN vSWITCH

# Loopback interface
auto lo
iface lo inet loopback
    pre-up ifconfig eth0 mtu 9000
    pre-up ifconfig eth1 mtu 9000
    pre-up ifconfig eth2 mtu 9000
    pre-up ifconfig eth3 mtu 9000


# Bridge for our eth0 physical 10G interfaces and vlan virtual interfaces (our VMs will
# also attach to this bridge)
auto vmbr0
allow-ovs vmbr0
iface vmbr0 inet manual
    ovs_type OVSBridge
    # NOTE: we MUST mention eth0, vlan1, and vlan18 even though each
    # of them lists ovs_bridge vmbr0! Not sure why it needs this
    # kind of cross-referencing but it won't work without it!
    ovs_ports eth0 vlan1 vlan18
    mtu 9000
# Physical interface for traffic coming into the system. Retag untagged
# traffic into vlan 1, but pass through other tags.
auto eth0
allow-vmbr0 eth0
iface eth0 inet manual
    ovs_bridge vmbr0
    ovs_type OVSPort
    mtu 9000
    # Alternatively if you want to also restrict what vlans are allowed through
    # you could use:
    # ovs_options tag=1 vlan_mode=native-untagged trunks=10,20,30,40
    # ovs_options vlan_mode=trunk

# Virtual interface to take advantage of originally untagged traffic
allow-vmbr0 vlan1
iface vlan1 inet static
    ovs_type OVSIntPort
    ovs_bridge vmbr0
    #ovs_options tag=1
    address 10.254.1.53
    netmask 255.255.255.0
    gateway 10.254.1.1
    mtu 9000

# Storage cluster communication vlan (jumbo frames)
allow-vmbr0 vlan18
iface vlan18 inet static
    ovs_type OVSIntPort
    ovs_bridge vmbr0
    ovs_options tag=18
    address 10.255.18.3
    netmask 255.255.255.240
    mtu 9000

# Bridge for our eth2 physical 1G interfaces and vlan virtual interfaces, for corosync
auto vmbr17
allow-ovs vmbr17
iface vmbr17 inet manual
    ovs_type OVSBridge
    # NOTE: we MUST mention eth2, vlan17 even though each
    # of them lists ovs_bridge vmbr17! Not sure why it needs this
    # kind of cross-referencing but it won't work without it!
    ovs_ports eth2 vlan17
    mtu 9000

# Physical interface for traffic coming into the system. Retag untagged
# traffic into vlan 17, but pass through other tags.
auto eth2
allow-vmbr17 eth2
iface eth2 inet manual
    ovs_bridge vmbr17
    ovs_type OVSPort

# Virtual interface to take advantage of originally untagged traffic
allow-vmbr17 vlan17
iface vlan17 inet static
    ovs_type OVSIntPort
    ovs_bridge vmbr17
    #ovs_options tag=17
    address 10.255.17.13
    netmask 255.255.255.240
    mtu 9000
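
On the Open vSwitch node I only did a quick sanity check that the bridges and ports came up as configured (output omitted):

# ovs-vsctl show
# ovs-vsctl list-ports vmbr0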
 
